Interpreting and Understanding iperf Results

I have recently built an ESXi FreeNAS all-in-one (AIO) machine running on an HP ML150 G6. Aside from hating its physical internal storage configuration, it has been a fairly wonderful setup.

Now I am at the point of performance testing and benchmarking the configuration, and I am having a hard time interpreting some basic iperf results I have generated. Before I present the results, let me first describe the configuration of each VM.

Ubuntu 12.04.3 LTS
1 vCPU
1024 MB of RAM
2 VMXNET3 vNICs [172.16.0.3]

FreeNAS-9.1.1-RELEASE-x64 (a752d35)
2 vCPU
12268 MB of RAM
2 VMXNET3 vNICs [172.16.0.2]

There are two networks on the ESXi host, and both VMs are connected to each of them. The network I am profiling with iperf is purely virtual (i.e., no physical NICs or switches involved) and lives on the 172.16.0.0/12 subnet.

I am having trouble reconciling the extremely asymmetric TX/RX throughput between the two VMs.

In this iperf run the server is on Ubuntu. As I understand it, this measures TX from FreeNAS to Ubuntu.

Code:
# iperf -t 120 -i 10 -c 172.16.0.3
------------------------------------------------------------
Client connecting to 172.16.0.3, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.2 port 46452 connected with 172.16.0.3 port 5001
[ ID] Interval      Transfer    Bandwidth
[  3]  0.0-10.0 sec  9.43 GBytes  8.10 Gbits/sec
[  3] 10.0-20.0 sec  10.1 GBytes  8.67 Gbits/sec
[  3] 20.0-30.0 sec  10.5 GBytes  9.04 Gbits/sec
[  3] 30.0-40.0 sec  10.0 GBytes  8.63 Gbits/sec
[  3] 40.0-50.0 sec  10.7 GBytes  9.16 Gbits/sec
[  3] 50.0-60.0 sec  10.6 GBytes  9.10 Gbits/sec
[  3] 60.0-70.0 sec  10.6 GBytes  9.07 Gbits/sec
[  3] 70.0-80.0 sec  10.1 GBytes  8.69 Gbits/sec
[  3] 80.0-90.0 sec  10.5 GBytes  9.02 Gbits/sec
[  3] 90.0-100.0 sec  8.82 GBytes  7.57 Gbits/sec
[  3] 100.0-110.0 sec  10.1 GBytes  8.72 Gbits/sec
[  3] 110.0-120.0 sec  10.6 GBytes  9.07 Gbits/sec
[  3]  0.0-120.0 sec  122 GBytes  8.74 Gbits/sec


In this iperf run the server is on FreeNAS. As I understand it, this measures TX from Ubuntu to FreeNAS.

Code:
$ iperf -t 120 -i 10 -c 172.16.0.2
------------------------------------------------------------
Client connecting to 172.16.0.2, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[  3] local 172.16.0.3 port 53078 connected with 172.16.0.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.55 GBytes  1.33 Gbits/sec
[  3] 10.0-20.0 sec  1.62 GBytes  1.39 Gbits/sec
[  3] 20.0-30.0 sec  1.55 GBytes  1.34 Gbits/sec
[  3] 30.0-40.0 sec  1.72 GBytes  1.48 Gbits/sec
[  3] 40.0-50.0 sec  1.63 GBytes  1.40 Gbits/sec
[  3] 50.0-60.0 sec  1.56 GBytes  1.34 Gbits/sec
[  3] 60.0-70.0 sec  1.53 GBytes  1.32 Gbits/sec
[  3] 70.0-80.0 sec  1.62 GBytes  1.40 Gbits/sec
[  3] 80.0-90.0 sec  1.58 GBytes  1.35 Gbits/sec
[  3] 90.0-100.0 sec  1.48 GBytes  1.27 Gbits/sec
[  3] 100.0-110.0 sec  1.75 GBytes  1.50 Gbits/sec
[  3] 110.0-120.0 sec  1.50 GBytes  1.29 Gbits/sec
[  3]  0.0-120.0 sec  19.1 GBytes  1.37 Gbits/sec


As you can see, the overall network performance is respectable; even in the worst case (Ubuntu to FreeNAS) the average throughput is 1.37 Gbits/sec.

All that said, my question: why the ~6x difference in throughput depending on the direction of transmission? Is it purely down to the resource allocation difference between the two VMs? Other than simply adding more RAM and vCPUs to the Ubuntu guest, is there a way to pin down the cause (other utilities, logs, etc.)?
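
In case it helps clarify what I mean by "other utilities": the only follow-up tests I have thought of so far are iperf's built-in bidirectional modes, which show both directions from a single run. These are standard iperf 2 flags, but I have not run them on this setup yet, so treat this as a sketch.

Code:
# Tradeoff mode (-r): run the test in one direction, then the other
$ iperf -c 172.16.0.2 -t 60 -i 10 -r

# Dual mode (-d): run both directions at the same time
$ iperf -c 172.16.0.2 -t 60 -i 10 -d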
 

cyberjock

Here's some info that may help:

1. Notice that the TCP window size differs between your two runs. That alone can have a big impact on the bandwidth you appear to get (see the example after this list).
2. Since you are running VMs, treating these numbers as solid is not the best idea in the world. CPU usage from other VMs and the interaction between the guests and ESXi can have a very profound effect on the numbers you actually get.
3. You need to keep in mind how vCPUs are allocated. Having 2 vCPUs assigned versus 1 or 3 can make your benchmark numbers totally inaccurate relative to the system load at the time, and you can't trust that you aren't CPU-bottlenecked just because ESXi says CPU usage is 3%.
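
For example, to take the default window size out of the equation, you could force the same explicit window in both directions and optionally add parallel streams, then watch top inside both guests while the test runs instead of trusting the ESXi CPU summary. These are standard iperf 2 options; the 256K window is just an example value, so tune it for your setup.

Code:
# On the receiving VM, start the server with an explicit TCP window:
iperf -s -w 256K

# From the sending VM, request the same window; -P 4 adds four parallel
# streams to help rule out a single-stream bottleneck:
iperf -c 172.16.0.2 -t 120 -i 10 -w 256K -P 4

If the asymmetry shrinks once both sides use the same window, you have your answer; if it doesn't, start looking at CPU usage inside the guests and at the VMXNET3 driver side of things.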

Lastly, and you'll hate to hear this, but we really don't provide much support for virtualized FreeNAS boxes (especially regarding performance or data recovery). Virtualizing requires finesse around your hardware and what the VMs need, and generally speaking it's not something that can be worked out in a forum. Virtualization abstracts away a whole bunch of potential issues, and a simple misconfiguration on your part can cause all sorts of weird and wacky performance and reliability problems. So the simple solution? We tell people "good luck". Usually the issue is non-obvious (as appears to potentially be the case here), and since you aren't running on bare metal the list of possible causes is very long.
 