Virtual network performance issue? (1.2Gbit/sec)


skywise_ca

Dabbler
Joined
Jun 22, 2017
Messages
15
TL;DR: What speed should a virtio link between a VM and FreeNAS be? (1.2 Gbit/sec seems slow.)

I'm looking at combining my current NAS (nas4free) and ESX server into a single FreeNAS 11 box hosting both VMs and NAS duties.

I used my current ESX box (Opteron 6378 with 64 GB RAM) as a testbed so I could benchmark FreeNAS, installing a fresh 11.0-RELEASE on it.
I installed an Ubuntu 16.04 server VM (4 cores, 4 GB RAM) on it, using virtio for both disk and network.

Using iperf3, I tested the network speed between the Ubuntu VM and the FreeNAS host:
Code:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.43 GBytes  1.23 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  1.43 GBytes  1.23 Gbits/sec                  receiver
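
(For anyone wanting to reproduce this, the commands are basically the iperf3 defaults; something like the following, with the FreeNAS host's IP as a placeholder.)
Code:
# on the FreeNAS host: start an iperf3 server (listens on port 5201 by default)
iperf3 -s

# on the Ubuntu VM: run the client against the FreeNAS host's IP for 10 seconds
# (192.168.x.x is a placeholder; substitute your host's address)
iperf3 -c 192.168.x.x -t 10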

The box has a 10G card in it, linking to the nas4free box, and iperf3 between those two gives:
Code:
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  receiver

So, the hardware itself has lots of punch to push the bits.
(I tested the emulated Intel NIC as well; as expected, it was worse.)

I also tested CPU performance between ESX and bhyve, and that came out pretty good:
ESX: https://browser.geekbench.com/v4/cpu/3196655
bhyve: https://browser.geekbench.com/v4/cpu/3195634

I didn't test the disk, but since it's ZFS I'd guess it'd be pretty solid.

My only issue is the slow network.
What are others getting between their VMs and the FreeNAS host?
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
This is not related to your topic!

After seeing your benchmark, I got very curious about the Geekbench system, so I ran a Geekbench Docker container on my RancherOS box for some general benchmarking:
docker run chrisdaish/geekbench
http://browser.primatelabs.com/geekbench3/8389324
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
OK, after reading up on what information you need, here are some notes on my setup:

Code:
FreeNAS 11 <-> VM-bhyve-RancherOS-Server <-> Docker-containers
192.168.0.2 <-> 192.168.0.16 <-> 172.17.0.4


Benchmark within RancherOS, between 2 Docker containers:
Code:
[rancher@rancher ~]$ docker run  -it --rm networkstatic/iperf3 -c 172.17.0.4
Connecting to host 172.17.0.4, port 5201
[  4] local 172.17.0.5 port 38650 connected to 172.17.0.4 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  4.62 GBytes  39.7 Gbits/sec  315    742 KBytes
[  4]   1.00-2.00   sec  4.73 GBytes  40.6 Gbits/sec    0    841 KBytes
[  4]   2.00-3.00   sec  4.90 GBytes  42.1 Gbits/sec    0    882 KBytes
[  4]   3.00-4.00   sec  5.03 GBytes  43.2 Gbits/sec    1    882 KBytes
[  4]   4.00-5.00   sec  5.15 GBytes  44.2 Gbits/sec    1    882 KBytes
[  4]   5.00-6.00   sec  5.20 GBytes  44.7 Gbits/sec    0    882 KBytes
[  4]   6.00-7.00   sec  5.34 GBytes  45.9 Gbits/sec    0    882 KBytes
^C[  4] 7.00-7.73   sec  3.92 GBytes  46.1 Gbits/sec    0    882 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-7.73   sec  38.9 GBytes  43.2 Gbits/sec  317             sender
[  4]   0.00-7.73   sec  0.00 Bytes   0.00 bits/sec                   receiver



Benchmark from FreeNAS to the iperf container in the RancherOS VM (iperf, not iperf3):
Code:
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 172.17.0.4 port 5001 connected with 192.168.0.2 port 20022
[ ID] Interval	   Transfer	 Bandwidth
[  4]  0.0-10.0 sec  9.55 GBytes  8.20 Gbits/sec


Does this help?

For reference:
Code:
docker run --restart=unless-stopped --name=iperf -d -p 5001:5001 mlabbe/iperf 
iperf -c IP.IP.IP.IP
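
If you want iperf3 on both ends instead of mixing iperf and iperf3, the same networkstatic/iperf3 image from above should also work as the server (a sketch; iperf3 listens on port 5201 by default):
Code:
# server side: run the iperf3 container in server mode on the default port
docker run -d --name=iperf3-srv -p 5201:5201 networkstatic/iperf3 -s

# client side: point an iperf3 client at the server's IP (IP.IP.IP.IP is a placeholder)
docker run -it --rm networkstatic/iperf3 -c IP.IP.IP.IP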
 

skywise_ca

Dabbler
Joined
Jun 22, 2017
Messages
15
OK, so good speeds are possible.
The one difference is that you're using Docker and I was using bhyve (not sure what the difference is inside, though).
That transfer between FreeNAS and your Docker instance... are you connecting to the main interface/IP of the FreeNAS host? (The same IP you normally reach FreeNAS at from within your network.)

I'll have to put the test FreeNAS system back up and do some more testing.

Thanks for testing for me!
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371
So the Docker host VM (RancherOS) is the only VM that I have running, so I had to improvise.

But it's about 10 Gbit/s between FreeNAS and the VM in general. The connection speed between running containers (little VMs within the VM) is about 40 Gbit/s. I am not sure what the theoretical values would be.
 

skywise_ca

Dabbler
Joined
Jun 22, 2017
Messages
15
OK, I've got my testbed running again and installed 2 VMs so I could test between them.
I can only get around 2 Gbit/sec between them.
So, if my reading of Docker on FreeNAS is right, you're running RancherOS as a bhyve VM and then Docker containers inside of that?
If so, my speeds shouldn't be (much) different from yours, just a bit lower because of general CPU performance.
Sooo, I'm doing something wrong.

I've tried running 'vm' commands from FreeNAS itself, and it's complaining that the VM system isn't enabled in the rc. I think I'm running into instructions for older versions of VM support in FreeNAS; things have changed with FreeNAS 11.
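
The 'vm' command I was trying looks like it's the vm-bhyve port, which apparently wants something like this in /etc/rc.conf (just a guess on my part; FreeNAS 11's GUI VMs are handled by the middleware rather than vm-bhyve, so this may simply not apply):
Code:
# vm-bhyve rc.conf entries (pool/vm is an example dataset name)
vm_enable="YES"
vm_dir="zfs:pool/vm"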

Something else I've noticed: when I start the first VM, FreeNAS itself goes completely offline for 10-20 seconds, while it's setting up the bridge/tap interfaces, I think. Is that normal?
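
(If anyone wants to watch the same thing, something like this shows the bridge/tap interfaces before and after the VM starts; bridge0 is just an example name, yours may differ.)
Code:
# list any bridge/tap interfaces that currently exist
ifconfig -a | grep -E 'bridge|tap'

# show the member interfaces of a specific bridge (bridge0 is an example name)
ifconfig bridge0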
 

Zwck

Patron
Joined
Oct 27, 2016
Messages
371

Sorry for the late reply, I was on vacation for the last 2 weeks.

Yes, I run RancherOS + Rancher as an administrative tool + containers, and tested within this environment. I have not done any tests from a different VM to RancherOS; I have another Ubuntu VM up and running, so I can check that as well. I don't think you're doing anything wrong, tbh.
 