SOLVED 10GbE FreeNAS servers are constrained when receiving network data


Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Okay, so the binary isn't compatible. If you want to try the vmxnet3 driver you'll need to compile it from source against the version of FreeNAS you're using. If you follow the instructions that I linked earlier in the thread, you can do the compiling in a VirtualBox VM on your desktop, and once you have the kernel-specific .ko built, copy it into the FreeNAS VM you're working on.
After more tinkering, I was able to load the stock VMware Tools v10.1.5 vmxnet3.ko module. Results were relatively unchanged for most of my test cases, though FreeNAS-to-FreeNAS bandwidth did improve a little bit:
Code:
iperf server    iperf client    Bandwidth
--------------  --------------  -----------------
ubuntu(falcon)  boomer(felix)   9.30 Gbits/second
bandit(falcon)  ubuntu(felix)   3.06 Gbits/second
bandit(falcon)  boomer(felix)   3.24 Gbits/second

ubuntu(felix)   bandit(falcon)  9.38 Gbits/second
boomer(felix)   ubuntu(falcon)  7.72 Gbits/second
boomer(felix)   bandit(falcon)  5.67 Gbits/second

ubuntu(falcon)  ubuntu(felix)   9.18 Gbits/second
ubuntu(felix)   ubuntu(falcon)  9.08 Gbits/second
The interfaces came up as vmx3f0 and vmx3f1.

Last night I set up a FreeBSD VM for building FreeNAS; my next step will be to compile the vmxnet3 driver on it and see if it makes any difference... but honestly I doubt that it will. I'm hoping the move to FreeBSD 11 in the next release of FreeNAS will fix things up.
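For anyone who wants to try the same thing, here's a rough sketch of the load-and-verify side of it; the module itself still has to be built on the FreeBSD VM against sources matching the FreeNAS kernel, and the path below is just the conventional spot for third-party modules, not anything FreeNAS-specific:

Code:
# on the FreeNAS VM -- note the exact kernel the module must match
uname -r

# after building vmxnet3.ko on the FreeBSD build VM, copy it over, then:
cp vmxnet3.ko /boot/modules/          # conventional location for third-party modules
kldload /boot/modules/vmxnet3.ko      # load it now
kldstat | grep vmxnet3                # confirm it loaded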
 

Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41
Hey,

So let's take a step back here and remove the "network" part as much as we can.

If you have both FreeNAS servers on the same host and run iperf between them, what do you get?

This should just be on the same vSwitch/subnet and will be limited by how fast the host can move bytes around in memory.
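Something like this from the shell of each FreeNAS VM would do it (the addresses here are placeholders for the two VMs on the shared vSwitch):

Code:
# on FreeNAS VM #1 (the receiver)
iperf -s -fg

# on FreeNAS VM #2 (the sender), pointing at VM #1's address
iperf -c 10.0.0.1 -t60 -i5 -fg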


In the meantime, I'll set up some FreeNAS VMs here to see if it's a physical constraint that is affecting all of us.
 

Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41
Okay so here is my testing between hosts.

TL;DR - it's not your hardware; I think there is a bug in FreeNAS somewhere.


CentOS-to-CentOS iperf with your commands:

Code:
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 5.0 sec  4.84 GBytes  8.32 Gbits/sec
[  3]  5.0-10.0 sec  5.35 GBytes  9.19 Gbits/sec
[  3] 10.0-15.0 sec  5.41 GBytes  9.29 Gbits/sec
[  3] 15.0-20.0 sec  5.41 GBytes  9.30 Gbits/sec
[  3] 20.0-25.0 sec  5.28 GBytes  9.07 Gbits/sec
[  3] 25.0-30.0 sec  5.42 GBytes  9.30 Gbits/sec
[  3] 30.0-35.0 sec  5.41 GBytes  9.29 Gbits/sec
[  3] 35.0-40.0 sec  5.43 GBytes  9.33 Gbits/sec
[  3] 40.0-45.0 sec  5.42 GBytes  9.32 Gbits/sec
[  3] 45.0-50.0 sec  5.41 GBytes  9.29 Gbits/sec
[  3] 50.0-55.0 sec  5.36 GBytes  9.21 Gbits/sec
[  3] 55.0-60.0 sec  5.42 GBytes  9.30 Gbits/sec
[  3]  0.0-60.0 sec  64.2 GBytes  9.18 Gbits/sec




So we're sure that the hardware can do 9+ Gbps.


Now, with a FreeNAS VM as the server and the CentOS VM as the client, I get this (FreeNAS 9.10.2-U3, 4 vCPUs, 48 GB RAM on each machine):

Code:
[  3] local 172.16.159.6 port 60014 connected with 172.16.159.70 port 5001
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 5.0 sec  2.03 GBytes  3.49 Gbits/sec
[  3]  5.0-10.0 sec  2.01 GBytes  3.45 Gbits/sec
[  3] 10.0-15.0 sec  1.99 GBytes  3.42 Gbits/sec
[  3] 15.0-20.0 sec  1.99 GBytes  3.42 Gbits/sec
[  3] 20.0-25.0 sec  2.07 GBytes  3.55 Gbits/sec
[  3] 25.0-30.0 sec  2.04 GBytes  3.50 Gbits/sec
[  3] 30.0-35.0 sec  1.96 GBytes  3.36 Gbits/sec
[  3] 35.0-40.0 sec  1.99 GBytes  3.42 Gbits/sec
[  3] 40.0-45.0 sec  1.90 GBytes  3.26 Gbits/sec
[  3] 45.0-50.0 sec  2.05 GBytes  3.51 Gbits/sec
[  3] 50.0-55.0 sec  1.99 GBytes  3.41 Gbits/sec
[  3] 55.0-60.0 sec  1.98 GBytes  3.41 Gbits/sec
[  3]  0.0-60.0 sec  24.0 GBytes  3.43 Gbits/sec



PS - I tested a CentOS host on the same ESXi server and on a separate one to rule out any network issues.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Okay so here is my testing between hosts.

TL;DR - it's not your hardware; I think there is a bug in FreeNAS somewhere.


CentOS-to-CentOS iperf with your commands:

Code:
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 5.0 sec  4.84 GBytes  8.32 Gbits/sec
[  3]  5.0-10.0 sec  5.35 GBytes  9.19 Gbits/sec
[  3] 10.0-15.0 sec  5.41 GBytes  9.29 Gbits/sec
[  3] 15.0-20.0 sec  5.41 GBytes  9.30 Gbits/sec
[  3] 20.0-25.0 sec  5.28 GBytes  9.07 Gbits/sec
[  3] 25.0-30.0 sec  5.42 GBytes  9.30 Gbits/sec
[  3] 30.0-35.0 sec  5.41 GBytes  9.29 Gbits/sec
[  3] 35.0-40.0 sec  5.43 GBytes  9.33 Gbits/sec
[  3] 40.0-45.0 sec  5.42 GBytes  9.32 Gbits/sec
[  3] 45.0-50.0 sec  5.41 GBytes  9.29 Gbits/sec
[  3] 50.0-55.0 sec  5.36 GBytes  9.21 Gbits/sec
[  3] 55.0-60.0 sec  5.42 GBytes  9.30 Gbits/sec
[  3]  0.0-60.0 sec  64.2 GBytes  9.18 Gbits/sec




So we're sure that the hardware can do 9+ Gbps.


Now, with a FreeNAS VM as the server and the CentOS VM as the client, I get this (FreeNAS 9.10.2-U3, 4 vCPUs, 48 GB RAM on each machine):

Code:
[  3] local 172.16.159.6 port 60014 connected with 172.16.159.70 port 5001
[ ID] Interval	   Transfer	 Bandwidth
[  3]  0.0- 5.0 sec  2.03 GBytes  3.49 Gbits/sec
[  3]  5.0-10.0 sec  2.01 GBytes  3.45 Gbits/sec
[  3] 10.0-15.0 sec  1.99 GBytes  3.42 Gbits/sec
[  3] 15.0-20.0 sec  1.99 GBytes  3.42 Gbits/sec
[  3] 20.0-25.0 sec  2.07 GBytes  3.55 Gbits/sec
[  3] 25.0-30.0 sec  2.04 GBytes  3.50 Gbits/sec
[  3] 30.0-35.0 sec  1.96 GBytes  3.36 Gbits/sec
[  3] 35.0-40.0 sec  1.99 GBytes  3.42 Gbits/sec
[  3] 40.0-45.0 sec  1.90 GBytes  3.26 Gbits/sec
[  3] 45.0-50.0 sec  2.05 GBytes  3.51 Gbits/sec
[  3] 50.0-55.0 sec  1.99 GBytes  3.41 Gbits/sec
[  3] 55.0-60.0 sec  1.98 GBytes  3.41 Gbits/sec
[  3]  0.0-60.0 sec  24.0 GBytes  3.43 Gbits/sec



PS - I tested a CentOS host on the same ESXi server and on a separate one to rule out any network issues.
I agree that it's probably a FreeNAS/FreeBSD bug. Your results w/ CentOS closely parallel mine with Ubuntu.
 

Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41
Update:

I tested the CentOS VMs against a physical FreeNAS server this time...

I got the expected 9+ Gbps throughput, so it's a FreeNAS/VM/ESXi thing.
 

Deadringers

Dabbler
Joined
Nov 28, 2016
Messages
41
I agree that it's probably a FreeNAS/FreeBSD bug. Your results w/ CentOS closely parallel mine with Ubuntu.

Yeah, I think it's bug-reporting time... Not sure if this is enough to help them pin down what might be happening, though.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I ran the tests between each FreeNAS VM and the Ubuntu VM on the same ESXi server and portgroup:
Code:
iperf server    iperf client    Bandwidth
--------------  --------------  ------------------
ubuntu(falcon)  bandit(falcon)  25.50 Gbits/second
bandit(falcon)  ubuntu(falcon)   3.03 Gbits/second

ubuntu(felix)   bandit(felix)   35.40 Gbits/second
bandit(felix)   ubuntu(felix)    5.86 Gbits/second

It really, really does look like there's a bug in FreeNAS/FreeBSD when it's a network data receiver; it can push out the bits just fine!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Been delving more deeply into this issue the last few days.

I don't believe there are any problems with my hardware setup: I get near line rates when I run FreeNAS on the metal, and I get near line rates between Ubuntu VMs on ESXi. The problem persists despite everything I've tried, which includes FreeNAS tunables, multiple versions of the ESXi ixgbe driver, and the VMXNET3 driver from VMware Tools in lieu of the stock FreeBSD vmx code.

With an Ubuntu VM as the iperf server I get rates over 9 Gbits/second from iperf client VMs running FreeNAS 9.10.2 U3, FreeBSD 11-STABLE, and Ubuntu itself. Windows 7 is the odd man out, achieving only 3-3.75 Gbits/second.

But things go south with FreeNAS or FreeBSD 11-STABLE as the iperf server.

To recap: I have two ESXi servers. FELIX is an X10SL7-F system with a Xeon E3-1241 v3 and 32GB of RAM. FALCON is an X9DRi-LN4F+ system with dual Xeon E5-2660s and 128GB of RAM. Both are equipped with Intel X520-DA1 10GbE NICs and are running ESXi v6.0 U3. The Intel NICs connect to a Dell 5524P switch with Twinax cables. FreeNAS is version 9.10.2 U3, FreeBSD is 11-STABLE, Ubuntu is Ubuntu Server 16.04.2 LTS, and Windows 7 is Windows 7 Ultimate. The VMs all use the VMXNET3 driver. All rates are in Gigabits/second. Iperf servers are invoked with iperf -s -fg, and iperf clients with iperf -c {hostname} -t60 -i5 -fg. NB: iperf clients transmit data to the iperf server, so the server is the receiver.
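Spelled out as commands (the {hostname} is whatever VM is acting as the receiver in that particular test):

Code:
# on the receiving VM (iperf server)
iperf -s -fg

# on the sending VM (iperf client): 60-second run, 5-second intervals, Gbits output
iperf -c {hostname} -t60 -i5 -fg
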
Code:
With FreeNAS running as the iperf server on FELIX I get these connection rates from VMs on FALCON:
2.45 FreeBSD
1.18 Windows 7
1.78 FreeNAS
5.31 Ubuntu

With FreeNAS running as the iperf server on FALCON I get these connection rates from VMs on FELIX:
2.98 FreeBSD
1.36 Windows 7
2.80 FreeNAS
2.94 Ubuntu

With FreeBSD running as the iperf server on FELIX I get these connection rates from VMs on FALCON:
5.64 FreeBSD
1.44 Windows 7
3.90 FreeNAS
8.96 Ubuntu

With FreeBSD running as the iperf server on FALCON I get these connection rates from VMs on FELIX:
4.25 FreeBSD
1.80 Windows 7
5.36 FreeNAS
4.88 Ubuntu

With Ubuntu running as the iperf server on FELIX I get these connection rates from VMs on FALCON:
9.39 FreeBSD
2.98 Windows 7
9.37 FreeNAS
8.75 Ubuntu

With Ubuntu running as the iperf server on FALCON I get these connection rates from VMs on FELIX:
9.38 FreeBSD
3.75 Windows 7
9.41 FreeNAS
9.26 Ubuntu


So Linux seems to get a lot of love from VMware. FreeBSD and Windows? Not so much. Ubuntu hums along at near line rates with no tweaking or tuning whatsoever. I'd hoped FreeNAS would behave the same way.

In keeping with Murphy's infernal Law, the rate I'm primarily interested in - i.e., the connection between my two FreeNAS VMs - is one of the slowest of the lot! Only Windows is slower!

The fact that both FreeBSD 11-STABLE and FreeNAS (based on FreeBSD 10.3) get really good rates connecting to Ubuntu as clients, but deliver such abysmal rates as servers, makes me think the problem may be a quirk or deficiency in the FreeBSD network code that somehow throttles receive rates.

Or perhaps not. Windows performs the worst in all cases, so poor performance isn't confined solely to FreeBSD. Or perhaps I'm missing something in my ESXi setup. Or my hardware setup.

In any case: I'm puzzled and disappointed, and welcome suggestions from the pros!
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
When you are running the iperf tests, are you specifying the TCP window size? I'm wondering if the Linux guest is defaulting to a different TCP window size than the BSD guest.
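If it helps, the window size can be pinned on both ends with iperf's -w flag so the guests are comparing like with like (the 512k value here is just an example, not a recommendation):

Code:
# server side
iperf -s -w 512k -fg

# client side
iperf -c {hostname} -w 512k -t60 -i5 -fg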
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,176
Yeah, that's something to keep in mind. Some defaults are somewhat dodgy with GbE and rather inappropriate for 10GbE. Considering realistic scenarios, of course.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
When you are running the iperf tests, are you specifying the TCP window size? I'm wondering if the Linux guest is defaulting to a different TCP window size than the BSD guest.
Yes, sir. In my first round of testing I set a window size (see my original post at the top of this thread). I dropped it later because it didn't seem to have any effect.

In any case, the bandwidth I'm seeing between my two FreeNAS servers -- for example, when running rsync and replication jobs -- matches very closely with the iperf results.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Good news!

I just installed FreeNAS 11 VMs on both ESXi servers and ran the iperf tests. The rates run 4.33 to 4.39 Gigabits/second, a big improvement over FreeNAS 9.10.2 U3's paltry 1.78 to 2.8 Gigabits/second. This is more in line with the results I saw with FreeBSD 11-STABLE, which makes sense, since FreeNAS 11 is based on FreeBSD 11.

[Attachment: freenas11-10g-rates-are-better.jpg]
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
It's the Jumbo Frames, stupid! :rolleyes:

I'm happy to report that, after enabling jumbo frames, I'm getting over 9 Gigabits/second rates on my two 10GbE-enabled FreeNAS virtual machines.

Doooohhhhh!

[Attachment: bandit-10-gbe-after-jumbo-frames.jpg]
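For anyone wanting to replicate this, the gist is bumping the MTU to 9000 everywhere along the path. A minimal sketch, assuming a standard vSwitch named vSwitch0 and the vmx3f0 interface from earlier in the thread (the target hostname is a placeholder):

Code:
# on each ESXi host: raise the vSwitch MTU
esxcli network vswitch standard set -v vSwitch0 -m 9000

# on each FreeNAS VM (put "mtu 9000" in the interface's Options field
# in the GUI so it survives reboots)
ifconfig vmx3f0 mtu 9000

# sanity check from FreeNAS: a full jumbo frame with Don't Fragment set should get through
ping -D -s 8972 {other-freenas-host}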
 

jp83

Dabbler
Joined
Mar 31, 2017
Messages
23
Just saw your resolution, but I'm curious why jumbo frames made that much difference. From my other reading I didn't think jumbo frames were really that much better, so it didn't seem so obvious. Sure, I could enable them for a storage network, but I don't think I want to for sharing out to the rest of my network.

Is there still a FreeBSD ticket for improving performance of the VMXNET3 driver?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Just saw your resolution, but I'm curious why jumbo frames made that much difference. From my other reading I didn't think jumbo frames were really that much better, so it didn't seem so obvious. Sure, I could enable them for a storage network, but I don't think I want to for sharing out to the rest of my network.

Is there still a FreeBSD ticket for improving performance of the VMXNET3 driver?
I had the same impression re: jumbo frames. I tinkered with them a couple of years ago when I built my first All-In-One and they didn't have any appreciable effect on network performance. This forum post ("Jumbo frames notes") further reinforced the idea that they don't do much for performance.

But I was only running gigabit Ethernet then. Enabling jumbo frames made a huge difference w/ my new 10 Gigabit network.

During my testing I tried out FreeBSD 11-STABLE, which does seem to have an improved VMX driver. Because FreeNAS 11 is based on FreeBSD 11-STABLE, I'm hoping to see an add'l performance boost when I migrate to FreeNAS 11 later this year.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
I had the same impression re: jumbo frames. I tinkered with them a couple of years ago when I built my first All-In-One and they didn't have any appreciable effect on network performance. This forum post ("Jumbo frames notes") further reinforced the idea that they don't do much for performance.

But I was only running gigabit Ethernet then. Enabling jumbo frames made a huge difference w/ my new 10 Gigabit network.

During my testing I tried out FreeBSD 11-STABLE, which does seem to have an improved VMX driver. Because FreeNAS 11 is based on FreeBSD 11-STABLE, I'm hoping to see an add'l performance boost when I migrate to FreeNAS 11 later this year.

I've been tracking the same performance issues with FreeNAS 9 and 10G cards... 9.10 has some issues with 10 gig that the RCs of FN11 seem to eliminate. I've been waiting for 11 to go to release status and to do some more testing before saying anything... :)

The benefit of jumbo frames is larger chunks of data for the network stack to process. When a new connection speed comes along (1G, 10G, 25G), early network cards tend to require a lot of help from the CPU to process traffic. Over time, as the network chipsets mature, more processing moves back into silicon. When most processing is offloaded, jumbos don't help too much, but when you have a network stack with a high interrupt load (like 10G on VMware), jumbos can make a performance difference.

I would bet that if you booted your ESX host into native FreeBSD 11, jumbos wouldn't help much, but with ESX and VMXNET3 you've got a lot of software between you and the silicon, so jumbos provide some benefit.
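If anyone wants to poke at this angle, a quick way to see what the guest NIC is (or isn't) offloading is to look at the interface capability flags from the FreeNAS/FreeBSD shell; toggling them like this is non-persistent and purely for comparison runs (vmx3f0 is the interface name from earlier in the thread):

Code:
# show enabled options (look for TXCSUM, RXCSUM, TSO4, LRO)
ifconfig vmx3f0

# temporarily disable, then re-enable, offloads between iperf runs
ifconfig vmx3f0 -lro -tso
ifconfig vmx3f0 lro tso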
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I've been tracking the same performance issues with FreeNAS 9 and 10G cards... 9.10 has some issues with 10 gig that the RCs of FN11 seem to eliminate. I've been waiting for 11 to go to release status and to do some more testing before saying anything... :)

The benefit of jumbo frames is larger chunks of data for the network stack to process. When a new connection speed comes along (1G, 10G, 25G), early network cards tend to require a lot of help from the CPU to process traffic. Over time, as the network chipsets mature, more processing moves back into silicon. When most processing is offloaded, jumbos don't help too much, but when you have a network stack with a high interrupt load (like 10G on VMware), jumbos can make a performance difference.

I would bet that if you booted your ESX host into native FreeBSD 11, jumbos wouldn't help much, but with ESX and VMXNET3 you've got a lot of software between you and the silicon, so jumbos provide some benefit.
Agree that it's a FreeNAS/BSD-on-ESXi problem. I got near line rates when I ran both FreeNAS and FreeBSD on the metal (see post #28 above).

Like you, I await with bated breath the release of FreeNAS 11! :)
 

BrianAz1

Dabbler
Joined
Aug 1, 2012
Messages
12
It's the Jumbo Frames, stupid! :rolleyes:

I'm happy to report that, after enabling jumbo frames, I'm getting over 9 Gigabits/second rates on my two 10GbE-enabled FreeNAS virtual machines.

Doooohhhhh!


Thanks for posting your findings! I was seeing ~9.5 Gbits/sec between my two Ubuntu 16.04 VMs, but anything to or from my FreeNAS VM was ~3 Gbits/sec. This is all on the same ESXi 5.5 host using VMXNET3 network adapters.

After reading your adventure I confirmed that I did not have jumbo frames enabled anywhere on my network (not on vSwitch0, the Ubuntu iperf test VMs (MTU was 1500), nor FreeNAS).

I was a bit confused by the Ubuntu machines being able to hit 9.5 Gbits/sec without jumbo frames, but went ahead and made the changes anyway (MTU = 9000) on the switch, FreeNAS, and my iperf test Ubuntu VMs. This immediately bumped my FreeNAS <-> Ubuntu VM throughput up to what I had seen between the two Ubuntu boxes.
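For what it's worth, this is roughly what the Ubuntu side of that change looks like (ens160 is just an example interface name; on 16.04 the persistent setting typically lives in /etc/network/interfaces):

Code:
# temporary change for testing
sudo ip link set ens160 mtu 9000

# verify end-to-end: a full jumbo frame with don't-fragment set should survive
ping -M do -s 8972 -c 3 192.168.30.101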

.239 = Ubuntu iPerfTestClient VM
.244 = Ubuntu iPerfTestServer VM
.101 = FreeNAS
Code:
brian@iPerfTestClient:~$ iperf -c 192.168.30.101
------------------------------------------------------------
Client connecting to 192.168.30.101, TCP port 5001
TCP window size:  325 KByte (default)
------------------------------------------------------------
[  3] local 192.168.30.239 port 46484 connected with 192.168.30.101 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  15.1 GBytes  13.0 Gbits/sec

Works the other way as well:

[root@freenas ~]# iperf -c 192.168.30.244
------------------------------------------------------------
Client connecting to 192.168.30.244, TCP port 5001
TCP window size: 35.0 KByte (default)
------------------------------------------------------------
[  6] local 192.168.30.101 port 22910 connected with 192.168.30.244 port 5001
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec  15.1 GBytes  12.9 Gbits/sec

Side note: the Ubuntu VMs went from 9.5 Gbits/sec to 13 Gbits/sec. That's nice too.

Very Happy. :)
 