Jumbo Frames problem in FreeNAS 11.1 U6


techzillion

Dabbler
Joined
Dec 22, 2018
Messages
11
Hello,

New poster here and newbie FreeNAS administrator/user. I am running into a problem using Jumbo Frames (JF) in FreeNAS with my Hyper-V Server over iSCSI.

My current setup is this:
FreeNAS 11.1 U6 - ASUS P5BV-M server board with an Intel Xeon X3220 and 8 GB of ECC RAM. The motherboard has two onboard Broadcom NICs, which show up as bge0 and bge1. I have since learned these NICs do not support JF, so I added a dual-port HP NC360T card, which uses the Intel PRO/1000 chipset. The HP card shows up as em0 and em1 and does support JF. The datastore is a RAIDZ2 pool of six Seagate Constellation 1 TB 7200 RPM SATA3 drives, connected to an LSI SAS9211-8i HBA flashed with IT firmware.

Hyper-V Server 2016 - ASRock E3C204-4L server board with an Intel Xeon E3-1230v2 and 32 GB of ECC RAM. The motherboard has four onboard Intel NICs using the 82574L chipset. I have enabled 9K JF on each NIC and rebooted. The VM boot disks reside on Kingston KC300 SSDs in RAID 6 on an LSI MegaRAID 9285CV-8i card with the latest firmware.

The Hyper-V host is connected directly to FreeNAS with a crossover cable and communicates over iSCSI. I also have a Cisco SG300-20 switch with JF enabled. Based on my research, there is no JF setting for vSwitches in Hyper-V.

In the FreeNAS web interface, I configured em0 and em1 for an MTU of 9000 and rebooted the server. It may be that BSD has poor JF support with the em driver, because when I ping FreeNAS from Hyper-V with 8K packets, several of the ping attempts show very high latency. When I ping any of my other devices with 8K packets, such as my router, switch, UTM, or even other VMs, they all respond in under 1 ms. But when I ping FreeNAS with 8K packets, some responses come back in under 1 ms while others shoot up into the 100 ms range.
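For reference, this is roughly how the jumbo path can be sanity-checked from the FreeNAS shell (a sketch only: 192.168.1.20 stands in for the Hyper-V address, and an ifconfig change from the shell is temporary, the persistent MTU setting is the one on the interface in the web GUI):

# temporarily set the MTU on the Intel port (lost on reboot)
ifconfig em0 mtu 9000
# confirm the interface now reports mtu 9000
ifconfig em0
# ping the Hyper-V host with a large payload and the don't-fragment bit set;
# 8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header
ping -D -s 8972 -c 20 192.168.1.20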

It gets even worse whenever I try to start a VM whose disk lives on the iSCSI datastore. The VM hangs on starting, and the ping latency climbs into the 2000 ms range.

I tried a second HP NC360T card, thinking the first was defective, but got the same result. I also tried lowering the MTU to 4088 on both Hyper-V and FreeNAS, but that did not do any better.

So then I turned off JF between FreeNAS and Hyper-V, but I still see the latency. I also noticed the console shows a warning, "no ping reply (NOP-Out) after 5 seconds, dropping connection", both with and without JF enabled. So I removed em1, added bge1 back with the same network settings, moved the crossover cable to bge1, restarted iSCSI on both Hyper-V and FreeNAS, and the connection worked immediately without issue.

So I guess the question is: does FreeNAS or BSD have poor support for the HP NC360T? Is there a better card out there that is affordable, stable, and can do JF without all this latency and these hiccups?

The whole point of this is to see if I can increase throughput to the datastore; currently read and write speeds are only about 40 MB/s when copying a large file in either direction. Being on a gigabit network, I am trying to saturate the connection at roughly 120 MB/s. This is a home lab network and I am the only user.
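(For context: gigabit Ethernet is 125 MB/s of raw line rate, i.e. 1000 Mbit/s divided by 8, and after Ethernet, IP, and TCP overhead the practical ceiling for a file copy is roughly 110-118 MB/s.)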
 
Joined
Dec 29, 2014
Messages
1,135
My experience is that jumbo frames are rarely the panacea people expect them to be. There are some specific cases where they help (sometimes a lot), but those are few and far between. You absolutely do not need JF to fill a gigabit network on FreeNAS. Your memory is at the bare minimum, and that doesn't help. Do some tests with iperf (or iperf3) to see what the network throughput is without involving storage. If that is good, then your issue isn't on the networking side of things.
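Something like this is all it takes (a sketch, assuming iperf3 is available on both ends and 192.168.1.10 stands in for the FreeNAS address):

# on FreeNAS, start an iperf3 server
iperf3 -s
# from the Hyper-V host (or any client with iperf3 installed),
# run a 30-second test against the FreeNAS box
iperf3 -c 192.168.1.10 -t 30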
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457

techzillion

Dabbler
Joined
Dec 22, 2018
Messages
11
Thanks, and sorry for the delayed reply. I ran iperf3 and it shows a sustained transfer rate of 112 MB/s, so maybe it is just Windows Explorer overhead slowing things down. I gave up on JF and concluded both cards are defective, since there was severe latency accessing the drive over iSCSI once the connection was initiated. 8 GB is the maximum the motherboard supports. Money is tight, you see, so I am trying to get as much as I can out of what I have.

After reverting to the previous config without JF, I was somehow able to get a sustained write speed of 75 MB/s when copying a large file to a network share, so maybe the 40 MB/s was a fluke.
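A rough way to check the pool on its own, without the network in the picture, is a local dd from the FreeNAS shell (a sketch: "tank" stands in for the pool name, and with lz4 compression enabled a /dev/zero test will overstate the numbers, so treat the result as a ceiling):

# rough sequential write test, about 8 GiB of zeros
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=8192
# rough sequential read of the same file
dd if=/mnt/tank/ddtest of=/dev/null bs=1m
# clean up
rm /mnt/tank/ddtest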

I'll consider ESXi once they provide native support for barebone disk encryption with preboot auth. 10GbE is not an option at this time; the cards and switches are too expensive for me.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I'll consider ESXi once they provide native support for barebone disk encryption with preboot auth
That will never happen...
 