New HBA, but lost 10gbe Connection

Bobbiek04

Dabbler
Joined
Sep 27, 2016
Messages
40
Hello all, I recently upgraded my FreeNAS server with a SAS HBA that is working great, but to optimize PCIe lanes I had to move my Intel 10GbE NIC to a new PCIe slot. When I did this, my connection speed dropped from 10GbE down to 1GbE. No other system changes were made. The server is connected directly to my Windows machine on a different subnet, so there isn't any hardware in between.

My thought is that there has to be something in the FreeNAS settings that I am not thinking of.

Any information would be appreciated. Thanks!
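A quick first check is what the BSD side actually negotiated. A sketch, assuming the Intel card shows up as `ix0` under the ixgbe driver (substitute the real name from `ifconfig -l`):

```shell
# List interfaces, then show negotiated media and link status for the NIC.
# "ix0" is an assumed interface name for an Intel 10GbE card.
ifconfig -l
ifconfig ix0 | grep -E 'media|status'
```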
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
You might want to check your motherboard manual to see whether the combination of lanes and slots you are using is supported.

I know on mine, for example, if the x16 slot is populated, the nearby x8 slot drops to x4.
 

Bobbiek04

Dabbler
Joined
Sep 27, 2016
Messages
40
I'm running a Supermicro X10SL7-F. After reviewing the user guide, I don't see any mention of slots being throttled based on how many are populated.

I did some more tinkering and found that if I force the connection to 10GbE on the Windows side, it will connect and I get speeds in excess of 1GbE, but the connection fails after a few minutes, then reconnects, and the cycle repeats. If I select the 1GbE link speed, things are rock solid.

Thoughts?
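For completeness, the same experiment can be mirrored on the FreeNAS/FreeBSD side by forcing the media type instead of autonegotiating. A sketch, where `ix0` and the exact media string are assumptions (check what the driver actually reports as supported first):

```shell
# List the media types the driver claims to support.
# "ix0" is an assumed interface name for an Intel 10GbE card.
ifconfig -m ix0 | grep -i supported

# For a 10GBase-T copper card, forcing the link might look like:
ifconfig ix0 media 10Gbase-T

# Revert to autonegotiation afterwards:
ifconfig ix0 media autoselect
```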
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
On page 2-31 of the manual it mentions a jumper setting for SMB enable... how do you have that set?

Also look at the diagram on page 1-8
It seems to me that your x16 slot runs at only x8 and your x8 slot runs at only x4... maybe there's something in that?

Also, it seems to me that an Intel i3 CPU has only 16 PCIe lanes available in total, so it makes sense that the lane distribution goes like that.
 

Bobbiek04

Dabbler
Joined
Sep 27, 2016
Messages
40
I took a look at the jumpers and they are both in the default setting.

For s**ts and giggles, I swapped the two cards and am still having the same issue. My thought is that it has to be some setting that didn't transfer over?
 

Bobbiek04

Dabbler
Joined
Sep 27, 2016
Messages
40
I also just swapped in a different Cat 7 cable with factory ends, so it's not the cabling.
 

Bobbiek04

Dabbler
Joined
Sep 27, 2016
Messages
40
I did some more playing around. I disconnected the HBA and kept the NIC in the x16 slot, but still no luck. Windows will show a 10GbE link (attached screenshot) for about 4-6 minutes before the connection goes down for a moment, then reconnects at 1GbE speed.

I also ran JPerf within that few-minute window, and the results are anything but stable (attached).

Any ideas?
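For reference, the same measurement can be run with the command-line iperf2 that ships with FreeNAS, without the JPerf GUI. A sketch, with the server address as a placeholder:

```shell
# On the FreeNAS box: start an iperf2 server in the background.
iperf -s &

# From the Windows (or other client) side: a 30-second run with
# per-second interval reports and 4 parallel streams. Parallel
# streams help show whether a single TCP stream is the bottleneck.
# 192.168.10.2 is a placeholder for the server's address.
iperf -c 192.168.10.2 -t 30 -i 1 -P 4
```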
 

Attachments

  • Jperf result.JPG (186.3 KB)
  • Windows Status.JPG (40.1 KB)

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
@Bobbiek04 - Just wondering why are you using an HBA in your X10SL7? It's equipped with a built-in LSI 2308 HBA w/ 8 ports.

I own an All-in-One X10SL7-based system with an H200 HBA (for booting ESXi and storing the FreeNAS VM) and an Intel x520-DA1 10G NIC. The H200 is installed in the PCI-E v2.0 x4 slot, the NIC is installed in the PCI-E v3.0 x8 slot. I pass the motherboard 2308 HBA through to the FreeNAS VM.

I've included my BIOS PCI settings below in case they may be helpful to you.

I get 9.9 Gbits/sec bandwidth using the command-line iperf version v2.0.10 included with FreeNAS 11.2U8, and that's running over ESXi's VMX client driver with the ixgbe driver under-the-hood.

I had to use jumbo frames to get reliable bandwidth on my system -- your mileage may vary, and in fact jumbo frames are pretty much deprecated by knowledgeable users here (e.g., @jgreco). Nevertheless, they work for me.

I've also added several tunables which, again, may or may not help you out. I've attached a screenshot of these as well.
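The jumbo-frame and tunable changes described above can be sketched roughly as follows. The interface name `ix0` and the specific buffer values are assumptions for illustration, not Spearfoot's exact settings (those are in the attached screenshots), and jumbo frames require every device on the path to agree on the MTU:

```shell
# Jumbo frames: set a 9000-byte MTU on the 10GbE interface.
# Both endpoints (and any switch in between) must match.
ifconfig ix0 mtu 9000

# TCP buffer tunables commonly raised for 10GbE on FreeBSD
# (16 MB here is illustrative, not a recommendation):
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
```

In FreeNAS these would normally be entered as tunables in the web UI rather than run by hand, so they persist across reboots.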

Attachments: boomer-tunings.jpg, boomer-bios.jpg
 

Bobbiek04

Dabbler
Joined
Sep 27, 2016
Messages
40
Progress! The 10GbE connection has been much more reliable after a motherboard BIOS update.

The reason for adding the HBA is to provide an upgrade path for the future. It's currently in a 16-bay chassis where I am not able to use all the bays without an HBA. Like most of us, I have expanding collections of family videos/pictures that I want to ensure have a home on the FreeNAS server.

The primary function is file storage with a side of Plex. I haven't gone too far down the path of ESXi or VM's.

I am also using 11.2U8 with JPERF 2.0, but without the VM or hypervisor.

I took a look at the PCIe BIOS settings and mine look very different from yours (attached). I also adjusted the TCP & IP tunables to match yours, with little change.

The connection does seem more stable after the BIOS update, but bandwidth is still limited to ~2Gb/s. Could a recent FreeNAS update have shipped an incompatible driver? What do you make of the BIOS settings?
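One way to rule a driver or slot problem in or out is to ask the OS what the card actually is and what PCIe link it trained at. A sketch, with `ix0` again an assumed interface name:

```shell
# Show PCI devices with vendor/device strings; the Intel NIC should
# appear with the ix/ixgbe driver attached to it.
pciconf -lv | grep -B 3 -i ethernet

# The driver usually logs its negotiated PCIe width/speed at boot;
# grep the boot messages for the interface name.
dmesg | grep -i ix0
```

If the driver reports a narrower or slower link than the slot should provide, that points back at the slot/BIOS rather than FreeNAS.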
 

Attachments

  • BIOS Options.JPG (83.1 KB)
  • PCIe BIOS Settings.JPG (83.4 KB)
  • Jperf result after bios & tuning.JPG (222.2 KB)

Belphegor

Dabbler
Joined
Mar 21, 2020
Messages
11
Is your NIC installed in the PCIe extension slot labeled PCH Slot 5 on the motherboard? If so, you only get a PCIe 2.0 x4 link in that slot, which means 500 MB/s × 4 = 2 GB/s.
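That arithmetic can be sanity-checked in a couple of lines of shell (500 MB/s per lane is the usable PCIe 2.0 rate after 8b/10b encoding overhead):

```shell
# PCIe 2.0 x4: usable bandwidth per lane times lane count.
lanes=4
mb_per_lane=500
total_mb=$((lanes * mb_per_lane))     # 2000 MB/s = 2 GB/s
total_gbit=$((total_mb * 8 / 1000))   # 16 Gbit/s
echo "${total_mb} MB/s (~${total_gbit} Gbit/s)"
```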
 

Bobbiek04

Dabbler
Joined
Sep 27, 2016
Messages
40
@Belphegor The card is in that slot, but 2 gigabytes per second is 16 gigabits per second. I wouldn't think that it would impact 10 gigabit Ethernet. Am I missing something?
 

Belphegor

Dabbler
Joined
Mar 21, 2020
Messages
11
Theoretically it should be sufficient, but that assumes the NIC is a PCIe 2.0 device designed to operate correctly at an x4 link (since you don't mention the model of your NIC, that's difficult for me to say). Also, the DMI link between the CPU and PCH is limited to 2 GB/s as well, so other devices on the PCH might steal some bandwidth. I would suggest testing in CPU SLOT6 just to be sure the NIC is not limited by the bus speed.
 