Various InfiniBand Questions

m70b1jr

Cadet
Joined
Jun 28, 2022
Messages
3
Hey guys!
I just want to start off by saying I'm new to the forums, and kinda new to TrueNAS and InfiniBand.
Long, long, long story short, we have two servers: one TrueNAS server and one host server running Proxmox, both with ConnectX-3 40Gb InfiniBand cards, connected through an IS5022 InfiniBand switch.

After a very long time spent figuring out how to get InfiniBand to even show up on everything, we finally got everything communicating, but very poorly: iSCSI is slow and unreliable, there are lots of packet drops, and iperf usually shows a bandwidth of about 4 Gbit/s.

After some research, it seems the cards should be in connected mode instead of datagram mode, which allows the MTU to go higher than 2044; in our case, since it's InfiniBand, we want an MTU of 65520. Increasing the MTU to 65520 on the Proxmox/host end and to 9216 on TrueNAS (the highest it can be set) raises iperf performance to 10 Gbit/s, but with far more packet loss, presumably because of the mismatched MTUs.
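For reference, on the Proxmox/Linux side IPoIB exposes the mode through sysfs, so the change was roughly this (ib0 is the interface name on our end; yours may differ):

cat /sys/class/net/ib0/mode                # shows "datagram" or "connected"
echo connected > /sys/class/net/ib0/mode   # switch the port to connected mode
ip link set ib0 mtu 65520                  # connected mode allows the full 65520 MTU
ip link show ib0                           # verify the MTU actually took effect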

So my questions are:
1.) How can we set the MTU to 65520 on the TrueNAS end?
2.) How can we verify the cards are in connected mode (not datagram mode) on TrueNAS?
3.) The monitoring for the cards on the dashboard shows the link state as unknown, even though they are up and connected. Is there a fix for this?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
1.) How can we set the MTU to 65520 on the TrueNAS end?
Not sure you can... all the docs I see give 16384 as the maximum under FreeBSD... the tool would be ifconfig (you'd then need to convert the setting into a tunable to make it permanent across reboots/upgrades).
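If you want to test it from the shell, something like this should tell you where the ceiling is (ib0 is an assumed interface name; the driver may reject anything above its own cap):

ifconfig ib0 mtu 16384    # will error out if the driver caps the MTU lower
ifconfig ib0              # confirm what actually applied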

2.) How can we verify the cards are in connected mode (not datagram mode) on TrueNAS?
Sounds like a driver-specific thing... you might need to look into the Mellanox FreeBSD docs for that.
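A generic way to hunt for driver knobs without digging through the docs (no guarantee the ipoib/mlx4 drivers expose anything useful here):

kldstat | grep -i mlx                  # confirm which Mellanox modules are loaded
sysctl -a | grep -i -e ipoib -e mlx    # look for any mode-related settings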

3.) The monitoring for the cards on the dashboard shows the link state as unknown, even though they are up and connected. Is there a fix for this?
I suspect not; same answer as for 2.


Generally, you may need to consider TrueNAS SCALE if you're hitting limitations of FreeBSD that don't seem to exist in Linux (which is why Proxmox can do it).
 

m70b1jr

Cadet
Joined
Jun 28, 2022
Messages
3
I ended up switching to TrueNAS SCALE, and things work significantly better. I'm able to do everything I can do in Proxmox on SCALE, so both ends are configured the same. My issue now is that I can't seem to get over 12 Gbit/s (in iperf3), regardless of MTU. There seems to be little or no performance difference going from 9216 to 65520 MTU.
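For reference, the test is essentially stock iperf3 between the two boxes; I've also been trying parallel streams in case a single TCP stream is the bottleneck (10.0.0.1 is just an example address):

iperf3 -s                        # on the TrueNAS end
iperf3 -c 10.0.0.1 -t 30         # single stream from the Proxmox end
iperf3 -c 10.0.0.1 -t 30 -P 4    # four parallel streams to rule out a per-stream cap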

Any recommendations?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
SCALE has had very little performance tuning during development.
 

m70b1jr

Cadet
Joined
Jun 28, 2022
Messages
3
Yeah, but if my cards can do 40 Gbit/s over InfiniBand and I'm only getting 12, I wouldn't think that's a TrueNAS SCALE tuning issue; more likely a configuration issue on my end.
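These are the checks I've been using to confirm the link itself negotiated the full rate (assuming the standard InfiniBand diagnostic tools are installed):

ibstat         # port State should be Active and Rate should read 40
ibv_devinfo    # shows active_mtu and the link state per port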
 