Performance of 100Gb network

skyyxy

Contributor
Joined
Jul 16, 2016
Messages
136
Hi everybody.
I have two Mellanox MC415A 100Gb cards, both PCIe x16. One is in a FreeNAS 11.2-U7 server with an E5-1620 v3, 32 GB RAM, and a 2TB Intel 760P NVMe (just for testing). The other is in a Windows 10 box with an E3-1230 v2, a Z77 board, and 12 GB RAM, with the MC415A in a PCIe x16 slot.
I know the Intel 760P NVMe may not be fast enough for 100Gb, but this is just a test, and the performance should not look like the attached picture (almost the same as 10Gb). Maybe there is something I need to optimize, like the 10Gb tunables in the FreeNAS tunable options, but I don't know how. If anyone has experience or suggestions, please tell me. Thanks a lot.
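For reference, the kind of FreeBSD sysctl tunables people usually start with for high-throughput NICs (entered under System → Tunables, type "sysctl") look like the sketch below. The values are illustrative assumptions, not a verified 100Gb recipe; they mainly raise the socket-buffer and TCP-window limits so a single stream can keep the pipe full, and they need testing on your own hardware:

```
# Illustrative values only - tune and benchmark per system
kern.ipc.maxsockbuf=16777216        # raise the hard cap on socket buffer size
net.inet.tcp.recvbuf_max=16777216   # max auto-tuned TCP receive buffer
net.inet.tcp.sendbuf_max=16777216   # max auto-tuned TCP send buffer
net.inet.tcp.recvspace=4194304      # default TCP receive window
net.inet.tcp.sendspace=4194304      # default TCP send window
```

Larger buffers only help if the sender, receiver, and test tool (e.g. iperf3 with several parallel streams) can actually generate and absorb the traffic, so benchmark after each change.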
 

Attachments

  • 111.jpeg (88.4 KB)

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
While I do not have practical experience with 10 GBit networking performance in particular, the general rules for performance testing/tuning apply here. So, apart from searching the forums and the wider Internet, you would need to understand what "building blocks" (hard- and software) influence the result you see. With that knowledge established, you can look at the individual components and whether or not they behave according to expectations. And once one bottleneck is addressed, the whole thing starts over again.

I know it sounds tedious, but the alternative is "to shoot into the forest" and hope. Initially the latter can work surprisingly well, but diminishing returns come fast. Also, it is not uncommon to make things worse during the process. Lastly, it also needs time and I personally find it more rewarding to understand stuff rather than just guess.
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
You are looking at 0.01 ms round-trip TCP latency at 1500 MTU (0.07 ms for 9000). So things like interrupt handling, memory bandwidth, and so on do take their toll.
I don't have 100G or 10G experience, but I would try polling mode instead of interrupts and calculate how often it needs to poll with regard to the built-in buffer. A shot in the dark, though.
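To put rough numbers on why interrupt handling matters at this speed, here is a back-of-the-envelope sketch (simple arithmetic, not a measurement of your hardware): at 100 Gb/s line rate, the frame rate is enormous, and each frame spends only a fraction of a microsecond on the wire.

```python
# Rough arithmetic for a 100 Gb/s link: frames per second at line rate,
# and the serialization time of a single frame.

LINE_RATE = 100e9   # bits per second
MTU = 1500          # frame size in bytes (header overhead ignored for simplicity)
JUMBO = 9000

def frames_per_second(rate_bps, frame_bytes):
    """Frames per second at full line rate for a given frame size."""
    return rate_bps / (frame_bytes * 8)

def wire_time_us(rate_bps, frame_bytes):
    """Time one frame spends on the wire, in microseconds."""
    return frame_bytes * 8 / rate_bps * 1e6

print(f"{frames_per_second(LINE_RATE, MTU):,.0f} frames/s at MTU 1500")
print(f"{frames_per_second(LINE_RATE, JUMBO):,.0f} frames/s at MTU 9000")
print(f"{wire_time_us(LINE_RATE, MTU):.3f} us per 1500-byte frame")
```

At over eight million frames per second, taking an interrupt per packet is hopeless, which is why interrupt moderation or polling, and jumbo frames to cut the packet rate, matter far more at 100Gb than they do at 10Gb.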
 