How many PCIe lanes and slots are needed when using multiple NVMe drives, a 10GbE NIC, and an HBA?


ujjain

Dabbler
Joined
Apr 15, 2017
Messages
45
How should you count PCIe slots, and can you always split them with a riser card?

16 PCIe lanes should provide about 16 GB/s of bandwidth (at PCIe 3.0 speeds), which should be enough for:
  • 3x NVMe SSD
  • 10GbE NIC
  • HBA
But many motherboards only have 2 or 3 PCIe slots.

If you wish to connect all five of these devices, can you increase the number of slots with riser cards?

And does the 16-lane PCIe limit of Intel CPUs count against total lane usage? So 3x NVMe, a 10GbE NIC, and an HBA would be 3x4 + 4 + 8 = 24 PCIe lanes?
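
To make that arithmetic concrete, here is a minimal Python sketch; the per-device lane counts (x4 per NVMe drive, x4 for the NIC, x8 for the HBA) and the ~985 MB/s-per-lane figure are typical PCIe 3.0 assumptions, not values from any specific datasheet:

```python
# Tally of PCIe lanes and approximate PCIe 3.0 bandwidth for the devices above.
# Lane counts are typical assumptions (x4 per NVMe, x4 NIC, x8 HBA), not taken
# from any particular product datasheet.
GBPS_PER_LANE = 0.985  # GB/s per PCIe 3.0 lane, after 128b/130b encoding overhead

devices = {
    "NVMe SSD x3": 3 * 4,  # three drives at x4 each
    "10GbE NIC":   4,
    "HBA":         8,
}

total = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name:12s} {lanes:2d} lanes ~ {lanes * GBPS_PER_LANE:5.1f} GB/s")
print(f"{'Total':12s} {total:2d} lanes ~ {total * GBPS_PER_LANE:5.1f} GB/s")
# -> 24 lanes total, well over the 16 CPU lanes on mainstream Intel parts
```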
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Some system boards do not support riser cards at all, and if you "split" with a riser card, you potentially (depending on the technology) reduce the number of lanes available to the hardware connected to the riser. I was looking at a system that allows many devices to be connected, but it only routes a single PCIe lane to each device. That limits performance, because each lane can only carry a finite amount of data in a given time.
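
To put a rough number on that single-lane case, here is a small sketch; the 3.5 GB/s figure is an assumed sequential-read spec for a typical x4 NVMe drive, purely for illustration:

```python
# How much a single PCIe 3.0 lane throttles a typical x4 NVMe drive.
# The 3.5 GB/s sequential-read figure is an assumed spec, not a measured value.
lane_gbps = 0.985      # usable throughput of one PCIe 3.0 lane, GB/s
nvme_x4_gbps = 3.5     # assumed drive capability when given its full x4 link

fraction = lane_gbps / nvme_x4_gbps
print(f"x1 link: {lane_gbps:.2f} GB/s, about {fraction:.0%} of the drive's potential")
```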

If you want the full performance of all this hardware, you need a minimum of 24 lanes, plus the lanes required for any of the hardware built into the system board. And that is if you are only using one HBA. I have four HBA cards in one of the servers at work, and each one uses 8 lanes. I needed a dual-socket Xeon to have enough lanes. That is one of the reasons the new AMD chips are interesting: many more lanes...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If you wish to connect all five of these devices, can you increase the number of slots with riser cards?
Depends on the motherboard and card. Some motherboards are set up to allow for multiple PCIe devices in one slot - but this is "buy everything from one vendor" territory, mostly.

What will always work is a PCIe switch. Problem is, they're insanely expensive (Thanks, Broadcom! /s) due to the quasi-monopoly situation. How expensive? Over 100 bucks, in quantity, just for the switch chip. That'll easily translate to an extra 200 bucks on the sticker price or more.

And does the 16-lane PCIe limit of Intel CPUs count against total lane usage? So 3x NVMe, a 10GbE NIC, and an HBA would be 3x4 + 4 + 8 = 24 PCIe lanes?
Well, the CPU only has so many lanes. If you need more, you have to use a PCIe switch (Intel's PCH is mostly a cheap PCIe switch these days, with a PCIe 3.0 x4 uplink). The CPU also has limits on how the lanes can be configured (you can't do x1, x1, x1, x1, x4, x8) - to get around those, you need a switch.
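
As a sketch of that configuration limit, here is a hypothetical check of which splits a plain x16 root port could accept; the set of valid layouts is an illustrative assumption, since actual bifurcation options vary by CPU and board firmware:

```python
# Hypothetical bifurcation check: which lane splits can a plain x16 port do?
# VALID_SPLITS is an illustrative assumption; real options vary by CPU/board.
VALID_SPLITS = {(16,), (8, 8), (8, 4, 4), (4, 4, 4, 4)}

def can_bifurcate(requested):
    """Return True if the requested split matches a supported x16 layout."""
    return tuple(sorted(requested, reverse=True)) in VALID_SPLITS

print(can_bifurcate([8, 4, 4]))            # True
print(can_bifurcate([1, 1, 1, 1, 4, 8]))   # False: needs a PCIe switch instead
```

Anything outside that set - like the x1, x1, x1, x1, x4, x8 example above - is exactly where a PCIe switch comes in.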
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
That is one of the reasons the new AMD chips are interesting: many more lanes...
And very fine-grained usage. IIRC, they can be split down to x1.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Ok, I'll see if my order of http://www.sicomputers.nl/xeon-bronze-3106-1-7ghz.html gets shipped and then hope I can find a good deal on a Supermicro motherboard.

Else I might consider getting a Xeon E5 v4 low-TDP ES from eBay.
I have used high core-count, low clock speed CPUs and have always been disappointed by them. I would not order anything less than 2.4 GHz, and I would suggest that (for FreeNAS) you want to be at a higher clock speed.
You are pretty much wasting your time thinking about fast drives if you are going to buy a slow CPU. What are you trying to accomplish?
 