Recommended PCIe to M.2 NVMe Adapter for Dell PowerEdge R730XD

Nixoid

Dabbler
Joined
Nov 20, 2023
Messages
13
That link is broken.
Thanks. Fixed it.
Tested. Working on the R730 and R730XD.

Longochino

Cadet
Joined
Apr 2, 2024
Messages
1
Not a TrueNAS guy, but this thread is trending highly for PEX8747 and I've been through all of this myself, so I thought I could add some things.

The cheap $18 mechanical PCIe splitters perform just one function of a $250 PCIe switch/bridge: the mechanical bifurcation. The switch also does traffic management of the downstream PCIe devices over the entire uplink; in other words, every device gets the full x16 if it's the only one that needs it at that moment in time. That might not seem like a big deal for NVMe, which only ever needed x4, but if you want a RAID10 then that's x16, fast NICs are usually x8 or x16, and graphics cards are x16. Combine this with the fact that the PEX8747 actually supports 48 lanes: 16 for the upstream, and the downstream isn't fixed at x4x4x4x4 as on these NVMe cards; it's 32 lanes, split however the card vendor wants to provide them. That means x8x8x8x8 is possible, though only with standard PCIe slots, not M.2, which is capped at x4. And indeed, C-Payne sells such x8x8x8x8 cards for $200, cheaper than these 4x M.2 boards that expose only half the downstream lanes. x8 versions of these M.2-style boards do exist, but they're really rare for some reason (crypto miners bought them all up and won't sell them, and no one cares about PCIe 3 anymore).
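The lane arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope check using the numbers from this post (16 upstream + 32 downstream = 48 lanes on the PEX8747); the example port layouts are hypothetical, not actual card specs:

```python
# Rough lane-budget sketch for a PEX8747-style switch.
# Figures (16 upstream, 32 downstream) are from the post above.
UPSTREAM_LANES = 16      # host-facing x16 uplink
DOWNSTREAM_LANES = 32    # 48 total switch lanes minus the 16-lane uplink

def fits(downstream_ports):
    """True if a downstream port layout fits in the switch's lane budget."""
    return sum(downstream_ports) <= DOWNSTREAM_LANES

print(fits([4, 4, 4, 4]))   # quad-M.2 card: uses only 16 of the 32 lanes
print(fits([8, 8, 8, 8]))   # x8x8x8x8 layout: exactly 32, still fits
print(fits([16, 16, 8]))    # 40 lanes: over budget, not possible
```

So a quad-M.2 board on this chip leaves half the downstream lanes on the table, which is the point being made about the x8x8x8x8 cards.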

So it pays to know what you're getting with your switch. Some support downstream devices talking directly to each other (peer-to-peer), taking load off the CPU and the root complex. For some NAS applications that could be really useful, and for network switch applications too.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I thought this was pretty cool.


8x x4 M.2 NVMe into an x16 slot, PCIe 4.0 or 3.0.

Uses a 48 lane PCIe switch.
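Eight x4 drives behind one x16 uplink means the uplink is oversubscribed 2:1 when every drive is busy at once. A quick sketch of that arithmetic (per-lane throughput figures are approximate post-encoding numbers, not from this thread):

```python
# Back-of-the-envelope oversubscription for an 8x M.2 card behind an
# x16 uplink. Per-lane GB/s is approximate (after encoding overhead).
GBPS_PER_LANE = {"gen3": 0.985, "gen4": 1.969}

def oversubscription(gen, uplink=16, drives=8, lanes_per_drive=4):
    downstream = drives * lanes_per_drive * GBPS_PER_LANE[gen]
    upstream = uplink * GBPS_PER_LANE[gen]
    return downstream / upstream

print(oversubscription("gen4"))  # 2.0 -> drives share the uplink 2:1 if all busy
```

In practice that rarely matters for a NAS, since the network link saturates long before sixteen lanes of NVMe do.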
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Stux said:
I thought this was pretty cool.

8x x4 M.2 NVMe into an x16 slot, PCIe 4.0 or 3.0.

Uses a 48-lane PCIe switch.
But for the price...
This uses a 48-lane PLX chip, but PCI-E Gen3 instead of 4. https://www.broadcom.com/products/pcie-switches-retimers/pcie-switches/pex8749

But considering you can get three for the price of one, I'd say it's a better value and a better topology. You can build mirrors between two cards, get the same speed, but have physical redundancy. Plus, U.2 gives you access to better drives. (It won't work in this chassis, though; but I don't think the Sonnet one would either.)
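The mirror-between-two-cards idea is simple to sketch: pair each drive on card A with one on card B, so losing a whole card (or its switch) leaves every mirror with a surviving side. Device names below are hypothetical placeholders, not from this thread:

```python
# Sketch of splitting mirror vdevs across two 4-drive NVMe cards.
# Device names are hypothetical examples.
card_a = ["nvme0n1", "nvme1n1", "nvme2n1", "nvme3n1"]
card_b = ["nvme4n1", "nvme5n1", "nvme6n1", "nvme7n1"]

# Each mirror vdev gets one member from each card, so a dead card
# never takes out both sides of any mirror.
mirrors = list(zip(card_a, card_b))
for a, b in mirrors:
    print(f"mirror {a} {b}")  # e.g. arguments you'd feed to `zpool create`
```

The same layout principle applies to any striped-mirror pool; the only requirement is knowing which physical card each device sits on.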
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yikes, that's pricey. But very cool.
Sonnet targets the Mac market… maybe there's a cheaper option somewhere else, but I don't know of it. After all, a Gen 3 48-lane switch is $250, right?
 