Safe to use a Dual M.2 PCIE Adapter?

path

Dabbler
Joined
Jul 5, 2013
Messages
46

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
That device will allow for one NVMe drive at x2 speed and the other slot as SATA only, at SATA 3 speed.

There are other devices available (if your motherboard supports PCIe slot bifurcation) which can allow up to 4 NVMe drives at x4 each in an x16 slot.

Safety is another thing entirely. You're looking at solid state here, so more-or-less no moving parts to fail, so what works should continue to do so until the SSDs wear out. Watch out for cooling, as that part of a case traditionally doesn't have strong airflow, and some NVMe devices can get very hot (and have shorter lives if they get too hot).
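To put rough numbers on the two slots of that adapter: here's a back-of-envelope sketch assuming PCIe 3.0 (8 GT/s per lane, 128b/130b encoding). These are line-rate ceilings only; real-world throughput lands somewhat lower once protocol overhead is counted.

```python
# Link-speed ceilings for the dual adapter: one NVMe slot at x2, one SATA slot.
# Assumes PCIe 3.0; ignores everything beyond 128b/130b line encoding.

GT_PER_LANE = 8e9          # 8 GT/s per PCIe 3.0 lane
ENCODING = 128 / 130       # 128b/130b line encoding

def pcie3_mb_s(lanes: int) -> float:
    """Payload ceiling of a PCIe 3.0 link, in MB/s."""
    return lanes * GT_PER_LANE * ENCODING / 8 / 1e6

nvme_x2 = pcie3_mb_s(2)    # the NVMe slot limited to x2
sata3 = 600.0              # SATA III line-rate ceiling, MB/s

print(f"NVMe at x2 : {nvme_x2:.0f} MB/s")   # ~1969 MB/s
print(f"SATA III   : {sata3:.0f} MB/s")
```

So even capped at x2, the NVMe side still has roughly three times the ceiling of the SATA side.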
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I am using (testing) a 4-way NVMe adapter (from Asus) by bifurcating an x16 slot to x4x4x4x4.
At this point I can say it works - just haven't got around to going any further yet.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Supermicro also makes the


which has a PLX switch on it, doesn't require bifurcation support, and handles up to four NVMe devices on an x8 slot.
How does that work? You have 4 * NVME requiring 4 lanes each = 16 lanes, but you only have an 8 lane card.
If I created a pool with 4 NVME drives on this card, then surely ZFS is going to have an issue with losing contact with the drives (briefly) or having to wait for a drive to become available.
Surely there has to be a tradeoff somewhere.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How does that work? You have 4 * NVME requiring 4 lanes each = 16 lanes, but you only have an 8 lane card.
If I created a pool with 4 NVME drives on this card, then surely ZFS is going to have an issue with losing contact with the drives (briefly) or having to wait for a drive to become available.
Surely there has to be a tradeoff somewhere.

It's a PCIe switch.

Before you say anything, stop for a moment and ponder how PCIe works in a dual CPU system. Surely, you do not think that CPU2 in a dual CPU system cannot reach peripheral devices attached to CPU1's PCIe lanes? (the astute reader will say "or from the PCH") PCIe connectivity is not as straightforward as just "lanes".

People get all bent out of shape over PCIe lanes on cheap single CPU systems because cheap single CPU systems have a limited number of lanes, and most consumers are unwilling to pay significant costs for something like a PLX switch chip or a fancy CPU with lots of lanes.

However, PCIe switches really do exist and they are somewhat analogous to SAS expanders, in that they can take a PCIe lane, or lanes, from the CPU or PCH, and then "switch" traffic as needed to more PCIe devices. This really isn't anything stunning or new; some of us still have cards around with PCI (non-e) bridge chips, and it's a time-honored way to add I/O capacity to a system.

So there is a mild downside to a PLX switch in that there is an aggregate bandwidth limit, and some mild additional latency. However, these tradeoffs can be worth it when the choice is between being able to do the cool thing and NOT being able to do the cool thing. So if you really need four M.2 NVMe Samsung 980 PRO 2TB gumsticks in a single x8 slot, there's a solution for that, and, it's pretty awesome. Not quite as awesome as bifurcation in an x16 slot, but, then again, it doesn't require an x16 slot or bifurcation support.
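The aggregate bandwidth limit mentioned above is easy to quantify. A rough sketch, assuming PCIe 3.0 on both sides of the switch and counting only 128b/130b encoding overhead (the drive counts and widths are those from this thread, not a statement about any specific card):

```python
# Oversubscription math for 4x NVMe behind a PCIe switch on an x8 slot.
# Assumes PCIe 3.0 throughout; ignores protocol overhead beyond encoding.

def pcie3_gb_s(lanes: int) -> float:
    """Payload ceiling of a PCIe 3.0 link, in GB/s."""
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

uplink = pcie3_gb_s(8)       # x8 slot back to the CPU/PCH
per_drive = pcie3_gb_s(4)    # each M.2 gets its own x4 link to the switch
demand = 4 * per_drive       # all four drives bursting at once

print(f"uplink ceiling   : {uplink:.2f} GB/s")    # ~7.88 GB/s
print(f"aggregate demand : {demand:.2f} GB/s")    # ~15.75 GB/s
print(f"oversubscription : {demand / uplink:.1f}:1")
```

So the worst case is 2:1 oversubscription, and that only bites when all four drives are saturating their links at once; each drive still gets a full x4 link to the switch.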
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
4 * NVME requiring 4 lanes each = 16 lanes

Incidentally, you misspelled the words "that would like" as "requiring" ;-) There is no such requirement. I have used x1 in several cases for NVMe SSD just for convenience reasons, and that works fine.
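For a sense of how little an x1 link gives up in practice, here's a quick sanity check, again assuming PCIe 3.0 and counting only line-encoding overhead:

```python
# How much does an NVMe SSD lose on an x1 link? Assumes PCIe 3.0.

def pcie3_mb_s(lanes: int) -> float:
    """Payload ceiling of a PCIe 3.0 link, in MB/s."""
    return lanes * 8e9 * (128 / 130) / 8 / 1e6

x1 = pcie3_mb_s(1)
print(f"NVMe at x1 : {x1:.0f} MB/s")   # ~985 MB/s
print("SATA III   : 600 MB/s")         # line-rate ceiling, for comparison
```

Even at x1, an NVMe drive still clears the SATA III ceiling comfortably, which is why it works fine for convenience setups.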
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

path

Dabbler
Joined
Jul 5, 2013
Messages
46

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Asus Hyper M.2 X16 V2. Only does PCIe 3 - but that doesn't concern me at this point.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Incidentally, you misspelled the words "that would like" as "requiring" ;-) There is no such requirement. I have used x1 in several cases for NVMe SSD just for convenience reasons, and that works fine.
You are of course correct - I use a PCIe single-lane NVMe adapter for testing M.2 NVMe sticks in my other PC (it's the only slot it has left).
 