Any adapters for a x16 PCIe slot split in multiple smaller ones?

rdfreak

Dabbler
Joined
May 22, 2019
Messages
12
I have a single PCIe 3.0 x16 slot, currently occupied by an LSI 9311-8i HBA. The HBA only uses 8 lanes, so I'd like to know whether there's some kind of splitter card (if that's a thing that exists, or even the correct name for it) that occupies the full x16 slot but exposes two x8 slots. One of those would take the HBA, and the other could hold, say, a PCIe x8 to dual M.2 NVMe adapter, so I could use a couple of NVMe SSDs for a ZIL/SLOG or a dedicated pool for Git repos.

Is that doable? According to the mobo's manual (ASRock Rack X570D4I-2T), and its UEFI setup, the x16 slot can be configured in 3 modes: single x16, x8x8, and x8x4x4.
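(For what it's worth, once the slot is switched to x8x8 in the UEFI, something like the following should confirm that both halves enumerate - a Linux-side sketch, with the bus address being a placeholder:)

```shell
# After setting the slot to x8x8 in UEFI, check that both devices
# enumerate and negotiated the expected width. Bus address 01:00.0
# is a placeholder - take the real one from the lspci tree output.
lspci -tv                                              # tree view of the PCIe topology
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'    # look for "Width x8"
```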

(I gotta be honest, this might be more of a "how do I Google this" request, because when I search online I don't get any satisfactory results; yet the mobo's ability to split PCIe lanes makes me think there's enough demand to build such adapters, so I might just be phrasing my search queries incorrectly.)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Welcome.

What you're looking for is done quite often in the rackmount world - your power words for searching are probably "PCIe riser". Your board supports heterogeneous bifurcation (splitting the slot into the unequal x8x4x4 config), so you do have the motherboard for it. But I'm assuming this is a desktop or tower style case, so physically mounting the cards after they've been split out (or mounting the riser itself) is likely going to be your challenge.

A quick search for something more "universal" brought me to this "unique" device:

[Images: riser-left.jpg, riser-right.jpg]

I've never tested it, but allegedly it will split a single x16 slot into an x8 (side) x4 (top, mechanically still PCIe x4) and x4 in M.2 2280 format.

Found the seller through their [H]ardOCP profile - I have no affiliation to them and haven't bought any products.


I can appreciate the mad-scientist engineering here certainly, but it still scares me a bit and doesn't (directly) solve your mounting issue. You'd have to do some creative 3D printing/metalworking/etc to brace it in a desktop/tower, most likely.

Edit: The slot on the top can use a low-profile card directly, and mount on the regular bracket screw, so that would add stability there. But you'd need to use a flexible cable like one of the ADT-TECH cables referenced on the vendor site or similar to get the LSI HBA working off the side port ... and then you're running your HBA through bifurcation off a flexible cable.

Of course, if you're in a rackmount case, you might be able to find a regular angled riser that does the trick; although most of those are designed with a specific board in mind, and I can't find one that matches your board (usually one isn't tucking an ITX board into a rackmount case).

Circling back to the root issue of wanting to mount a couple NVMe SSDs - are you sure your workload would benefit from an SLOG? A git commit shouldn't necessarily need to be synchronous, but for many small files I can understand the desire to go all-flash. Would SATA/SAS SSD be viable for that?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Still digging, it looks like the same seller makes an option ideal for your use case:


[Images: Low_Profile_x8xM2xM2_rev2 photos]

x16 on the bottom, x8 on the top (aligned for low-profile brackets), and an M.2 2280 on each side. The vendor states "22110 is not possible on this revision," so perhaps a revision that handles it is coming. This would let you swap the bracket on your HBA, mount it on top, and attach a pair of M.2 2280 NVMe SSDs, one on each side. You'll want to give the slot a good degree of linear airflow, though, since you're now packing two M.2 SSDs and an HBA into the same area - make sure to keep the thermals under control.

I have to tip my hat to the engineering shown here, it's quite brilliant.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
A quick search for something more "universal" brought me to this "unique" device:

I've never tested it, but allegedly it will split a single x16 slot into an x8 (side) x4 (top, mechanically still PCIe x4) and x4 in M.2 2280 format.
The device works as advertised and I can recommend the seller.
But you'll need a PCIe extension cable for the side slot, further raising the cost, and a bit of fiddling to secure everything in a case (to make it worse, half-height brackets put the screw in a different position than full-height brackets).

The second device found by @HoneyBadger should be an easier fit if you do go this way.
Another option, if the case is large, would be an x8x8 card on an extender cable, laid flush with the motherboard. But if the case is large, selling the mini-ITX motherboard and buying a micro-ATX one with more PCIe slots is the most comfortable option.
 

rdfreak

Dabbler
Joined
May 22, 2019
Messages
12
Yeah, I did come across that x16 to x8 + two M.2 adapter as I opened @HoneyBadger's link - it is indeed perfect. I was so close to just ordering it right away, but I think this will have to wait for next month's salary :D

Also, @HoneyBadger, I should clarify: the SLOG should be useful - it's intended for the main pool of SAS HDDs. I thought I'd partition the two M.2 SSDs so that a small partition on each drive forms a mirrored SLOG vdev, and the two remaining partitions are mirrored as well, albeit for a different pool - Git or whatever. It's just that whatever enterprise M.2 SSDs with power-loss protection I can find tend to be around 960 GB, and that's waaay too much for an SLOG; I just don't want to waste the space of SSDs that expensive. Feel free to comment on whether I'm shooting myself in the foot with this partitioning scheme, but from what I've read, it should be OK.
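The layout I have in mind would look something like this at the command line (FreeBSD/TrueNAS CORE syntax; pool names, labels, and sizes are placeholders, and I'm aware this is not a GUI-supported configuration):

```shell
# Unsupported sketch of the scheme above: a small SLOG partition plus a
# large data partition on each NVMe drive. Names and sizes are placeholders.
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -l slog0 nvd0   # small SLOG slice
gpart add -t freebsd-zfs -l data0 nvd0          # rest of the drive
# (repeat for nvd1 with labels slog1/data1, then:)
zpool add tank log mirror gpt/slog0 gpt/slog1
zpool create fastpool mirror gpt/data0 gpt/data1
```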

(Also please disregard the possible double-post, somehow my incomplete response got submitted before I finished it; I hope a mod removes it)
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
Go for a special vdev for metadata & small files on the NVMe drives instead of an SLOG.
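For context, adding such a vdev is a one-liner, but note that a special vdev is pool-critical (lose it and you lose the pool), which is why it must be mirrored. A sketch with placeholder pool and device names:

```shell
# Sketch: mirrored special vdev for metadata. "tank" and the device
# paths are placeholders. A special vdev is pool-critical - always mirror it.
zpool add tank special mirror /dev/nvd0 /dev/nvd1
# Optionally store small file blocks on it too (per-dataset threshold):
zfs set special_small_blocks=32K tank/git
```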

I just ordered 2x Samsung 960 GB PM983, which are PCIe 3.0 x4 - they will run in a mirror config, have PLP, and a good TBW of 1.4 PB.

I bought them used on eBay for 85€ per drive.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The PM983 unfortunately won't work in that linked adapter, as they're 22110 (too long).

Also, @HoneyBadger I should clarify, the SLOG should be useful, it's intended for the main pool of SAS HDDs.
Understood, but are you doing git commits over NFS or another similar protocol that uses sync writes? If not, then your SLOG will never be used.

Async writes will already be faster than any sync write can go - you'd want to add the SLOG only if you require that data safety and would be bottlenecked by the synchronous write speed of (presumably) spinning HDDs.
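To put numbers on that: an SLOG only ever holds the last few seconds of in-flight sync writes (ZFS commits transaction groups every ~5 s by default), so the capacity needed is tiny. A back-of-the-envelope sketch - the link speed and txg figures here are assumptions, not measurements:

```python
# Rough SLOG sizing: it only needs to hold a couple of transaction
# groups' worth of in-flight sync writes, not hours of data.
def slog_size_gib(link_gbit_per_s: float, txg_timeout_s: float = 5.0,
                  txgs_in_flight: int = 2) -> float:
    """Worst case: the network link saturated for a couple of txg intervals."""
    bytes_per_s = link_gbit_per_s / 8 * 1e9
    return bytes_per_s * txg_timeout_s * txgs_in_flight / 2**30

print(round(slog_size_gib(10), 1))   # 10 GbE: ~11.6 GiB
```

Which is why even a 960 GB drive is massive overkill as a dedicated SLOG.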
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Feel free to comment on whether I'm shooting myself in the foot with these partitioning schemes, but from what I've read, it should be ok.
Using partitions rather than whole drives is NOT a supported configuration, and it will put you in trouble if you ever have to replace a drive—potentially in BIG trouble if you use these partitions for pool-critical purposes such as a "special vdev".

No sync writes = No SLOG. Do you really use, and need, sync writes?
 

rdfreak

Dabbler
Joined
May 22, 2019
Messages
12
Well, I do intend to use NFS for network sharing, and I've often found myself annoyed that transferring small files over my current network setup takes much longer than I have patience for. Big files I can fire and forget - those are usually backups of Blu-rays or CDs anyway, and I can keep myself occupied with something else in the meantime. But working with small files happens when I'm on a task that has my full attention, which is why one of my goals for the NAS I'm building is to handle many small writes over NFS as quickly as possible. I mean, that's why I'm willing to pay extra for enterprise-grade SSDs, much to the chagrin of my wallet.

@Etorix, can you elaborate on why I'd be in big trouble if a drive with a pool-critical partition fails? The way I see it, there will be a redundant drive with the same critical partitions anyway. Plus, rebuilding all the mirrors/partitions shouldn't be that big of a concern, since these will be NVMe drives - worst case, it would probably take minutes to copy 960 GB onto one of them.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
NFS does generate a sync-write workload, and you'll definitely feel the pain on smaller files/transfer sizes if that's kept on.

I see a few options here:

1. Use a couple of small Optane or other dedicated SLOG devices. This accelerates your sync-write workload to the speed of the SLOG device - the 16GB ones bottleneck around 140MB/s, and the 32GB ones are good for roughly double that. The Optane M10s are cheap (well, relatively), will speed up your small-file writes, and installation and replacement of the devices is supported through the GUI. The disadvantage is that it won't help your reads - but if the files are served from ARC, that's moot.

2. Use larger SSDs and manually partition them to do double duty as SLOG and a small separate pool. You'll get the separate pool you want, but you're now sharing the SSDs between the SLOG (a 100% write workload) and another mixed R/W workload. Optane can handle this just fine, but other SSDs tend to show a "bathtub curve" under mixed I/O - they do great at 100% read or 100% write, but put them in the middle at 50/50 or 70/30 either way and you'll get significantly less performance. You're also at risk because the GUI/middleware won't consider the existence of those partitions if you have to replace a drive - basically, it'll be up to you to manually recreate the same partition scheme and manually resilver should a failure occur.

3. Set up a separate dataset for the small files, and if you don't actually need the sync writes, disable them or mount the export asynchronously. Sync writes are generally intended for remote operations that can't be replayed - think a virtual machine image, database action, a modify-in-place - something where you can't just hit "retry" on the copy job or git commit (and we're all using version control anyways, right?) and send the files again. This gives you the absolute fastest speed you can get (because your "write buffer" is just RAM) but introduces the risk of "if you copy a file and then the power immediately goes out, you might need to copy it again."

Personally in a scenario like yours, I'd use Option 3, but that's me accepting the risk/reward tradeoff.
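For concreteness, Options 1 and 3 look roughly like this at the command line (the GUI is the supported route on TrueNAS; the pool and dataset names here are placeholders):

```shell
# Option 1 sketch: a mirrored pair of dedicated SLOG devices.
zpool add tank log mirror /dev/nvd0 /dev/nvd1

# Option 3 sketch: a dataset for the small files with sync writes
# disabled - fastest, but accepts the power-loss window described above.
zfs create tank/smallfiles
zfs set sync=disabled tank/smallfiles
```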
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
A quick search for something more "universal" brought me to this "unique" device:

I've never tested it, but allegedly it will split a single x16 slot into an x8 (side) x4 (top, mechanically still PCIe x4) and x4 in M.2 2280 format.
This is exactly the sort of crazy stuff that can turn a Supermicro X10SDV board into quite a versatile thing... You're giving me all sorts of terrible ideas!
 