Using LSI 2308 (PCIE 3.0 x8) in PCIE 3.0 x1 mode, what will I lose?

devemia

Dabbler
Joined
Mar 5, 2023
Messages
20
I recently got an LSI 2308, and have not had a chance to test it yet. My motherboard only has one PCIe 3.0 x16 slot (no bifurcation), and it will be occupied by an Intel X710-DA2. The other PCIe slots are x16 physically, but they all operate as 3.0 x1 electrically. With that said, if I put my HBA into a 3.0 x1 slot, what are the implications? PCIe 3.0 x1 is only ~8 Gbps (roughly 985 MB/s usable), so there will be performance limitations, but to what extent (e.g., HDD vs SSD, seq vs rand4k, parity check, rebuild speed, etc.)?

And a side question: does anyone know what the slot on this IBM M5110 is for? From a quick lookup, it is for a daughter board, a flash cache module of some sort, but I'm not certain about its functionality and use cases.
(Attached image: 1679210986972.png)
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Obviously, you'll lose bandwidth, and you will be constrained by it if you have more than a handful of HDDs (don't even think of using it for SSDs, plural).
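For a rough sense of scale, here is a back-of-the-envelope sketch (the ~985 MB/s usable figure for a PCIe 3.0 x1 link and the ~250 MB/s per-HDD sequential rate are assumptions; real numbers vary by drive and workload):

Code:
# Sketch: how many HDDs saturate a PCIe 3.0 x1 link during sequential I/O?
X1_USABLE_MBPS = 985   # assumed usable bandwidth of a PCIe 3.0 x1 link, MB/s
HDD_SEQ_MBPS = 250     # assumed outer-track sequential rate per HDD, MB/s

for drives in range(1, 9):
    demand = drives * HDD_SEQ_MBPS
    share = min(100, 100 * X1_USABLE_MBPS / demand)
    print(f"{drives} HDD(s): combined demand {demand} MB/s -> each held to ~{share:.0f}% of full speed")

Around the fourth drive the link becomes the limit, so sequential-heavy operations (scrub, resilver, large copies) throttle first; random HDD workloads push far less data through the link and mostly won't notice.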
The slot for the cache daughter board is a physical sign that your M5110 is not a HBA but a RAID controller. It may, or may not, be possible to lobotomise it into a HBA.

Essentially, you're in need of another motherboard with several PCIe slots that all operate as x4 or wider, and of a proper HBA.
 

devemia

Dabbler
Joined
Mar 5, 2023
Messages
20
Obviously, you'll lose bandwidth, and you will be constrained by it if you have more than a handful of HDDs (don't even think of using it for SSDs, plural).
Unfortunately, all the mATX boards under $200 that I know of do not have enough lanes. Most are x16/x1/x1/x1 and a few are x16/x4/x1/x1. Bifurcation is another option, but there is no mATX case with 5 PCIe slots. It would be great if you could suggest some worthwhile motherboards; used is fine. That aside, assuming:

HDD (~250 MB/s) - Does this mean I can run up to a 4-wide vdev without significantly losing bandwidth?
SSD (~500 MB/s) - This does not really make sense due to the bottleneck, but let's say I have a "slower SSD pool" and do not care about sequential performance; does this bottleneck affect other aspects of the pool (e.g., IOPS, rand4k, etc.)?

The slot for the cache daughter board is a physical sign that your M5110 is not a HBA but a RAID controller. It may, or may not, be possible to lobotomise it into a HBA.

Essentially, you're in need of another motherboard with several PCIe slots that all operate as x4 or wider, and of a proper HBA.
I read over that and also watched the Art Of Server videos before purchasing one. This card should use the LSI 2308. From what I know, this card is already in HBA/IT mode, but I have not tested it yet. At worst, I will return the thing.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You need to look at server motherboards. For Intel, that would be motherboards with C236 (Supermicro X11SS_), C246 (X11SC_), or C256 (X12ST_) chipsets, or possibly W480/W580 (X12SC_) or even W680 (X13SA_, but this is still too new to recommend). Other manufacturers are fine, but Supermicro has the most comprehensive offering.
For AMD that would be AsRockRack X470D4U, X570D4U, B550D4U, B650D4U boards, and there's no real alternative for server AM4/AM5 boards.

HDD (~250 MB/s) - Does this mean I can run up to a 4-wide vdev without significantly losing bandwidth?
Possibly. The question is then: Why use a HBA for a handful of drives when these could be attached directly to the motherboard?
SSD (~500 MB/s) - This does not really make sense due to the bottleneck, but let's say I have a "slower SSD pool" and do not care about sequential performance; does this bottleneck affect other aspects of the pool (e.g., IOPS, rand4k, etc.)?
I'd suspect that the bottleneck has consequences for IOPS too, but I struggle to make sense of "a slower SSD pool".
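To put a number on it, a rough sketch (assuming 4 KiB random reads, ~985 MB/s usable on the x1 link, and a made-up ~75,000 IOPS per SATA SSD purely for illustration):

Code:
# Sketch: can 4 KiB random reads from a few SSDs exceed a PCIe 3.0 x1 link?
X1_USABLE_BPS = 985_000_000   # assumed usable PCIe 3.0 x1 bandwidth, bytes/s
BLOCK_BYTES = 4096            # 4 KiB random-read block size

link_iops_ceiling = X1_USABLE_BPS // BLOCK_BYTES
print(f"Link ceiling at 4 KiB: ~{link_iops_ceiling:,} IOPS")

drives, ssd_iops = 4, 75_000  # hypothetical pool of four SATA SSDs
pool_iops = drives * ssd_iops
limited = pool_iops > link_iops_ceiling
print(f"Pool could deliver ~{pool_iops:,} IOPS -> the x1 link {'is' if limited else 'is not'} the bottleneck")

So at high queue depths even small random reads can run into the x1 ceiling with a few SSDs; at the modest queue depths of a typical home workload, you'd more likely be latency-bound than link-bound.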
 

devemia

Dabbler
Joined
Mar 5, 2023
Messages
20
Possibly. The question is then: Why use a HBA for a handful of drives when these could be attached directly to the motherboard?
I have already used up 8 SATA ports on the MB.
I'd suspect that the bottleneck has consequences for IOPS too, but I struggle to make sense of "a slower SSD pool".
I also think it makes little sense unless someone needs certain aspects of SSD that are not limited by this bottleneck. Hence the question, as I'm interested in learning more about these behaviors.
____
Update: Interestingly, I just updated the BIOS and now see the bifurcation option for my x16 slot (x8/x8 and x8/x4/x4; MSI PRO B550M-VC WIFI, BIOS H60). Not sure if I missed this option before, or if this is a new feature. Now I need to somehow modify my case so it fits more PCIe cards.
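A quick sanity check of the lane budget with x8/x8 (the lane counts below are the cards' native widths, taken as assumptions; a riser/adapter is still needed to physically split the slot):

Code:
# Sketch: splitting the single x16 slot as x8/x8 between the two x8 cards.
cards = {"Intel X710-DA2": 8, "LSI SAS2308 HBA": 8}   # assumed native link widths
bifurcation = [8, 8]                                   # x8/x8 chosen in the BIOS

for (card, wanted), given in zip(cards.items(), bifurcation):
    verdict = "full width" if given >= wanted else f"limited to x{given}"
    print(f"{card}: wants x{wanted}, gets x{given} -> {verdict}")

So with x8/x8, both the NIC and the HBA would keep their full width in the one physical x16 slot.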
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Not knowing your hardware and existing pool layout I could not guess…

So you'd need a bifurcating riser and two extension cables for the two cards, or rather one x16 extension cable and suspend the riser in mid-air below the two half-height cards. (No endorsement of any seller/item, just examples.)

Upgrading to one of the AsRockRack boards (some of which even have 10 GbE on-board) would likely cost more than $200, but keep an eye on the total costs.
 

devemia

Dabbler
Joined
Mar 5, 2023
Messages
20
Not knowing your hardware and existing pool layout I could not guess…
I just updated my signature with the current build. This is mainly for hobby purposes, to get more hands-on experience with TrueNAS, and not for critical workloads by any means.

So you'd need a bifurcating riser and two extension cables for the two cards, or rather one x16 extension cable and suspend the riser in mid-air below the two half-height cards. (No endorsement of any seller/item, just examples.)
Thanks for the suggestions, I will look into them. No need to worry about sharing seller information with me, I ask people the same thing all the time. I'm responsible for my purchase decisions, after all. And what do you mean when saying "...and suspend the riser in mid-air below the two half-height cards."?

Upgrading to one of the AsRockRack boards (some of which even have 10 GbE on-board) would likely cost more than $200, but keep an eye on the total costs.
That was what I looked for at the beginning, but they were around $400+ at the time (and still are on eBay, Amazon, etc.). The total cost of my build at the moment is lower than that. If you know a good source with lower prices, I would love to hear about it. As for older-generation server boards, I will dive into them as time goes on.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I just updated my signature with the current build. This is mainly for hobby purposes, to get more hands-on experience with TrueNAS, and not for critical workloads by any means.
For a hobby, you've gone quite far into the rabbit hole… Your motherboard has only 2 M.2 slots, so these two special vdevs are made of partitions on the same two Optane P1600X, right?
Personal data, say family photos/videos, can be regarded as highly critical in the sense that one would HATE to lose it.

And what do you mean when saying "...and suspend the riser in mid-air below the two half-height cards."?
Assuming that the case has full height slots and that the NIC is half-height, both NIC and HBA cards could be converted to half-height brackets, plugged in the riser and screwed to the case slots. The riser would end up midway between the motherboard and the side/top panel ("below" pictures it with a horizontal motherboard and a top panel), and it would only take a single short x16 extension cable to connect the riser to the x16 PCIe slot. But one may want to secure the riser in place.
 

devemia

Dabbler
Joined
Mar 5, 2023
Messages
20
For a hobby, you've gone quite far into the rabbit hole… Your motherboard has only 2 M.2 slots, so these two special vdevs are made of partitions on the same two Optane P1600X, right?
Personal data, say family photos/videos, can be regarded as highly critical in the sense that one would HATE to lose it.
I have been following home servers in general, and TrueNAS in particular, for years; it's just now that I decided to burn some cash and jump into this hole... Yes, my MB has only 2 M.2 slots, and I use the Optane there for the dedup vdev on the SSD pool. Meanwhile, I use two PCIe-to-M.2 adapters like these in my x1 slots for two other Optane drives (metadata vdev). As metadata is more about IOPS and random performance, I expect it will not exceed the 3.0 x1 speed (~985 MB/s).

I have several backups of my important data, both on-site and off-site, in an easily recoverable manner, so that should be good, though.

Assuming that the case has full height slots and that the NIC is half-height, both NIC and HBA cards could be converted to half-height brackets, plugged in the riser and screwed to the case slots. The riser would end up midway between the motherboard and the side/top panel ("below" pictures it with a horizontal motherboard and a top panel), and it would only take a single short x16 extension cable to connect the riser to the x16 PCIe slot. But one may want to secure the riser in place.
The PCIe bifurcation card + riser cable are too expensive for my taste ($100+). Still under the ASRockRack range, but that price still tastes sour considering it is just an adapter card and a riser cable. Besides, I wonder why there aren't many options out there. NGL, mATX is rather cursed when going to an extreme like this. :eek:
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Yes, my MB has only 2 M.2 slots, and I use the Optane there for the dedup vdev on the SSD pool. Meanwhile, I use two PCIe-to-M.2 adapters like these in my x1 slots for two other Optane drives (metadata vdev). As metadata is more about IOPS and random performance, I expect it will not exceed the 3.0 x1 speed (~985 MB/s).
Thanks for the explanation. You have a safer and better setup than I thought.
This begs the question why you refrained from taking one more step towards following the playbook: ECC memory.

The PCIe bifurcation card + riser cable are too expensive for my taste ($100+). Still under the ASRockRack range, but that price still tastes sour considering it is just an adapter card and a riser cable.
And I provided examples with the cheap Chinese stuff, not the higher-quality ADT-Link extenders or the custom bifurcating risers from Christoph Payne. (Both vendors I can recommend, by the way.)
Little accessories always come with big price tags… This is the price to pay to adapt a gamer-style motherboard (one x16 slot for GPU and lots of x1 for… I don't know what) to server-style use (requiring at least electrical x4 for anything, preferably in x8-x16 mechanical slots or open slots).
Besides, I wonder why there aren't many options out there. NGL, mATX is rather cursed when going to an extreme like this. :eek:
It seems that the consumer market is strongly segmenting into ATX-or-larger on one side and mini-ITX (with ever more stylish and expensive small, or not-so-small, cases) on the other. In the middle, micro-ATX is simply being abandoned in the consumer/gamer market and increasingly reserved for corporate desktops and servers.
Even with ATX motherboards, bifurcated slots from the CPU (x16/x8 + x0/x8) are getting rare in the latest generations. It may be an obsession with feeding x16 to the GPU, or the extra cost of routing a second set of PCIe 5.0 traces to a second slot.
 

devemia

Dabbler
Joined
Mar 5, 2023
Messages
20
This begs the question why you refrained from taking one more step towards following the playbook: ECC memory.
I'm not very interested in using ECC for this build (also more $$$), and I already had a 64 GB pair in storage. Not that I have ever seen a bit flip or a noticeable data-corruption situation. Don't get me wrong though: when I set up storage solutions for my company (Synology NAS, btw), I use ECC, stress-test components, and whatnot. That's an environment where I never want a single hair of risk. For my home build, though, I simply want to try out how far I can go in terms of possible hardware and configurations (on a budget).

For example, my case is a Silverstone SG11, which officially supports 9 SSDs + 3 HDDs. However, I can fit up to 23 SSDs (?) + 4 HDDs with some tricky but secure installation (no zip ties or things like that). All in a 22 L case. That's where the fun comes in; otherwise, I could just get an ASRock X570 Taichi, an ATX board, and a bigger case that fits all my needs. Hopefully, this explains my approach.
Little accessories always come with big price tags… This is the price to pay to adapt a gamer-style motherboard (one x16 slot for GPU and lots of x1 for… I don't know what) to server-style use (requiring at least electrical x4 for anything, preferably in x8-x16 mechanical slots or open slots).
That brings the total cost very close to the price of the X570D4U-2L2T. This makes me wonder if I should return my board and get that one... With two 10 GbE ports, I could also return my X710-DA2 (not yet arrived, and I'm not even sure if they are genuine).
 