Drive Replacement

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
…which is pretty much the information you could not find. The open question is whether I'm lacking in paedagogic skills and/or whether you're lacking in listening skills.
Yes, that lanes thing was not clear to me, but it's all good now. I don't think you're lacking in any skills :) Thanks to you as well :)

If you're not using the convenience of a backplane, it has been suggested quite a few times up-thread to either connect the drives to PCIe slots, using bifurcation and suitable adapters, or to use an add-in card with a PLX switch. (The backplane relies itself on bifurcation.)
There was also a link to actual tests showing the consequences of using a Tri-Mode HBA vs. directly attached drives: Namely, the HBA works fine until it doesn't, latency goes up badly and total throughput quickly hits a hard limit from there, while directly attached drives keep on scaling up, reaching a distinctly higher maximal throughput than the HBA, at a distinctly lower latency.
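To make sure I follow the scaling argument, here is a back-of-the-envelope sketch I put together (the per-lane figures and the PCIe 4.0 x8 uplink are my assumptions, not necessarily the exact cards used in those tests):

```python
# Assumed round figures: PCIe 3.0 ~0.985 GB/s per lane, PCIe 4.0 ~1.97 GB/s per lane.
GBPS_PER_LANE = {"gen3": 0.985, "gen4": 1.97}

def hba_uplink(gen: str, lanes: int) -> float:
    """Hard ceiling imposed by the HBA's host link, in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

def direct_attach(gen: str, drives: int, lanes_per_drive: int = 4) -> float:
    """Aggregate link bandwidth when every drive gets its own x4 connection."""
    return GBPS_PER_LANE[gen] * lanes_per_drive * drives

# 8 U.2 drives behind an assumed PCIe 4.0 x8 Tri-Mode HBA vs. the same drives
# attached directly through bifurcation or a PLX switch:
print(hba_uplink("gen4", 8))       # ~15.8 GB/s, shared by all drives behind the HBA
print(direct_attach("gen4", 8))    # ~63 GB/s of drive-side links, scaling per drive
```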
I can see the difference now. Thank you again for bringing it to my attention. So, a HighPoint 1120 or 1180 would work fine here instead of the Tri-Mode HBA, I guess?

Also, am I good with the SATA SSD on the HBA or does that too require some kind of dedicated AIC?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Probably not - these controllers are not on the supported hardware list for FreeBSD 13.2.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
SATA is SATA: It connects to the chipset or to a SAS HBA. For the latter you want a 9300 (PCIe 3.0) for SSDs but even a 9200 would do for HDDs; anything above the 9300 series is overspending for SATA.
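As a quick sanity check on that sizing (a sketch with assumed round numbers; both HBA families use x8 host links):

```python
# Assumed round numbers: a SATA SSD tops out around 0.55 GB/s; PCIe 3.0 carries
# ~0.985 GB/s per lane and PCIe 2.0 ~0.5 GB/s per lane.
sata_ssds = 8
drive_side = sata_ssds * 0.55      # ~4.4 GB/s from eight SATA SSDs
hba_9300 = 8 * 0.985               # ~7.9 GB/s uplink of a 9300 (PCIe 3.0 x8)
hba_9200 = 8 * 0.5                 # ~4.0 GB/s uplink of a 9200 (PCIe 2.0 x8)
print(drive_side, hba_9300, hba_9200)
# The 9300 comfortably covers the SSDs; a 9200 is borderline for SSDs but
# still plenty for spinning disks.
```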

It's somewhat annoying that Highpoint describes the 1120 and 1180 as "HBAs" when they are PLX switches: they handle NVMe only and cannot be used with SAS/SATA drives. But yes, these are the better alternative to a Tri-Mode HBA for U.2 drives. Pay attention to lanes, though: The 1120 (PLX8747) takes 16 lanes from a x16 slot and distributes them as 4*4; Xeon Scalable boards can usually bifurcate x16 slots all the way to x4x4x4x4, so the only benefit of a PLX switch over a simple adapter (example) would be to offload bifurcation. You'd need two adapters, or two 1120s, in two x16 slots to handle 8 U.2 drives. (Two 1120s in two x8 electrical slots would work, with a reasonable 2:1 oversubscription.)
The 1180 (PLX8749) takes 16 lanes from a x16 slot and serves 8*4 lanes to 8 U.2 drives, a 2:1 oversubscription. A single 1180 in a x16 slot can handle your 8 U.2 drives.
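To spell out the lane bookkeeping (a sketch; lane counts as described above):

```python
# Oversubscription = downstream drive lanes / upstream host lanes.
def oversubscription(host_lanes: int, drives: int, lanes_per_drive: int = 4) -> float:
    return (drives * lanes_per_drive) / host_lanes

# HighPoint 1120 (PLX8747): x16 up, 4 * x4 down -> 1:1, same as plain bifurcation.
print(oversubscription(host_lanes=16, drives=4))   # 1.0

# HighPoint 1180 (PLX8749): x16 up, 8 * x4 down -> 2:1 for eight U.2 drives.
print(oversubscription(host_lanes=16, drives=8))   # 2.0

# Two 1120 cards in x8 electrical slots, four drives each -> also 2:1 per card.
print(oversubscription(host_lanes=8, drives=4))    # 2.0
```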
 
Last edited:

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Probably not - these controllers are not on the supported hardware list for FreeBSD 13.2.
Could this be because these are not "controllers" or "HBAs"? The 1180 datasheet claims compatibility from FreeBSD 12.1, but one has to look at this picture to understand what that card is (PEX = PLX):
SSD6540_Overview_Image2.png

The PLX8749 is not mentioned anywhere in searchable text. The top of the page displays the old OS X "face" logo (first in the row, before Windows and Tux), but macOS is not listed among the supported OSes (FreeBSD is there), and the "System Requirements" entry for macOS is "n/a". I would expect these cards to work out-of-the-box in a Mac Pro with a x16 PCIe slot. Poor documentation in any case.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Thanks! :smile:
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
SATA is SATA: It connects to the chipset or to a SAS HBA. For the latter you want a 9300 (PCIe 3.0) for SSDs but even a 9200 would do for HDDs; anything above the 9300 series is overspending for SATA.
Gotcha

It's somewhat annoying that Highpoint describes the 1120 and 1180 as "HBAs" when they are PLX switches: they handle NVMe only and cannot be used with SAS/SATA drives. But yes, these are the better alternative to a Tri-Mode HBA for U.2 drives. Pay attention to lanes, though: The 1120 (PLX8747) takes 16 lanes from a x16 slot and distributes them as 4*4; Xeon Scalable boards can usually bifurcate x16 slots all the way to x4x4x4x4, so the only benefit of a PLX switch over a simple adapter (example) would be to offload bifurcation. You'd need two adapters, or two 1120s, in two x16 slots to handle 8 U.2 drives. (Two 1120s in two x8 electrical slots would work, with a reasonable 2:1 oversubscription.)
The 1180 (PLX8749) takes 16 lanes from a x16 slot and serves 8*4 lanes to 8 U.2 drives, a 2:1 oversubscription. A single 1180 in a x16 slot can handle your 8 U.2 drives.
Hmm. So, if I have to choose between the HighPoint 1180 and PLX, which one would you recommend?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Could this be because these are not "controllers" or "HBAs"? The 1180 datasheet claims compatibility from FreeBSD 12.1, but one has to look at this picture to understand what that card is (PEX = PLX):
SSD6540_Overview_Image2.png

The PLX8749 is not mentioned anywhere in searchable text. The top of the page displays the old OS X "face" logo (first in the row, before Windows and Tux), but macOS is not listed among the supported OSes (FreeBSD is there), and the "System Requirements" entry for macOS is "n/a". I would expect these cards to work out-of-the-box in a Mac Pro with a x16 PCIe slot. Poor documentation in any case.
Not my expertise. Maybe @Patrick M. Hausen or @danb35 can give some insights.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Etorix just told you these are PCIe switches. I just learned that today, so I thanked him. What was your question again?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Etorix just told you these are PCIe switches. I just learned that today, so I thanked him. What was your question again?
Yep.
So, if I have to choose between the HighPoint 1180 and PLX, which one would you recommend?

I was trying to find another option where I don't have to use cables and the drives attach directly via a PCIe slot. I found this:

I think this one is better than the HighPoint 1180 and PLX. What are your thoughts?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
So, if I have to choose between the HighPoint 1180 and PLX, which one would you recommend?
The HighPoint 1180 is a PLX card. That's literally what Etorix's post, less than an hour ago, is saying.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
No idea. See the hypervisor system in my signature for what I use with U.2 SSDs.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Yes, but I got confused as Patrick mentioned that it won't be supported.
@Patrick M. Hausen first took Highpoint's marketing blurb at face value and thought these cards were actually HBAs, which they aren't. It's fair enough to be confused, and Highpoint is the culprit here.
"PLX card" is a generic term for anything based around one of these PEX/PLX switch chips (of which there are many models, with different numbers of lanes). I have no first-hand experience with Highpoint, but also none with the Linksys card I pointed to (the links to AliExpress are meant as examples, not as an endorsement of a particular model or vendor). Go with whatever you want, possibly including yet another vendor.

I was trying to find another option where I don't have to use cables and the drives attach directly via a PCIe slot. I found this:

I think this one is better than the HighPoint 1180 and PLX. What are your thoughts?
This is another version of a bifurcating adapter, so it requires a x16 PCIe slot which can bifurcate x4x4x4x4 (normally not an issue with Xeon Scalable or EPYC). I can see the appeal of not having cables, though it would mean that adding/removing/changing drives would involve switching off the server and then removing the card from its slot.
Do mind, however, the white power connector at the end: The PCIe slot alone may not supply enough power for four U.2 drives.
And also make sure that the drives get some airflow.

Of course, the bifurcating adapter-with-cables and the PLX card both mean that each U.2 drive needs its own power cable in addition to the data cable, which means that the PSU has to provide enough power connectors (18 HDDs + 8 U.2 = 26 drives, that's a lot even for a big 1000+ W modular ATX PSU).
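For a rough feel of the budget (a sketch with assumed per-drive figures; the drive datasheets have the real numbers, and note that a x16 slot by itself is only specified for 75 W, which is why such cards carry an auxiliary connector):

```python
# Assumed, ballpark per-drive figures in watts; real values come from the datasheets.
HDD_ACTIVE, HDD_SPINUP = 9, 25     # typical 3.5" HDD: ~9 W active, ~25 W at spin-up
U2_ACTIVE = 25                     # typical U.2 NVMe SSD rating

hdds, u2 = 18, 8
steady = hdds * HDD_ACTIVE + u2 * U2_ACTIVE        # ~360 W for the drives alone
worst_case = hdds * HDD_SPINUP + u2 * U2_ACTIVE    # ~650 W if all HDDs spin up at once
print(steady, worst_case)
# CPU, board, fans and HBA/PLX cards come on top; staggered spin-up helps,
# but the number of available power connectors is often the tighter limit.
```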
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
@Patrick M. Hausen first took Highpoint's marketing blurb at face value and thought these cards were actually HBAs, which they aren't. It's fair enough to be confused, and Highpoint is the culprit here.
"PLX card" is a generic term for anything based around one of these PEX/PLX switch chips (of which there are many models, with different numbers of lanes). I have no first-hand experience with Highpoint, but also none with the Linksys card I pointed to (the links to AliExpress are meant as examples, not as an endorsement of a particular model or vendor). Go with whatever you want, possibly including yet another vendor.
Cool. That's what confused me too.

This is another version of a bifurcating adapter, so it requires a x16 PCIe slot which can bifurcate x4x4x4x4 (normally not an issue with Xeon Scalable or EPYC). I can see the appeal of not having cables, though it would mean that adding/removing/changing drives would involve switching off the server and then removing the card from its slot.
Well, the 10Gtek says that it has no bifurcation support, as per the spec. Yes, I understand the complexity involved in drive replacement.

Do mind, however, the white power connector at the end: The PCIe slot alone may not supply enough power for four U.2 drives.
Yes, yes, I saw it. Connecting the PCIe power cable should provide adequate power, right?

And also make sure that the drives get some airflow.
Any idea of the rough temperature under load? I have never used a U.2 drive before.

Of course, the bifurcating adapter-with-cables and the PLX card both mean that each U.2 drive needs its own power cable in addition to the data cable,
The 10Gtek has an additional power supply option, but not the AliExpress one you linked. Maybe it has one and I just couldn't find it.

which means that the PSU has to provide enough power connectors (18 HDDs + 8 U.2 = 26 drives, that's a lot even for a big 1000+ W modular ATX PSU).
Would a 1300W Platinum PSU be sufficient?

BTW, I wanted to ask one more question. I now know that SAS3 has 4 lanes per port and each lane is 12 Gb/s. So, to utilize all the lanes on a card like the 9400-16i, one needs at least 16 lanes on the CPU? Is that true?
 
Last edited:

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Maybe I was a bit snarky. Apologies for that.

I still fail to see how 18 disk drives in total can be a hobbyist project including CNC metal work and custom solutions instead of a proper business case and a budget. Everything I have ever seen that uses more than 4 disks has been "business". And designed accordingly.
Cough
Definitely no business here. I just like shiny toys.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Everything I have ever seen that uses more than 4 disks has been "business"
Heck, I have 30 spinners and 3 SSDs in my NAS, and that's purely personal, home use. It's a Supermicro chassis, though, so I guess that's "business-grade" or something like it.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Heck, I have 30 spinners and 3 SSDs in my NAS, and that's purely personal, home use. It's a Supermicro chassis, though, so I guess that's "business-grade" or something like it.
Yep.

@danb35 Can you confirm the following?

I now know that SAS3 has 4 lanes per port and each lane is 12 Gb/s. So, to utilize all the lanes on a card like the 9400-16i, one needs at least 16 lanes on the CPU? Is that true?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Why would you assume that SAS lanes would have any necessary connection with PCIe lanes?
Umm, because you mentioned the PCIe bus throughput. And if a PCIe device is installed in a PCIe slot, wouldn't it use some of the lanes from the CPU? Sorry, but I'm trying to understand how many CPU lanes I would need to meet the bandwidth requirements.
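Trying to put rough numbers on what I mean (a sketch with assumed figures; as far as I can tell the 9400-16i presents a PCIe 3.1 x8 host interface):

```python
# Assumed round figures: a SAS3 lane is 12 Gb/s with 8b/10b encoding, so ~1.2 GB/s
# usable; a PCIe 3.x lane carries ~0.985 GB/s.
sas_side = 16 * 1.2       # 16 SAS lanes on a 9400-16i -> ~19.2 GB/s on paper
host_side = 8 * 0.985     # its PCIe 3.1 x8 host link  -> ~7.9 GB/s to the CPU
print(sas_side, host_side)
# The card only ever asks the CPU for 8 PCIe lanes, no matter how many SAS lanes
# it fans out; spinning disks won't come close to saturating either side anyway.
```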
 