HBA Card Firmware

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
I fully believe that it is to protect the lucrative SAS business at all costs. PCIe switches have long been widespread, just not high-performance ones for many lanes. A lot of manufacturers could cash in on that and eat into Broadcom's margins.
SAS, meanwhile, has two players: Broadcom and Microchip. The latter is probably a step behind in terms of SATA/SAS gear, with LSI/Avago/Broadcom sweeping up two of the big three OEMs (HPE uses both LSI and what is now Microchip; Dell and Lenovo used LSI pretty much exclusively until recently) and much of the whitebox market. That's a lot of servers.

Let's examine a typical server you may have bought circa 2015, a Dell R630 10-bay. That thing needs one HBA (customized for Dell) and one expander (integrated with the backplane). Retail pricing for a typical RAID controller plus the expander might have been somewhere in the 500-600 buck range; say a third of that is what Broadcom charged Dell for the parts. That's a pretty juicy piece of the action, and there was little that could be done to get away (maybe use fewer disks, and SATA only, but neither of those options was great for a variety of reasons). This R630 had four U.2 tri-mode bays, which was cutting edge at the time and not something you would use willy-nilly.

Fast forward a few years, and in early 2020 you could buy a Dell R6515, same 10-bay form factor. It's still sold with an HBA for SAS, but here's the kicker: Only 8 SATA/SAS disks are supported. They cut the expander. Two bays only do NVMe. Instant savings for everyone and instant sad face for Broadcom execs. And you can even buy it without SAS at all, which is even worse. This is possible because U.2 takes the "we'll tack it onto the side" approach that SAS took with SATA - it's entirely backward-compatible. Just like you could hook up a second SAS port, you can now hook up PCIe and it's independent of the SAS/SATA pins.

Clearly, this scenario was bad for Broadcom and Microchip, so they came up with U.3. Ostensibly, it allows for cheaper backplane PCBs by virtue of needing fewer high-speed differential pairs (by re-using the four existing ones that support the two SAS lanes), more or less halving the count of differential pairs per bay (the clock situation may vary a bit, so the exact count depends on the product). The catch is that the only way it can work is with an expensive tri-mode expander - you could throw away three of four lanes per SATA/SAS device by connecting a controller directly to a bay, but then Broadcom can sell you a monster -32i card just so you can have 8 tri-mode bays. It also means server vendors can't later decide to cut out SAS and still offer SATA from the PCH or SoC, like they can with U.2 - they're stuck with SAS if they want to keep tri-mode U.3 bays.
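To put rough numbers on the pair-count argument, here's a back-of-the-envelope sketch (simplified assumptions: two differential pairs per lane, reference clocks and sidebands ignored, which is exactly where the "exact count depends on the product" caveat lives):

```python
# Differential pairs routed per bay, under simplified assumptions:
# each PCIe or SAS lane needs 1 TX + 1 RX pair; clocks/sidebands ignored.
PAIRS_PER_LANE = 2

u2_pairs = 4 * PAIRS_PER_LANE + 2 * PAIRS_PER_LANE  # dedicated PCIe x4 plus 2 SAS lanes
u3_pairs = 4 * PAIRS_PER_LANE                       # PCIe, SAS, and SATA share the same 8 pairs

print(f"U.2 bay: {u2_pairs} pairs, U.3 bay: {u3_pairs} pairs")
print(f"Reduction: {1 - u3_pairs / u2_pairs:.0%}")  # ~33%; clock pairs push it toward 'roughly half'
```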

tl;dr:
  1. Tri-mode expanders are more expensive than simple PCIe switches;
  2. U.3, tri-mode HBAs, and tri-mode expanders reinforce each other; each makes little sense without the other two;
  3. For Broadcom, SAS is a "money printer go BRRRRR" product line, with likely high margins and widespread adoption.
Thank you for the explanation. That all makes sense :)

Could you please provide your input on the last two questions I asked?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Now that I have this card, and since you've clarified that it should be okay with SAS/SATA devices, I'd like to know: if I have to build another NAS in the future, which HBA card do you suggest? The 9305 or the 9400?
You mean this? Whatever's cheapest or most convenient connector-wise. Don't expect meaningful differences outside of weird edge cases.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
You mean this? Whatever's cheapest or most convenient connector-wise. Don't expect meaningful differences outside of weird edge cases.
Perfect. Thanks a lot man!

I would like to know how much difference there is in IOPS, throughput, and latency for the same SSD on the same hardware, but with one key difference: on one hand it uses the tri-mode adapter, and on the other it uses a PCIe-switch-based card. Would the difference be large enough to be noticeable?

Secondly, would it make sense to use the tri-mode adapter with the NVMe firmware for connecting only NVMe devices like U.2?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I would like to know how much difference there is in IOPS, throughput, and latency for the same SSD on the same hardware, but with one key difference: on one hand it uses the tri-mode adapter, and on the other it uses a PCIe-switch-based card. Would the difference be large enough to be noticeable?
I haven't seen real benchmark numbers. Expect IOPS to drop and latency to climb precipitously. Would it be noticeable? That depends on the use case; I can definitely imagine scenarios either way. Part of the promise of the 9600-series is lower latency throughout the stack, so it was definitely noticed by at least some customers.

Bottom line is that I'm not paying to find out how much crappier it is.
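If someone does want to pay to find out, running something like this fio job on both setups would show it; a sketch assuming fio is installed and /dev/nvme0n1 is a scratch device whose contents you don't care about:

```python
# A sketch for measuring the tri-mode-HBA vs. PCIe-switch difference yourself.
# Assumes fio is installed and the target device holds no data you care about.
import json
import subprocess

def qd1_random_read(device: str) -> tuple[float, float]:
    """Run a 4k QD1 random-read test; return (IOPS, mean completion latency in us)."""
    result = subprocess.run(
        ["fio", "--name=qd1-randread", f"--filename={device}",
         "--rw=randread", "--bs=4k", "--iodepth=1", "--direct=1",
         "--runtime=30", "--time_based=1", "--ioengine=libaio",
         "--output-format=json"],
        check=True, capture_output=True, text=True)
    read = json.loads(result.stdout)["jobs"][0]["read"]
    return read["iops"], read["clat_ns"]["mean"] / 1000  # clat_ns on fio 3.x

# Run once with the SSD behind the tri-mode HBA, once behind the PCIe switch.
# QD1 latency is where any extra translation layer should show up the most.
iops, lat_us = qd1_random_read("/dev/nvme0n1")
print(f"{iops:.0f} IOPS, {lat_us:.1f} us mean completion latency")
```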

As for pure NVMe operation, the answer is "hell no". More power, a higher price tag, worse performance... What would be the benefit?
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
I haven't seen real benchmark numbers. Expect IOPS to drop and latency to climb precipitously. Would it be noticeable? That depends on the use case; I can definitely imagine scenarios either way. Part of the promise of the 9600-series is lower latency throughout the stack, so it was definitely noticed by at least some customers.

Bottom line is that I'm not paying to find out how much crappier it is.
Got it, got it.

Bottom line is that I'm not paying to find out how much crappier it is.
HAHAHA

As for pure NVMe operation, the answer is "hell no". More power, a higher price tag, worse performance... What would be the benefit?
So a PCIe switch is the workaround, right? Only the BCM king benefits ;)

BTW, should I be good with SATA SSDs such as the Intel D3-S4610 with these tri-mode HBAs? Would there be any performance impact?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
No, they're basically the same as Intel PCH SATA, as far as performance goes. SATA just hasn't been very challenging for a decade now.
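For context, here's the back-of-the-envelope math on why SATA stopped being challenging (a sketch; protocol overhead beyond 8b/10b encoding is ignored):

```python
# Why SATA 'hasn't been challenging': the wire itself is the ceiling.
# 6 Gbit/s line rate with 8b/10b encoding leaves 80% as payload bandwidth.
line_rate_gbps = 6.0
payload_bytes_per_s = line_rate_gbps * 1e9 * (8 / 10) / 8
print(f"SATA III ceiling: ~{payload_bytes_per_s / 1e6:.0f} MB/s")  # ~600 MB/s
# Any decent SSD controller from the last decade can saturate that, so
# HBA vs. PCH barely matters for SATA performance.
```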
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
No, they're basically the same as Intel PCH SATA, as far as performance goes. SATA just hasn't been very challenging for a decade now.
Sounds good! So NVMe is the concern with these tri-mode adapters?

And would there be any issue if I connect a few SATA SSDs/HDDs from the HBA and a few from the Intel PCH?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
None at all

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
None at all
Bingo!

For the storage vdevs, what kind of SATA SSDs do you recommend? TLC/QLC/SLC/MLC, and with DRAM cache or DRAM-less?

Is there any particular model you'd suggest with good endurance and high IOPS?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
For the storage vdevs, what kind of SATA SSDs do you recommend?
None, the SATA SSD market is down to bottom-feeding trash and ludicrously-expensive enterprisey things.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
None, the SATA SSD market is down to bottom-feeding trash and ludicrously-expensive enterprisey things.
Hmm, I see. With the introduction of NVMe Gen4, that's the state of things.

So, you recommend NVMe then? If so, U.2 or M.2?

And TLC/QLC/SLC/MLC, with DRAM cache or DRAM-less?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
DRAM cache may not make a practical difference for NAS use.
SLC, if available, is going to be ludicrously expensive.
QLC may have endurance issues and/or fall into "bottom-feeding trash".
So I think we're down to MLC/TLC, whatever comes at an acceptable price. Most likely TLC.
For my own flash pool, I have accumulated second-hand enterprise SATA SSDs (a mix of Micron 5100 Pro, Samsung PM863 and Intel DC S4500, all in 3.84 TB).

Ask for health reports before buying. And don't overthink it. :wink:
A SATA SSD pool has no long-term upgrade path: newer, higher-capacity drives are NVMe.
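If you want to script the health check, here's a minimal sketch using smartctl's JSON output (smartmontools 7+); the wear-attribute IDs vary by vendor, so the set below is an assumption to adapt per model:

```python
# A sketch for pulling wear/endurance attributes out of a used SATA SSD.
# Assumes smartmontools 7+ (for --json); attribute IDs vary by vendor
# (233 on many Intels, 177 on Samsung, 202 on Micron), so adapt the set.
import json
import subprocess

WEAR_ATTR_IDS = {177, 202, 231, 233}  # assumption: common wear-indicator IDs

def wear_report(device: str) -> None:
    data = json.loads(subprocess.run(
        ["smartctl", "--json", "-A", device],
        capture_output=True, text=True).stdout)
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr["id"] in WEAR_ATTR_IDS:
            print(f'{attr["id"]:>3} {attr["name"]}: '
                  f'value={attr["value"]} raw={attr["raw"]["string"]}')

wear_report("/dev/sda")
```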
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
DRAM cache may not make a practical difference for NAS use.
SLC, if available, is going to be ludicrously expensive.
QLC may have endurance issues and/or fall into "bottom-feeding trash".
So I think we're down to MLC/TLC, whatever comes at an acceptable price. Most likely TLC.
For my own flash pool, I have accumulated second-hand enterprise SATA SSDs (a mix of Micron 5100 Pro, Samsung PM863 and Intel DC S4500, all in 3.84 TB).

Ask for health reports before buying. And don't overthink it. :wink:
A SATA SSD pool has no long-term upgrade path: newer, higher-capacity drives are NVMe.
Thanks for your input.

I'm planning a flash-based NAS, so what do you recommend for NVMe? U.2 or normal consumer-grade M.2?
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
So, you recommend NVMe then? If so, U.2 or M.2?
Enterprise U.2 drives are the best value. You can find them used for decent prices, and most have anywhere from high (3 DWPD) to insane (10 or more DWPD) endurance, so even at 75% health they have a lot of life left. In addition, because they are larger than M.2, they can hold a lot more flash: 30TB or so is the top end, with 6.4TB easy to find.
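To put rough numbers on that endurance (a sketch assuming the usual 5-year warranty window that DWPD ratings are quoted against):

```python
# Rated endurance behind "3 DWPD" on a 6.4 TB drive (assumes the usual
# 5-year warranty window that DWPD ratings are quoted against).
capacity_tb = 6.4
dwpd = 3
warranty_years = 5

tbw = capacity_tb * dwpd * 365 * warranty_years
print(f"Rated endurance: ~{tbw:,.0f} TB written")    # ~35,000 TB
print(f"Left at 75% health: ~{tbw * 0.75:,.0f} TB")  # ~26,000 TB
# Even a NAS writing 1 TB/day would take ~70 years to burn the remainder.
```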

Enterprise M.2 drives are insanely expensive, because every single new motherboard made today has a slot that supports them, and 8TB is the most storage you'll find on one.

Unfortunately, if you have a 2U chassis that doesn't specifically have U.2 bays, you'll have to find someplace to put the disks (and use cables, which add a point of failure and possible noise on the signal), as a U.2 add-in card is too tall to fit. For a 3U (or better, 4U) chassis, you have a variety of add-in card options.

But, I'd only use NVMe of any kind as transient storage (cache, SLOG, etc.). If you want to have an all-flash pool, enterprise SATA SSDs are the way to go.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Enterprise U.2 drives are the best value. You can find them used for decent prices, and most have anywhere from high (3 DWPD) to insane (10 or more DWPD) endurance, so even at 75% health they have a lot of life left. In addition, because they are larger than M.2, they can hold a lot more flash: 30TB or so is the top end, with 6.4TB easy to find.
Very good point, and I was actually looking for one. Any recommendations for a particular model I should look for?

Enterprise M.2 drives are insanely expensive, because every single new motherboard made today has a slot that supports them, and 8TB is the most storage you'll find on one.
Indeed!

Unfortunately, if you have a 2U chassis that doesn't specifically have U.2 bays, you'll have to find someplace to put the disks (and use cables, which add a point of failure and possible noise on the signal), as a U.2 add-in card is too tall to fit. For a 3U (or better, 4U) chassis, you have a variety of add-in card options.
Unfortunately, I have a tower chassis. I plan on a rack chassis, but not anytime soon. I do have a couple of questions though:

What do you mean by a point of failure and possible noise on the signal?

Which U.2 card do you recommend for 4/8/16 U.2 disks?

Should I go for U.2 or U.3? Of course, I'm aware that U.3 is expensive, but just curious.

Also, for U.2 NVMe drives, do you recommend PCIe-based switch cards such as PLX, or can the new tri-mode adapters work?

But, I'd only use NVMe of any kind as transient storage (cache, SLOG, etc.). If you want to have an all-flash pool, enterprise SATA SSDs are the way to go.
Got it. I wanted to know whether a metadata vdev is beneficial for a SATA SSD and/or NVMe pool.

And any recommendations for a 2TB/4TB SATA SSD?
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
I have a variety of U.2 drives, and Intel seems to run the coolest, even at full load, without any loss of speed (throttling, etc.).

A "tower" chassis is generally between 3U and 4U, but if you can put in full-height cards and there is some room above the top of the slot, it's enough for any U.2 add-in card. I have personally tested both this 2x (which requires bifurcation on the motherboard) and this 1x.

Unlike SATA or SAS, the cable to a U.2 drive is an extension of your PCIe bus. Timing and noise can be an issue.

There is no real value in having more than a couple of U.2 drives...you don't need that speed for the main pool. Any card that handles more than 4x U.2 disks has to have a PEX and some cabling system, and then you have to match the bay for the U.2 to the cable system. This is a PITA unless you buy a server that has it all set up already (chassis, motherboard, PEX card, cables, etc.).
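As rough math behind "a couple is plenty" (a sketch assuming ~7 GB/s per Gen4 x4 drive and line-rate networking; real numbers are lower once protocol overhead and ZFS are involved):

```python
# How many Gen4 x4 NVMe drives it takes to saturate common NIC speeds
# (assumes ~7 GB/s per drive and line-rate networking; real numbers
# are lower once protocol overhead and ZFS are involved).
import math

drive_gbit = 7 * 8  # one Gen4 x4 NVMe drive, in Gbit/s
for nic_gbit in (10, 25, 40, 100):
    drives = math.ceil(nic_gbit / drive_gbit)
    print(f"{nic_gbit:>3} GbE: saturated by ~{drives} drive(s)")
```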

Any SATA SSD with 3 DWPD will do fine, but some people like models for which firmware upgrades are easy to get. For that, Intel (now Solidigm) is good, but there are others. Spend some time at ServeTheHome and you'll see a lot more about the details of such hardware.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
I have a variety of U.2 drives, and Intel seems to run the coolest, even at full load, without any loss of speed (throttling, etc.).
Wow. This was new to me!

A "tower" chassis is generally between 3U and 4U, but if you can put in full-height cards and there is some room above the top of the slot, it's enough for any U.2 add-in card. I have personally tested both this 2x (which requires bifurcation on the motherboard) and this 1x.
Thank you for the link!

What do you think about these two?
10Gtek S94N2X8
10Gtek S94N416

Are they good enough? Would they handle the drive weight and deliver full speed?

Unlike SATA or SAS, the cable to a U.2 drive is an extension of your PCIe bus. Timing and noise can be an issue.
Timing I understand, but what do you mean by noise here? Sorry, I'm quite new to the whole U.2 thing.

There is no real value in having more than a couple of U.2 drives...you don't need that speed for the main pool.
Not even for a 100GbE network setup? BTW, what kind of storage do you recommend for 40GbE/100GbE?

Any card that handles more than 4x U.2 disks has to have a PEX and some cabling system, and then you have to match the bay for the U.2 to the cable system.
Gotcha!

This is a PITA unless you buy a server that has it all set up already (chassis, motherboard, PEX card, cables, etc.).
Yes, dear friend. I'm already sad about this, but hopefully in the future for sure.

Any SATA SSD with 3 DWPD will do fine, but some people like models for which firmware upgrades are easy to get. For that, Intel (now Solidigm) is good, but there are others. Spend some time at ServeTheHome and you'll see a lot more about the details of such hardware.
Thank you for your suggestions. Really appreciate that.

I'm getting a couple of cheap disks from my friend, who is retiring his old hardware. He has the D3-S4510/D3-S4610, the SanDisk X400, and the WD SA510. Of these, the SanDisk X400 has the highest random-read IOPS, and the WD SA510 has the highest random-write IOPS. I'm not sure about the latency of these drives, though. Which one do you recommend, as these are my options for now?

Lastly, I wanted to know whether a metadata vdev is beneficial for a SATA SSD and/or NVMe pool.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Mind that Intel, at least in my experience, is prone to rather "odd" firmware issues. While the drives are not in any way prone to losing data, I had a whole set of SSDs where only some would report data errors via SMART counting up from zero, while others instead counted down from 2^32. That easily freaks out monitoring systems and their operators (me).

For my particular set of drives, some counted correctly and some didn't, although all were running the same firmware version (0170). Possibly the bug depended on the firmware version the drives were initially powered on with. Weird.

After an update to 0184, all report 0 errors, and I expect them to count up from now on.
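For anyone monitoring such drives, the down-counting values look like an unsigned 32-bit artifact. A hypothetical normalization a monitoring check could apply (the function name and threshold are my own sketch, not anything from Intel):

```python
# Hypothetical normalization for the down-counting drives: treat values
# near 2**32 as "2**32 minus the real count", which is what a mangled
# unsigned 32-bit counter looks like.
def normalize_error_count(raw: int) -> int:
    if raw > 2**31:          # assumption: no sane drive has 2 billion errors
        return 2**32 - raw   # undo the buggy "counting down from 2^32"
    return raw

assert normalize_error_count(0) == 0
assert normalize_error_count(2**32 - 3) == 3  # firmware reported "3 errors"
```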

Short version: for Intel drives, always make sure to run the latest firmware. The SSD division has been spun off (or sold?) as "Solidigm", so that's where you will find the update tools now.
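For scripting the check across several drives, a minimal sketch using the Intel-era intelmas CLI (Solidigm's SST replacement offers equivalent show/load commands, but I'm not reproducing its exact syntax here):

```python
# A minimal sketch for checking/updating firmware across Intel-era drives
# with the intelmas CLI; Solidigm's SST tool has equivalent show/load
# commands (exact SST syntax not reproduced here).
import subprocess

# List detected drives, including their current firmware versions:
subprocess.run(["intelmas", "show", "-intelssd"], check=True)

# Stage a firmware update on drive index 0 (the tool prompts before flashing):
subprocess.run(["intelmas", "load", "-intelssd", "0"], check=True)
```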

HTH,
Patrick
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Timing I understand, but what do you mean by noise here? Sorry, I'm quite new to the whole U.2 thing.
Quality of the cables, connectors, backplanes - noise in the electrical signals caused by a low-quality "something".

BTW, given the current state of hardware offerings, I would never try to build a U.2- or U.3-capable system from scratch, but would start with a Supermicro barebone with slots in the front, backplane, connectors to the mainboard ... all set and done. Just plug in your SSDs.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Mind that Intel, at least in my experience, is prone to rather "odd" firmware issues.
I'd like to add that you're absolutely right here. My friend gave me some D3-S4510s to test how much speed I could get in my pool. There were six of them, 960GB each, and when I tested them raw, without TrueNAS, the speeds were lower than expected, so I went and downloaded the latest firmware. Guess what? After the power cycle, every drive's health dropped to 90% - a 10% drop on every SSD. One SSD was at around 80% health, and it went down to 70%. I'm not sure whether it was because they're OEM drives or what. The drives were updated on internal 6Gb/s onboard SATA ports.

While the drives are not in any way prone to losing data, I had a whole set of SSDs where only some would report data errors via SMART counting up from zero, while others instead counted down from 2^32. That easily freaks out monitoring systems and their operators (me).
Dang. That is SO scary. May I know which model it was? Also, OEM or retail? And did you find any workaround for it?

For my particular set of drives, some counted correctly and some didn't, although all were running the same firmware version (0170). Possibly the bug depended on the firmware version the drives were initially powered on with. Weird.
Holy cow. Something similar happened to the drives I mentioned above ;(

After an update to 0184, all report 0 errors, and I expect them to count up from now on.
Interesting. I'll have to check my drives against both firmware versions.

Short version: for Intel drives, always make sure to run the latest firmware. The SSD division has been spun off (or sold?) as "Solidigm", so that's where you will find the update tools now.
Yes, and the NUC business went to ASUS, right?
 