HBA Card Firmware

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Neither, it's a SAS host controller IC that can take IT/IR firmware or MegaRAID, and its behavior will change accordingly.
Hmm. So is it safe to use in TrueNAS with either an HDD or a SATA SSD pool? Or am I going to run into any kind of performance issue or lose any functions/features?

As for a mystery cache, it could be a reporting bug, host memory via DMA (in the vein of NVMe's HMB), DRAM (which the HBAs do include; PowerPC cores would not be useful without DRAM), or possibly something weird.
I'll provide more info on that.

If you have IT firmware and the card is working, don't worry about the mystery cache.
Cool. Does the PCH also include such memory?
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
Well, I don't think PLP is necessary for data vdevs when it comes to SSDs. But for SLOG I guess it is. Not sure if this is the case with L2ARC and/or metadata. Can anyone confirm this?
PLP is less necessary if you have redundancy, but it never hurts.

Yeah, yeah. The enterprise ones are always better. They cost a lot more ;)
In general, enterprise drives cost far less if you are building this for home use, since you can buy used fairly safely. One primary reason is endurance, and the other is that actual enterprise users (i.e., companies) often upgrade hardware long before it is no longer useful. The third reason is that most of the enterprise drives purchased are beyond the actual spec that was needed for the workload.

Basically, a drive rated for 3 drive writes per day over a 5-year lifetime gets put into a workload where it writes 1 DWPD, and then is replaced after 3 years because it's just not big enough any more, but the rest of the hardware is fine. This means that the drive still has about 80% of its rated write endurance left when it is retired. These are the drives you see on eBay when the listing says "237 sold, more than 10 available". This drive is perfect for a home server, as even if it continues to write 1 DWPD, it should last 10 more years.
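To make the arithmetic concrete, here's a quick sketch of the endurance math for that hypothetical drive (the 3 DWPD / 5-year rating and the 1 DWPD workload are just the example numbers above, not any particular model):

```python
# Back-of-the-envelope endurance math for the hypothetical drive above:
# rated 3 DWPD for 5 years, actually written at 1 DWPD and retired after 3 years.

rated_dwpd = 3          # drive writes per day the drive is rated for
rated_years = 5         # rated endurance period
actual_dwpd = 1         # real workload
years_in_service = 3    # retired after this long

rated_endurance = rated_dwpd * rated_years * 365   # total rated drive writes
used = actual_dwpd * years_in_service * 365        # drive writes actually consumed

remaining = 1 - used / rated_endurance
years_left_at_1dwpd = (rated_endurance - used) / (actual_dwpd * 365)

print(f"Endurance used: {used / rated_endurance:.0%}")        # 20%
print(f"Endurance remaining: {remaining:.0%}")                # 80%
print(f"Years left at 1 DWPD: {years_left_at_1dwpd:.0f}")     # 12
```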

Even though the drive was replaced because it was "too small", enterprise drives are manufactured with far larger max capacities than consumer drives. For example, if you only have a standard motherboard and want an NVMe, you need M.2, and that tops out at 8TB, and even those are rare. Once you move to enterprise drives, U.2 gives you drives up to 30TB (last I heard...it's probably bigger now). And, you can use 12Gbps SAS SSDs, which are all enterprise drives.
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
PLP is less necessary if you have redundancy, but it never hurts.
I thought it was only mandatory for SLOG.

In general, enterprise drives cost far less if you are building this for home use, since you can buy used fairly safely. One primary reason is endurance, and the other is that actual enterprise users (i.e., companies) often upgrade hardware long before it is no longer useful. The third reason is that most of the enterprise drives purchased are beyond the actual spec that was needed for the workload.

Basically, a drive rated for 3 drive writes per day over a 5-year lifetime gets put into a workload where it writes 1 DWPD, and then is replaced after 3 years because it's just not big enough any more, but the rest of the hardware is fine. This means that the drive still has about 80% of its rated write endurance left when it is retired. These are the drives you see on eBay when the listing says "237 sold, more than 10 available". This drive is perfect for a home server, as even if it continues to write 1 DWPD, it should last 10 more years.

Even though the drive was replaced because it was "too small", enterprise drives are manufactured with far larger max capacities than consumer drives. For example, if you only have a standard motherboard and want an NVMe, you need M.2, and that tops out at 8TB, and even those are rare. Once you move to enterprise drives, U.2 gives you drives up to 30TB (last I heard...it's probably bigger now). And, you can use 12Gbps SAS SSDs, which are all enterprise drives.
You're on point here. I fully agree with you!
 

Fastline

Patron
Joined
Jul 7, 2023
Messages
358
Can someone confirm whether the SK Hynix PC801 and WD SN810 are safe to use in a data vdev for TrueNAS? Both are OEM drives. The WD one has a low TBW rating, but I have a couple of them lying around here, so I wanted to know your thoughts on them.

Secondly, are the S94N1X4 and S94N416 from 10Gtek good U.2 cards and safe to use? I don't have a backplane chassis for now and don't want to go the TriMode HBA route because of the extra cables.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
Secondly, are the S94N1X4 and S94N416 from 10Gtek good U.2 cards and safe to use?
As with all things, the best way to know is to test for yourself. Just because a device works for someone else doesn't mean it will work in your config.

Because I spend far, far more on drives than I do on the cards that connect them, I just buy cards and test. I connect drives (usually the maximum the card can support if it has a fixed limit, like a PCIe-to-U.2 carrier) and then run badblocks on them. I've got about 5 different PCIe-to-M.2 cards and a couple of PCIe-to-U.2 carriers sitting in my box because I bought many different brands at the same time and tested them. At PCIe 3.0 speed, I haven't found a single card that had any issues.
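If you want to script that burn-in, here's a minimal sketch of the idea; the device paths are placeholders, and the destructive badblocks -w test erases everything on the drive, so only point it at disks with nothing you care about:

```python
# Minimal sketch of a destructive badblocks burn-in for a batch of new drives.
# WARNING: "badblocks -w" overwrites the entire device. The device paths below
# are placeholders; adjust them for your own system before running.
import os
import subprocess

DRIVES = ["/dev/sdx", "/dev/sdy"]  # hypothetical device paths

for dev in DRIVES:
    logfile = f"badblocks-{os.path.basename(dev)}.log"
    print(f"Testing {dev} (any bad blocks get logged to {logfile}) ...")
    # -w: destructive write/read test, -s: show progress, -v: verbose,
    # -b 4096: use a 4 KiB block size, -o: write bad block numbers to a file
    subprocess.run(["badblocks", "-b", "4096", "-wsv", "-o", logfile, dev])
    with open(logfile) as fh:
        bad = fh.read().split()
    print(f"{dev}: {len(bad)} bad blocks recorded")
```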
 

tdampier

Cadet
Joined
Feb 26, 2024
Messages
8
I fully believe that it is to protect the lucrative SAS business at all costs. PCIe switches have long been widespread, just not high-performance ones for many lanes. A lot of manufacturers could cash in on that and eat into Broadcom's margins.
SAS, meanwhile, has two players: Broadcom and Microchip. The latter is probably a step behind in terms of SATA/SAS gear, with LSI/Avago/Broadcom sweeping up two of the big three OEMs (HPE uses both LSI and what is now Microchip; Dell and Lenovo used LSI pretty much exclusively until recently) and much of the whitebox market. That's a lot of servers.

Let's examine a typical server you may have bought circa 2015, a Dell R630 10-bay. That thing needs one HBA (customized for Dell) and one expander (integrated with the backplane). Retail pricing for a typical RAID controller plus the expander might have been somewhere in the 500-600 buck range; say a third of that is what Broadcom charged Dell for the parts. That's a pretty juicy piece of the action, and there was little that could be done to get away (maybe use fewer disks, and SATA only, but neither of those options was great for a variety of reasons). This R630 had four U.2 tri-mode bays, which was cutting edge at the time and not something you would use willy-nilly.

Fast forward a few years, and in early 2020 you could buy a Dell R6515, same 10-bay form factor. It's still sold with an HBA for SAS, but here's the kicker: Only 8 SATA/SAS disks are supported. They cut the expander. Two bays only do NVMe. Instant savings for everyone and instant sad face for Broadcom execs. And you can even buy it without SAS at all, which is even worse. This is possible because U.2 takes the "we'll tack it onto the side" approach that SAS took with SATA - it's entirely backward-compatible. Just like you could hook up a second SAS port, you can now hook up PCIe and it's independent of the SAS/SATA pins.

Clearly, this scenario was bad for Broadcom and Microchip, so they came up with U.3. Ostensibly, it allows for cheaper backplane PCBs by virtue of needing fewer high-speed differential pairs (by re-using the four existing ones that support the two SAS lanes), more or less halving the count of differential pairs per bay (the clock situation may vary a bit, so the exact count depends on the product). The catch is that the only way it can work is with an expensive tri-mode expander - you could throw away three of four ports per SATA/SAS device by connecting a controller directly to a bay, but then Broadcom can sell you a monster -32i card just so you can have 8 tri-mode bays. It also means server vendors can't later decide to cut out SAS and still offer SATA from the PCH or SoC, like they can with U.2 - they're stuck with SAS if they want to keep tri-mode U.3 bays.

tl;dr:
  1. Tri-mode expanders are more expensive than simple PCIe switches.
  2. U.3, tri-mode HBAs, and tri-mode expanders reinforce each other; each makes little sense without the other two.
  3. For Broadcom, SAS is a "money printer go BRRRRR" product line, with likely high margins and widespread adoption.

Do we feel that the 9500-8i, being a true Gen4 card that increases bandwidth by supporting the full 12 GB/s across 8 SAS/SATA drives, is worth it? Or do we still recommend the 9300-8i/9400-8i Gen3 cards, on the grounds that their limit of about 8 GB/s across 8 SAS/SATA drives is enough?
Note: Of course this is theoretical, and there is overhead, so you might get 10-11 GB/s on Gen4 instead of 6-7 GB/s on Gen3.

However, if you convert from SAS/SATA to NVMe drives over time as they become cheaper, don't the legacy 9300-8i/9400-8i cards become a bottleneck?

In other words, do you future-proof your initial installation up front to support converting to all NVMe by using the 9500-8i, or do you just wait?

Or are we saying that a single 9300-8i Gen3 card would still support 3-4 NVMe drives, and to just add more 9300-8i/9400-8i Gen3 controllers as needed?

Lastly, if you were building a new TrueNAS SCALE server with SATA/SAS drives, would you use a 9300-8i, a 9400-8i, or a 9500-8i controller?

I am building a new implementation and will be using a Gen4 motherboard. Also, most onboard HBA controllers (like the 3038) and SATA controllers usually share their connection with other channels/devices on the bus. So I'm definitely going to put HBAs in non-shared slots and not use the motherboard ports for the main drives in ZFS.

thanks
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I've seen TrueCommand consistently report a read speed of 8-9 GiB/s while scrubbing a pool of 16 SATA SSDs (2*8-wide raidz2) attached to a 9305-16i. Not sure whether this is raw throughput (compressed data) or decompressed, but it is in any case faster than I can actually use.
Or are we saying that a single 9300-8i Gen3 card would still support 3-4 NVMe drives, and to just add more 9300-8i/9400-8i Gen3 controllers as needed?
You do not attach U.2 drives to a 9300 card, and not even to a TriMode HBA if you care about performance: use PCIe switches instead.
 

Zedicus

Explorer
Joined
Aug 1, 2014
Messages
51
You do not attach U.2 drives to a 9300 card, and not even to a TriMode HBA if you care about performance: use PCIe switches instead.
Mostly true, with a couple of additional points. The tri-mode cards do use PCIe switches, BUT they have a limited number of lanes to begin with, and the 9400 series is only a PCIe Gen 3.1 part. If you need bandwidth, the 9500 series starts with the same number of lanes but is PCIe Gen 4.0 compliant, so it doubles the available bandwidth as long as the entire chain, down to the NVMe drive, is 4.0.

Really, though, that is outside the scope of 99% of people here. The 9300-8i or 9305-16i (AVOID the 9300-16i cards) are about the sweet spot for pricing, unless you just happen across a fire sale from someone gutting newer cards out of OEM servers on thiefbay.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Do we feel that the 9500-8i, being a true Gen4 card that increases bandwidth by supporting the full 12 GB/s across 8 SAS/SATA drives, is worth it?
No.
However, if you convert from SAS/SATA to NVMe drives over time as they become cheaper, don't the legacy 9300-8i/9400-8i cards become a bottleneck?
The 9300s don't support NVMe at all and the 9400s might as well not support NVMe, with how crap the implementation is. So this is a fantasy scenario.
In other words, do you future-proof your initial installation up front to support converting to all NVMe by using the 9500-8i, or do you just wait?
There is no scenario in which attaching NVMe disks to a SAS HBA makes sense, outside of convoluted setups designed to pad Broadcom's profits.
Lastly, if you were building a new TrueNAS SCALE server with SATA/SAS drives, would you use a 9300-8i, a 9400-8i, or a 9500-8i controller?
Whatever's cheapest, with a preference for the 9300.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
Do we feel that the 9500-8i, being a true Gen4 card that increases bandwidth by supporting the full 12 GB/s across 8 SAS/SATA drives, is worth it? Or do we still recommend the 9300-8i/9400-8i Gen3 cards, on the grounds that their limit of about 8 GB/s across 8 SAS/SATA drives is enough?
Unless you are using 12Gbps SAS SSDs, 8x drives can't saturate a PCIe 3 x8 connection.

PCIe 3 x8 is about 8000 MBytes/sec. A single drive of spinning rust maxes out well below 300 MB/sec, so the PCIe bus can handle at least 27 drives. SATA SSDs max out at 550 MB/sec, so the PCIe bus can handle about 14 of them at full speed.
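For anyone who wants to check the arithmetic, here's the same calculation spelled out as a quick sketch (the per-drive figures are the rough maximums quoted above, not measurements of any particular drive):

```python
# Rough check of how many drives a PCIe 3.0 x8 HBA can feed at full speed.
PCIE3_X8_MB_S = 8000  # ~8 GB/s usable on a PCIe 3.0 x8 link

drive_types = {
    "HDD (spinning rust)": 300,   # MB/s, generous sequential maximum
    "SATA SSD": 550,              # MB/s, SATA 6 Gb/s ceiling
    "12 Gb/s SAS SSD": 1200,      # MB/s, per-drive 12 Gb/s link rate
}

for name, per_drive in drive_types.items():
    limit = PCIE3_X8_MB_S / per_drive
    print(f"{name}: ~{limit:.1f} drives before the PCIe link is the bottleneck")

# HDD: ~26.7, SATA SSD: ~14.5, 12 Gb/s SAS SSD: ~6.7 -- which is why only
# SAS SSDs (or NVMe) can saturate the slot with just 8 drives.
```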

If you connect NVMe drives to a TriMode card, you'll get the same performance as 12Gbps SAS SSDs. But, why buy NVMe if you are going to limit the speed so much by connecting it via a TriMode card?
 

tdampier

Cadet
Joined
Feb 26, 2024
Messages
8
All good inputs, thanks.

  1. Should you use an HBA for NVMe drives? No
    • For NVMe, use PCIe switches
  2. Are the newer Broadcom/LSI cards worth the extra money? No
    • If you need more capacity, add more 9300-style cards
  3. SAS SSDs would be the fastest option for the HBA cards
    • Would require 12 GB/s of Gen4 bandwidth from the motherboard to the drives

Additional Questions:
  1. If you are adding a bulk of NVMe drives, what PCIe switch has been used successfully under TrueNAS SCALE, if any?
    • What about backplanes that would support this bandwidth?
  2. If I use 12 Gb/s SAS SSDs as an upgrade path, what HBA would be recommended?
    • Multiple 9300-style cards?
    • Others?
    • What 12 Gb/s SAS/SATA backplanes have people been successful with?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Are the newer Broadcom/LSI cards worth the extra money? No
  • If you need more capacity, add more 9300-style cards
Probably expanders rather than additional HBAs...
If you are adding a bulk of NVMe drives, what PCIe switch has been used successfully under TrueNAS SCALE, if any?
  • What about backplanes that would support this bandwidth?
Whatever is reasonable; there are no meaningful compatibility concerns. PCIe switches are not exotic and have long been well supported.
If I use 12 Gb/s SAS SSDs as an upgrade path, what HBA would be recommended?
Why would you? NVMe is cheaper and faster.
 

Zedicus

Explorer
Joined
Aug 1, 2014
Messages
51
The 9400 style is sometimes the lowest cost, and there is some benefit to at least having PCIe Gen 3 support. This card would need its firmware flashed, but at $42 it is at least a contender.

9400 8i

multiples available for slightly more

Power savings is also a factor for me, and a blanket "9300 is the best" is not something I would promote.
 

tdampier

Cadet
Joined
Feb 26, 2024
Messages
8
The 9400 style is sometimes the lowest cost, and there is some benefit to at least having PCIe Gen 3 support. This card would need its firmware flashed, but at $42 it is at least a contender.

9400 8i

multiples available for slightly more

Power savings is also a factor for me, and a blanket "9300 is the best" is not something I would promote.
So what you are saying is that anything with the SAS3408 chip on it can be re-flashed from either HBA or RAID firmware to a 9400 HBA?
Can I just use the stock LSI/Broadcom firmware, or is there some specific version?
Is there any preference regarding brand or reliability?

Also, I would assume that from a DR perspective it would be better to at least split your drives across two HBAs instead of putting them all on one?
 

Zedicus

Explorer
Joined
Aug 1, 2014
Messages
51
So what you are saying is that anything with the SAS3408 chip on it can be re-flashed from either HBA or RAID firmware to a 9400 HBA?
Mostly, yes. Some vendors are easier to deal with. The Lenovo 530 series is pretty easy to crossflash. The Lenovo 430 series already comes configured in HBA mode.

Also, I would assume that from a DR perspective it would be better to at least split your drives across two HBAs instead of putting them all on one?
If you are multi-pathing SAS disks, sure. There are a lot of variables in your statement, though.
 

nabsltd

Contributor
Joined
Jul 1, 2022
Messages
133
The 9400 style is sometimes the lowest cost, and there is some benefit to at least having PCIe Gen 3 support.
The 9300 series is PCIe 3.0, while the 9400 is PCIe 3.1. Although there are some minor improvements in 3.1, most slots that aren't PCIe 4.0 likely are only PCIe 3.0, so you wouldn't get the benefits.

As for power consumption, the 16-port variants of the 9400 are much better than the 9300, but the 8-port versions are essentially the same.
 

Zedicus

Explorer
Joined
Aug 1, 2014
Messages
51
The 9300 series is PCIe 3.0, while the 9400 is PCIe 3.1. Although there are some minor improvements in 3.1, most slots that aren't PCIe 4.0 likely are only PCIe 3.0, so you wouldn't get the benefits.

As for power consumption, the 16-port variants of the 9400 are much better than the 9300, but the 8-port versions are essentially the same.
Even the 8-port cards save a few watts, and on something that runs 24/7, why would you not?
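For what it's worth, the math on a few watts of always-on savings is easy to sketch (the 5 W difference and the $0.30/kWh price here are just illustrative assumptions, not measurements):

```python
# Illustrative cost of a small idle-power difference on an always-on server.
# The 5 W delta and $0.30/kWh electricity price are assumptions, not measurements.
watts_saved = 5
price_per_kwh = 0.30          # local electricity price, USD
hours_per_year = 24 * 365

kwh_per_year = watts_saved * hours_per_year / 1000
print(f"{kwh_per_year:.0f} kWh/year, about ${kwh_per_year * price_per_kwh:.0f}/year saved")
# ~44 kWh/year, roughly $13/year -- small, but free if the cards cost the same.
```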

My point was that, especially for the 8-port cards, there is sometimes $0 difference between a 9300-based and a 9400-based card. And if you need MORE than 8 ports, you really need to pay attention to what model you are getting, not just the price.
 