Broadcom 9600-24i and ICY Dock MB699VP-B V3 Mobile Rack combination running well with TrueNAS Dragonfish Beta 1

rxs0

Cadet
Joined
Feb 21, 2023
Messages
2
Hi,
I just successfully added a Broadcom 9600-24i card connected to an ICY Dock MB699VP-B V3 Mobile Rack via (2) OCuLink cables on my Dell T7920 TrueNAS system running TrueNAS-SCALE-24.04-BETA.1. I am trying to build an all-flash NVMe U.2 TrueNAS system that is quiet, reasonably fast, and “economical”, made mostly of spare parts that I already have. Under this configuration, each drive gets a PCIe 3.0 x4 NVMe link, and the mobile rack accommodates drives up to 15mm in height. Unfortunately, I am limited to PCIe 3.0 speeds on my older Dell T7920.

I have up to (5) available PCIe 3.0 slots on this dual-CPU system for HBA cards, allowing a theoretical maximum of 20 U.2 drives (if using 9500-16i cards, four x4 drives per card) or 30 (if using 9600-24i cards, six x4 drives per card), again with each drive at PCIe 3.0 x4. These cards do not use a PCIe “switch” like the P411W-32P cards. Obviously, additional ICY Dock mobile racks would require a separate external 5.25" enclosure given the limited space inside the Dell T7920 workstation.

In my current setup, I have one ICY Dock mobile rack up front in the 5.25" bay and a second mobile rack in the lower rear of the case, just below the power supply. Mounting the mobile racks in the Dell case required some ingenuity, including 3M red double-sided foam tape and adhesive plastic shims. 3D-printing a mounting bracket may help in the back of the case.
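For reference, here is the back-of-the-envelope arithmetic behind those per-drive and drive-count numbers, as a small Python sketch. The per-lane throughput figure and the drive-side lane counts (16 lanes on the 9500-16i, 24 on the 9600-24i, x4 per U.2 bay) are my assumptions; adjust them if your cards or slot layout differ.

Code:
# Rough ceiling math for a PCIe 3.0, direct-attach (no switch/expander) build.
PCIE3_GBPS_PER_LANE = 0.985   # ~usable GB/s per PCIe 3.0 lane (8 GT/s, 128b/130b encoding)
LANES_PER_DRIVE = 4           # each U.2 bay gets a x4 link in this setup
SLOTS_FOR_HBAS = 5            # free PCIe 3.0 slots on the dual-CPU T7920

per_drive = PCIE3_GBPS_PER_LANE * LANES_PER_DRIVE
print(f"Per-drive ceiling: ~{per_drive:.1f} GB/s (PCIe 3.0 x4)")

# Assumed drive-side lane counts: 9500-16i = 16 lanes, 9600-24i = 24 lanes.
for card, device_lanes in {"9500-16i": 16, "9600-24i": 24}.items():
    drives_per_card = device_lanes // LANES_PER_DRIVE
    print(f"{card}: {drives_per_card} drives per card, "
          f"{drives_per_card * SLOTS_FOR_HBAS} drives across {SLOTS_FOR_HBAS} slots")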

At this point in time, we are somewhat limited in standalone NVMe backplane offerings, as the UBM (Universal Backplane Management) standard backed by Broadcom is only just starting to take hold.

An alternative is the Serial Cables enclosures (https://www.serialcables.com/), as seen in a recent StorageReview article and YouTube video.

https://www.storagereview.com/review/broadcom-megaraid-9670w-16i-raid-card-review

Supermicro also has some NVMe backplanes that can be added to their rack servers. One such example is the SYS-220U-TNR server that StorageReview did a short video on.

https://www.youtube.com/watch?v=2kjtBvOYj6s

Lastly, 45Drives is coming out with an all-flash Stornado F2 server. They are using the Broadcom 9600-16i card in the current Stornado F2 and offer the server in both Intel and AMD variants. The AMD CPUs can offer a high PCIe lane count with just one CPU.

https://www.45drives.com/products/nvme.php

It will also be interesting to see upcoming reviews of iXsystems' F100 all-flash TrueNAS server, which runs TrueNAS SCALE.

Compared to my other rack servers, this setup is very “quiet”, currently sitting in my office. With the ICY Dock fan speed set to low (the middle setting), the NVMe drives remain relatively cool or only slightly warm to the touch. The 9600-24i card, however, is running a bit hot and will likely require some additional cooling, possibly a small Noctua fan.

The Dell T7920 uses a 1400W power supply, which provides adequate power for the (2) additional mobile rack units. Additional units may require an extra external enclosure power supply.

Although the system is running well overall, I am unable to get the 9600-24i to post to BIOS like I am able to do with the 9500-16i cards. Fortunately, this is not necessary for the TrueNAS system. I tried changing the Dell UEFI and legacy BIOS settings with no luck. Is this typical of the Broadcom 9600 HBA series? Is there a card option I can turn on or off? The card also works well with Windows 11 Pro. As a result of not posting to BIOS, I cannot boot off any disks attached to the 9600-24i card. The card otherwise loads fine with the driver included in the Dragonfish beta.

Although the 9600-24i card adds an additional SlimSAS connector for a total of (3) x8 SFF-8654 ports, the peak speeds on this card (roughly 2,000 MB/s write and 2,000 MB/s read on a single drive in a non-redundant stripe) are very similar to the 9500-16i card, which has (2) SlimSAS ports. Given the similar speeds (on a PCIe 3.0 system, at least) and the large price differential (9600-24i $936.99 vs. 9500-16i $340 on Amazon), it's probably best to just use the cheaper 9500-16i in a build with multiple HBA cards. I am not sure how much speeds will improve on a PCIe 4.0 system.
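If you want to verify what link the HBA and the directly attached NVMe devices actually negotiated (and confirm that the Gen3 slot is the ceiling here), the standard Linux sysfs attributes are enough. A minimal sketch; the PCI address is a placeholder you would replace with your card's address from lspci, and note that drives hanging off the tri-mode HBA are presented as SCSI disks, so they will not show up under /sys/class/nvme.

Code:
# Report negotiated PCIe link speed/width for the HBA and any native NVMe controllers.
from pathlib import Path

HBA_ADDR = "0000:3b:00.0"   # placeholder: find yours with `lspci | grep -i broadcom`

def link(dev: Path) -> str:
    speed = (dev / "current_link_speed").read_text().strip()   # e.g. "8.0 GT/s PCIe"
    width = (dev / "current_link_width").read_text().strip()   # e.g. "8"
    return f"{speed}, x{width}"

hba = Path("/sys/bus/pci/devices") / HBA_ADDR
if hba.exists():
    print(f"HBA {HBA_ADDR}: {link(hba)}")

# Natively attached NVMe controllers (motherboard M.2/U.2 connections).
for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    print(f"{ctrl.name}: {link((ctrl / 'device').resolve())}")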

Since these Broadcom HBA cards do not offer any hardware RAID functionality, I don't think there is a separate “IT firmware” version of these cards; they are already HBA-only cards. The BIOS is current on both cards.

I also have the P411W-32P card (whose last firmware update, from 6/6/2021, is somewhat dated), which has already been documented to not work with the ICY Dock mobile rack. I had the same experience. As mentioned in another post, do not flash this card with the custom firmware provided by ICY Dock, as another member of this forum bricked his card that way.

Lastly, I will also be testing the 9600-16i card in a week or two pending delivery.

By the way, be careful to buy only authentic Broadcom versions of these cards, as there are a lot of cheap Chinese counterfeit “Broadcom” cards on eBay which look almost exactly like the authentic cards but can have firmware and other hardware limitations. The Art of Server YouTube channel has several videos on this subject.

So far, TrueNAS Dragonfish Beta 1 has been super stable, with no detectable bugs.

“TrueNAS SCALE has inherited the storage functionality and automated testing from CORE. SCALE has matured rapidly and offers a more robust apps environment based on Linux Containers & KVM. TrueNAS SCALE is generally recommended for new users that need embedded apps, and will gradually become the recommended version for all new TrueNAS users.” As mentioned above, the iXsystems all-flash F100 server will run TrueNAS SCALE.

https://www.truenas.com/blog/truenas-scale-dragonfish/

iXsystems and the Debian Linux development team did a great job with this release.

I will update this post as I continue to test and use this system.

Thanks,

Rich

My test system:

Dell T7920, firmware 2.38.0 (current)


Dual Intel Xeon Gold 6258R CPUs @ 2.70GHz (the 2nd CPU adds 2 additional PCIe 3.0 x16 slots).
512 GB RAM - NEMIX RAM 512GB (8X64GB) DDR4-3200 PC4-25600 ECC RDIMM Registered Server Memory Upgrade for Dell PowerEdge T550 Tower
(1) Mellanox MCX613106A-VDAT 200GbE Card, plugged into PCI slot #1, connected at 100GbE to Arista Switch
(2) Broadcom 9500-16i, slots #6 and #7
(1) Broadcom 9600-24i, slot #2
(4) DiliVing SlimSAS 8X to 2*OCuLink 4X, SFF-8654 74-pin to 2*SFF-8611 36-pin cable, 80cm (Broadcom MPN 05-60001-00)
(2) ICY DOCK 4 x 2.5 NVMe U.2/U.3 SSD PCIe 4.0 Mobile Rack for 5.25" Bay with OCuLink | ToughArmor MB699VP-B V3
(2) U.2 NVME SSD Hard Drive Expansion Interface Backplane Kit Compatible with Dell Precision 7920 T7920 Tower Workstation 076W3N
(1) M.2 SAS Flex Bay Module Compatible with Dell Precision T5820 T5820XL T7820 T7820XL T7920 T7920XL 066XHV 66XHV w/Tray, Without SSD, only for M Key 2280 M.2 NVMe (PCIe Gen3 x4) SSD
(1) SAMSUNG 980 PRO SSD 2TB PCIe NVMe Gen 4 Gaming M.2 Internal Solid State Drive Memory Card MZ-V8P2T0B/AM (TrueNAS is installed on this M.2 drive.)
(4) Intel D5-P4326 Series 15.36TB U.2 NVMe/PCIe 2.5" QLC SSD Solid State Drive
(2) SOLIDIGM D5-P5336 30.72 TB Solid State Drive - 2.5" Internal - U.2 (PCI Express NVMe 4.0 x4) - Server Device Supported - 0.56 DWPD - 3000 MB/s Maximum Read Transfer Rate
Video card: VisionTek Radeon RX550 4GB GDDR5 (plugged into the small PCIe Gen 3 x8 open-ended slot #5, as I only need minimal video capability to view the TrueNAS startup text and IP address)
As mentioned above, this system setup allows for a maximum of (5) Broadcom HBA cards.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
These cards do not use a PCIe “switch” like the P411W-32P cards
That's not a good thing, and no need for scare quotes.
At this point in time, we are somewhat limited in separate NVME backplane offerings since the UBM (Universal Backplane Management) standard by Broadcom is starting to take hold.
I don't know what you mean; U.2 backplanes without any tri-mode nonsense (to keep the terminology polite) are plentiful, if outrageously expensive.
An alternative enclosure includes the Serial Cables enclosures as seen on recent Storage review article and YouTube video, website https://www.serialcables.com/.
Not very scalable, though. And those prices...
Supermicro also has some NVME backplanes that can be added to their rack servers
All vendors do. The Dell R630 and R730 could take up to four U.2 disks in 2014.
Lastly, 45 Drives is coming out with an All-Flash Stornado F2 server. They are using the Broadcom 9600-16i card in their current F2 Stornado
Sounds like a terrible idea to me.
Although the system is running well overall, I am unable to get the 9600-24i to post to BIOS like I am able to do with the 9500-16i cards
That sentence does not parse. Do you mean that you cannot access the sucky Real Mode config application used to configure cards before UEFI was a thing? Because that would not be surprising at all, since - thankfully - Real Mode is entirely unnecessary in UEFI land. If you want to configure the card, you do so from the system firmware setup menu, which loads additional menus from the OpROMs it loads. No more "Press arcane key combination in the next 7.3 seconds to boot into an application that could technically run on a PC XT so you can configure your hardware".
Although the 9600-24i card adds an additional SlimSAS connection for a total of 3 ×8 SFF-8654 ports, the peak speeds on this card (roughly 2000MB/sec write and 2000 MB/sec read on single drive in non-redundant stripe mode) are very similar to the 9500-16i card, which has (2) SlimSAS ports. Given the similar speeds (on PCIe 3.0 system at least) and large price differential (9600-24i $936.99 vs 9500-16i $340 on Amazon), it’s probably best to just use the cheaper 9500-16i in your build with multiple HBA cards. I am not sure how much speeds will improve on a PCIe 4.0 system.
Neither card is a sensible choice - aside from the fact that the SAS 9300 is technically discontinued - because tri-mode hardware is unmitigated garbage.
Since Broadcom HBA cards do not offer any hardware RAID functionality, I don’t think there is a separate “IT firmware version” of these cards as they are already HBA only cards.
I'm not 100% on the specifics, but the 9600 cards moved to a single software stack, derived from the SAS3 MegaRAID stack. So IT and IR modes are gone, and everything operates using the new version of what used to be the MegaRAID driver and firmware.
I also have the P411W-32P card (somewhat dated last firmware update of 6/6/2021), which has already been documented to not work with the ICY Mobile Rack. I had the same experience. As mentioned in another post, do not flash this card with custom firmware provided by ICY Dock as another member of this forum bricked his card.
That story. Not fun, and emblematic of the disaster area that is PCIe cabling.

So, I mentioned several times above how junky and terrible tri-mode stuff is. To that I will add the descriptors "overpriced", "scam", "Broadcom and Microchip pulling the wool over procurement people's eyes" and "performance drain".
Why so much vitriol? Simple: the key is that these are not super-fast devices that bring compatibility with SAS and SATA while also giving all the benefits of NVMe. In fact, these overpriced pieces of junk present NVMe devices as SCSI devices to the system!

Why is this a bad thing? Because half the point of NVMe was to dramatically cut down latency compared to SCSI and ATA. Tri-mode drains performance by keeping all the same latency as legacy disks, and also by sticking to the not-very-parallel data paths that SCSI uses. OK, so that justifies the "overpriced" description, but surely "scam" is a bit harsh? Nope, it's a scam because "tri-mode" HBAs do not allow for efficient tri-mode bays (i.e. without wasting 3 of 4 lanes per bay) without a tri-mode expander. Who makes tri-mode expanders? Broadcom and Microchip. How expensive are they? Very. Oh, and these things drive U.3 drive bays, which don't work with U.2 disks!
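An easy way to see this for yourself on a Linux box is to list the whole disks with their transports. This is only a sketch using lsblk from util-linux; the exact transport string reported for HBA-attached devices can vary, but the giveaway is NVMe SSDs appearing as sdX rather than nvmeXnY.

Code:
# List whole disks and how the kernel sees them (native NVMe vs. SCSI via an HBA).
import subprocess

out = subprocess.run(
    ["lsblk", "-d", "-o", "NAME,TRAN,MODEL,SIZE"],   # -d: whole disks only, no partitions
    capture_output=True, text=True, check=True,
).stdout
print(out)

# Any drive you know to be an NVMe SSD that shows up here as sdX (instead of
# nvmeXnY with transport "nvme") is being presented through the SCSI stack.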

So, how would I do this better? Quite simple: the answer is U.2, which has been around for a decade. Separate, dedicated paths for PCIe and for SAS/SATA. Yes, this does mean more cables and more PCB layers, but it works. Want SATA? Great, the PCH or SoC will handle that natively. Want SAS? Buy an HBA and run the backplane's SAS/SATA cabling to it instead of to the onboard SATA ports. Want NVMe? Plug in the PCIe cabling. Bam, tri-mode drive bays without the major disadvantages. Since SATA/SAS cabling and a few PCB layers are still cheaper than the monstrous amounts of money that Broadcom and Microchip demand for their "tri-mode" lines, it's also hard to speak of added costs. There may be an environmental impact, since there are more materials used, but it's not meaningful when compared to needing the tri-mode ICs.

There is one relevant catch: tri-mode U.2 is not DIY friendly because nobody seems to make a suitable generic disk enclosure. There's a lot of SATA/SAS and NVMe stuff to choose from, but nothing tri-mode that I could find. But on the other hand, everyone's been pumping out rackmount servers that take these disks for a decade now. Supermicro will even sell you backplanes that will fit in older chassis models.

And the key to this matter ultimately is as follows: SAS has no place in the fast storage market. NVMe is just superior in every way, except perhaps for the maturity of some parts of the software stack, and even that is probably solved by now. SATA works fine for a handful of HDDs. SAS is an okay solution for massive numbers of disks to be driven by a single server. For everything else, there's NVMe. And that's a problem for Broadcom and Microchip, because SAS can be very lucrative (just think of how many servers are probably being sold with "tri-mode" support because it ticks the NVMe requirement the procurement person was given, even though the performance is not that of NVMe), so they came up with tri-mode to try and keep the gravy train going.
 

beagle

Explorer
Joined
Jun 15, 2020
Messages
91
I've been looking at that ICY Dock enclosure, but I'm still not sure what would be the best controller for a DIY solution using an X11SPM.

@Ericloewe What's your suggestion?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Controller is entirely the wrong word, since it is actively counter-productive to use tri-mode controllers.

If you do not have enough PCIe lanes natively, buy whatever PCIe switch fits your needs - there are many options to choose from, even though PLX/Broadcom dominated the high end for a long time. They're a lot simpler in the sense that they don't require a special driver (every OS has supported PCIe switches for decades).

Things to look out for:
  • The connectors used - plain SFF-8643 is probably the easiest option, but it's a Zoo out there and boy do a lot of places stink.
  • Older switches might not be super happy with things that SSDs care about but little else ever cared about - especially hot-plugging (a quick way to check what your slots advertise is sketched after this list).
  • Power - these will require HBA levels of cooling air.
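On the hot-plugging point above: a rough way to see what your bridges and slots actually advertise is to scrape lspci. A sketch assuming pciutils is installed (run it as root so the capability blocks are readable); advertised hot-plug support is necessary but not sufficient for well-behaved surprise removal.

Code:
# Scan lspci -vv output for PCIe Slot Capabilities and report hot-plug bits.
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True, check=True).stdout

current = None
for line in out.splitlines():
    if line and not line[0].isspace():
        current = line                      # device header, e.g. "3b:00.0 PCI bridge: ..."
    elif "SltCap:" in line:                 # Slot Capabilities of a bridge/root port
        hotplug = "HotPlug+" in line
        surprise = "Surprise+" in line
        print(f"{current}\n    hot-plug: {hotplug}, surprise removal: {surprise}")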
I've been looking at that ICY Dock enclosure
Be extra careful, experiences have been decidedly mixed. Part of that is due to the crap cable situation, part of that is due to Icy Dock.
 

beagle

Explorer
Joined
Jun 15, 2020
Messages
91
Controller is entirely the wrong word, since it is actively counter-productive to use tri-mode controllers.

If you do not have enough PCIe lanes natively, buy whatever PCIe switch fits your needs - there are many options to choose from, even though PLX/Broadcom dominated the high end for a long time. They're a lot simpler in the sense that they don't require a special driver (every OS has supported PCIe switches for decades).

Things to look out for:
  • The connectors used - plain SFF-8643 is probably the easiest option, but it's a Zoo out there and boy do a lot of places stink.
  • Older switches might not be super happy with things that SSDs care about but little else ever cared about - especially hot-plugging.
  • Power - these will require HBA levels of cooling air.
I'm looking for something like one of those 4x M.2 to PCIe x16 cards but for 4x U.2 instead (re-timer?).

The X11SPM-TPF has 2x PCIe x16, so I was planning to use one of the slots for 4x U.2 drives instead of 4x M.2.

Be extra careful, experiences have been decidedly mixed. Part of that is due to the crap cable situation, part of that is due to Icy Dock.
I've seen some poor reviews of some of the Icy Dock models, but not for this particular one. Would you mind expanding on the "crap cable situation"?

I saw a video of Wendell from L1 Techs explaining some of the challenges he faced using U.2 drives in DIY builds. Is that what you are referring to?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Would you mind expanding on the "crap cable situation"?
There are a million different connectors: at least two competing pinouts for SFF-8643, OCuLink, SlimSAS in 4-lane and 8-lane flavors, whatever the hell Broadcom is using on the 9600s, and I'm sure there's more.
I'm looking for something like one of those 4x M.2 to PCIe x16 cards but for 4x U.2 instead (re-timer?).
It depends on the specific situation. There are passive adapters, redrivers (fancy amplifiers), retimers (fancy digital amplifiers that reconstruct the signal before sending it out again), and switches (they switch). For PCIe 3.0, you can get away with a lot.
 

rxs0

Cadet
Joined
Feb 21, 2023
Messages
2
After a few days of running this configuration, I noticed that the 9600-24i card runs really hot, consuming 20W of power compared to the 9500-16i, which consumes 8.9W. This is evident in the size of its huge heatsink. The 9600 card will likely require a supplemental fan to keep it cool.
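To keep an eye on temperatures while experimenting with extra fans, a generic dump of whatever the kernel exposes through hwmon is handy. This is only a sketch; whether the 9600-24i itself publishes a sensor here depends on the driver, but the NVMe drives and motherboard sensors should appear.

Code:
# Print every temperature sensor the kernel exposes under /sys/class/hwmon.
from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    name_file = hwmon / "name"
    name = name_file.read_text().strip() if name_file.exists() else hwmon.name
    for temp in sorted(hwmon.glob("temp*_input")):
        label_file = hwmon / temp.name.replace("_input", "_label")
        label = label_file.read_text().strip() if label_file.exists() else temp.name
        try:
            millic = int(temp.read_text().strip())
        except OSError:
            continue                        # some sensors refuse to be read
        print(f"{name:12s} {label:24s} {millic / 1000:.1f} °C")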

Also, this card does not "post to BIOS" or display a BIOS splash screen like the 9500-16i card does. The same is true for the 9600-16i card. Although both the 9600-24i and 9600-16i cards work well once TrueNAS Dragonfish Beta 1 is loaded, I have been unable to boot off drives directly attached to these 9600 series cards. Although not a deal breaker, this is somewhat frustrating given the high price tag of these cards. I believe this was first mentioned in the post below.

Lastly, the price is somewhat steep given that I am getting similar speeds (roughly 2,000 MB/s write and 2,000 MB/s read with the ICY Dock MB699VP-B V3) compared to the 9500-16i card in a TrueNAS system running on a PCIe 3.0 Dell T7920 workstation. The advantages of this card are of course the additional internal SlimSAS connector (for a total of (3) SlimSAS 8i SFF-8654 connectors) and potentially faster speeds with PCIe 4.0. For these reasons, I am continuing my all-flash TrueNAS build based on the 9500-16i card, which is likely the sweet spot for this project.

In addition to the multiple Broadcom HBA cards, I have also connected all available Dell T7920 NVMe kits directly to the motherboard. This also lets me boot off the motherboard's direct MiniSAS connections without taking up a PCIe slot with an M.2 adapter card. There are a total of (4) MiniSAS connectors on one of the system boards: one is used to boot the TrueNAS SCALE operating system from an M.2 NVMe drive in a Dell M.2 SAS Flex Bay Module, and the other three drive U.2 disks through the U.2 NVMe SSD Expansion Backplane Kits. Again, I have used official Dell kits for these U.2 NVMe drives.

Surprisingly, I am getting very comparable speeds, i.e., just about 2,000 MB/s read and 2,000 MB/s write on the AJA benchmark out of the system board's direct MiniSAS connections using the Dell kits and cables. So, at least on my PCIe 3.0 Dell T7920 system, the “direct” motherboard connections are comparable to the Broadcom cards. These results remained similar with the Intel and Solidigm U.2 SSDs and the Samsung 980 Pro M.2 SSD.
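For anyone who wants to cross-check GUI benchmarks like AJA from the shell, a small fio wrapper along these lines works. This is only a sketch: the dataset path is a placeholder, fio availability on your SCALE install should be verified first, and since ZFS may ignore O_DIRECT, the read pass can be served from the ARC unless the test size is pushed well beyond RAM (or you test a raw device).

Code:
# Rough sequential write/read throughput check using fio's JSON output.
import json
import subprocess

TEST_FILE = "/mnt/tank/fio-test.bin"   # placeholder: pick a dataset on the pool under test
SIZE = "4G"                            # increase well beyond RAM to defeat ARC caching on reads

def seq_mib_s(rw: str) -> float:
    """Run one sequential fio pass ('write' or 'read') and return MiB/s."""
    cmd = [
        "fio", f"--name={rw}", f"--filename={TEST_FILE}", f"--rw={rw}",
        "--bs=1M", f"--size={SIZE}", "--ioengine=libaio", "--iodepth=32",
        "--direct=1", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    job = json.loads(out)["jobs"][0][rw]   # the "read" or "write" section of the job
    return job["bw"] / 1024                # fio reports bw in KiB/s

if __name__ == "__main__":
    for rw in ("write", "read"):           # write first so the read pass has data on disk
        print(f"sequential {rw}: ~{seq_mib_s(rw):.0f} MiB/s")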
Thanks,
Rich
 