Broadcom 9600W-16e not supported (yet)?

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
How are you handling sharing with the client? SMB?
 

zormik

Dabbler
Joined
Mar 6, 2023
Messages
20
SMB, but I doubt it's related to that: when I copy the file a second time and it's in the ZFS cache (I have 128 GB of memory), it maxes out the NIC bandwidth of the client (5 Gbps).
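If it helps, the way I've been confirming that the second copy really comes from the ARC is to watch the hit counters while repeating the transfer. Rough sketch only (SCALE/ZFS-on-Linux; the kstat path and field names below are the standard OpenZFS ones, adjust if yours differ):

```python
# Minimal sketch: read OpenZFS ARC hit/miss counters on Linux (TrueNAS SCALE).
# Run it before and after the second copy; a big jump in "hits" with almost no
# new "misses" means the file is being served from ARC, not from the pool.
ARCSTATS = "/proc/spl/kstat/zfs/arcstats"  # standard ZFS-on-Linux kstat path

def read_arcstats(path=ARCSTATS):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:      # skip the two header lines
            name, _type, value = line.split()
            stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    hits, misses = s["hits"], s["misses"]
    total = max(hits + misses, 1)
    print(f"ARC size: {s['size'] / 2**30:.1f} GiB")
    print(f"hits={hits}  misses={misses}  hit ratio={100 * hits / total:.1f}%")
```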
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Actually, it sounds like it may be a CPU bottleneck in Samba (possibly Linux only? Did you try Core on this setup?), since a second transfer "doubles" the throughput.

If it were storage, throwing additional I/O at the pool would likely reduce throughput rather than improve it.
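A quick way to rule that in or out: while the transfer is running, check whether a single smbd process is pinned at roughly one full core. A rough sketch along those lines (the process name and the sampling interval are just assumptions; any top-like tool shows the same thing):

```python
# Rough check for a per-connection CPU bottleneck in Samba: sample /proc and
# report CPU% per smbd process. If one smbd sits near 100% (one full core)
# while the transfer runs at ~100 MB/s, the bottleneck is CPU, not the pool.
import os, time

def smbd_cpu_ticks():
    ticks = {}
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() != "smbd":      # process name is an assumption
                    continue
            with open(f"/proc/{pid}/stat") as f:
                fields = f.read().split()
            ticks[pid] = int(fields[13]) + int(fields[14])  # utime + stime
        except FileNotFoundError:
            continue                                # process exited mid-scan
    return ticks

if __name__ == "__main__":
    interval = 5.0                                  # seconds, arbitrary
    before, t0 = smbd_cpu_ticks(), time.time()
    time.sleep(interval)
    after, dt = smbd_cpu_ticks(), time.time() - t0
    hz = os.sysconf("SC_CLK_TCK")                   # clock ticks per second
    for pid, t in after.items():
        pct = 100 * (t - before.get(pid, t)) / (hz * dt)
        print(f"smbd pid {pid}: {pct:.0f}% of one core")
```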
 

zormik

Dabbler
Joined
Mar 6, 2023
Messages
20
I doubt it, since this system is a brand-new, overdimensioned server with an Intel® Xeon® D-2733NT processor.
What I meant is that the first time I copy a file, it goes up to about 100 MB/s. Once it's finished and I try it again, it maxes out the NIC bandwidth. Therefore I presume it's I/O-related at the storage level, since the only difference on the second copy of the same file is that TrueNAS SCALE is reading it from memory instead of from the RAIDZ1 pool.
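To take Samba and the network out of the picture entirely, I could also measure the read speed on the server itself. A crude sketch (the path is just a placeholder; it needs to be a file bigger than RAM, or at least one that hasn't been read recently, otherwise it just measures the ARC again):

```python
# Crude local read benchmark: read a large file from the pool in 1 MiB chunks
# and report throughput. If this is also ~100 MB/s, the problem is in the pool
# or its layout; if it is much faster, look at Samba or the network instead.
import sys, time

def read_throughput(path, chunk=1 << 20):
    total, start = 0, time.time()
    with open(path, "rb", buffering=0) as f:        # unbuffered at Python level
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    return total / (time.time() - start) / 1e6      # MB/s

if __name__ == "__main__":
    # Placeholder path; point it at a big file on the RAIDZ1 dataset.
    path = sys.argv[1] if len(sys.argv) > 1 else "/mnt/tank/bigfile.bin"
    print(f"{read_throughput(path):.0f} MB/s")
```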
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Once it's finished and I try it again, it maxes out the NIC bandwidth.
Ok, yeah, that's very different from what I'd understood. You're not going to get fantastic performance out of a RAIDZ vdev of HDDs, but 140 MB/s is pretty puny.
How full is the pool?
 

zormik

Dabbler
Joined
Mar 6, 2023
Messages
20
Ok, yeah, that's very different from what I'd understood. You're not going to get fantastic performance out of a RAIDZ vdev of HDDs, but 140 MB/s is pretty puny.
How full is the pool?
Not even halfway, and one disk is supposed to reach 280 MB/s.
On top of that, my old server (which, performance-wise, doesn't even come close to my new one) easily reached 400 MB/s with TrueNAS Core.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
and one disk is supposed to reach 280 MB/s.
Well, that may be a bit optimistic, but the performance of the whole pool is underwhelming. Might something else be eating away at IOPS and making the whole thing suck?
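For reference, the usual back-of-the-envelope for large sequential reads from a single RAIDZ1 vdev is roughly (N - 1) data disks' worth of streaming throughput, which makes ~100 MB/s look even worse. A quick sketch (the 6-wide layout and ~200 MB/s sustained per HDD are my assumptions; plug in your actual numbers):

```python
# Back-of-the-envelope sequential-read estimate for a single RAIDZ1 vdev:
# roughly (N - 1) data disks' worth of streaming throughput, ignoring
# metadata, fragmentation and recordsize effects (so it's an upper bound).
def raidz1_seq_read_estimate(disks, per_disk_mb_s):
    return (disks - 1) * per_disk_mb_s

if __name__ == "__main__":
    # Assumed layout: 6-wide RAIDZ1, ~200 MB/s sustained per HDD
    # (the 280 MB/s datasheet number is an outer-track best case).
    print(raidz1_seq_read_estimate(6, 200), "MB/s ceiling")   # -> 1000 MB/s
```

Even with pessimistic per-disk numbers, that estimate lands several times higher than what you're seeing.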
 

zormik

Dabbler
Joined
Mar 6, 2023
Messages
20
Well, that may be a bit optimistic, but the performance of the whole pool is underwhelming. Might something else be eating away at IOPS and making the whole thing suck?
It can't be; it's not a virtualized system like my previous one was. It's a dedicated NAS with nothing else running on it. On top of that, the speed is consistent.
This is the system:
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Even if it is nominally supported, it is a terrible idea. Everything has changed in those controllers: IC, firmware and driver, so expect the whole stack to be very immature - the same happened to the 9211 and 9300, despite each being a smaller change, and even the 9400 and 9500 are subject to growing pains. You gain zero real benefits (unless you're using NVMe SSDs, in which case you should see better performance than on the 9500, but still worse performance than just connecting the SSDs directly to the host) and get the chance to spend more money.
 

geekgeek

Cadet
Joined
Nov 21, 2023
Messages
3
Even if it is nominally supported, it is a terrible idea. Everything has changed in those controllers: IC, firmware and driver, so expect the whole stack to be very immature - the same happened to the 9211 and 9300, despite each being a smaller change, and even the 9400 and 9500 are subject to growing pains. You gain zero real benefits (unless you're using NVMe SSDs, in which case you should see better performance than on the 9500, but still worse performance than just connecting the SSDs directly to the host) and get the chance to spend more money.
Many thanks for your advice. I agree that NVMe SSDs should be connected directly to the motherboard. For my use case, I have been using a 9405W-16i (https://docs.broadcom.com/doc/12380766) to connect SATA SSDs via a 24-bay Icy Dock MB9241P-B (https://global.icydock.com/product_289.html).

As the 9405W-16i (PCIe 3.1 x16) has only 16 ports, whilst the 9600-24i (PCIe 4.0 x8) can connect to 24 SATA SSDs with a single card, I am exploring whether this is a better way to do it (fewer cards and, more importantly, fewer PCIe lanes). Internet searching brought me to this post.
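For what it's worth, the rough numbers I'm weighing look like this (per-lane figures are the usual post-encoding PCIe rates, and I'm assuming ~550 MB/s per SATA SSD):

```python
# Rough bandwidth comparison for the two options (numbers are approximate,
# usable PCIe throughput per lane after encoding/protocol overhead).
PCIE_GBS_PER_LANE = {3: 0.985, 4: 1.969}       # GB/s per lane, approx.

def slot_bandwidth(gen, lanes):
    return PCIE_GBS_PER_LANE[gen] * lanes      # GB/s

if __name__ == "__main__":
    gen3_x16 = slot_bandwidth(3, 16)           # 9405W-16i slot
    gen4_x8 = slot_bandwidth(4, 8)             # 9600-24i slot
    sata_aggregate = 24 * 0.55                 # 24 SATA SSDs at ~550 MB/s each
    print(f"PCIe 3.0 x16: ~{gen3_x16:.1f} GB/s")
    print(f"PCIe 4.0 x8:  ~{gen4_x8:.1f} GB/s")
    print(f"24x SATA SSD: ~{sata_aggregate:.1f} GB/s aggregate")
```

So either slot configuration has headroom for the drives; the appeal of the 9600-24i is really just spending eight lanes instead of sixteen.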
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I suspect a SAS expander plus a 9300 would be cheaper and just as good.
 

tAxO33

Cadet
Joined
Jan 28, 2024
Messages
2
Has anyone tried whether the 9600 series is supported under SCALE (Cobia)? Thanks
The mpi3mr driver is not available under Cobia, but it appears to be available in SCALE 24.04 Dragonfish (and 24.10 Electric Eel). I have 2 9600-24i cards I want to use...
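For anyone who wants to check their own install, the quickest test I know of is whether the kernel ships the module at all. A small sketch (mpi3mr is the upstream module name; no guarantees from me about which SCALE release actually carries it):

```python
# Quick check for the Broadcom 9600-series driver (mpi3mr) on a SCALE box:
# ask modinfo whether the kernel ships the module, and lsmod whether it is
# actually loaded for the installed cards.
import subprocess

def module_available(name):
    return subprocess.run(["modinfo", name],
                          capture_output=True).returncode == 0

def module_loaded(name):
    lsmod = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
    return any(line.split()[0] == name
               for line in lsmod.splitlines()[1:] if line.strip())

if __name__ == "__main__":
    print("mpi3mr shipped:", module_available("mpi3mr"))
    print("mpi3mr loaded: ", module_loaded("mpi3mr"))
```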
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I have 2 9600-24i cards I want to use...
But why? The whole software stack is new (meaning immature) and there's very little to be gained from SAS4...
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Hey, your money to waste, not mine. My point is that these cards really need to be considered a total reset - more so than the SAS2 -> SAS3 transition, and that one was fairly painful despite a few advantages (SAS2.5 controllers supported the new RAID stack and the mrsas driver, the IT stack was a straightforward evolution, there was a lot of interest around the new controllers, etc.) - and thus treated similarly to whatever Microchip is selling these days.

Also, not a 9600 invention (the tri-mode SAS3 controllers are just as bad), but the NVMe support is a bad prank pulled on project teams by taking advantage of less-than-fully-invested-in-this-space procurement people (I don't want to blame these people; this "feature" is actively hostile). Half the point of NVMe was to get rid of anything between the PCIe bus and the SSD controller, and the other half was to cut latency by removing the crufty layers of software needed to support the SCSI command set and SAS HBAs. Presenting NVMe SSDs as SCSI devices is a farce, and Broadcom wears the mummer's motley well.
 

geekgeek

Cadet
Joined
Nov 21, 2023
Messages
3
The mpi3mr driver is not available under Cobia, but it appears to be available in SCALE 24.04 Dragonfish (and 24.10 Electric Eel). I have 2 9600-24i cards I want to use...
Thank you for sharing. Please kindly share testing results in future, if possible.
 

bbott

Cadet
Joined
Aug 21, 2016
Messages
3
I need an additional HBA controller; I currently have an LSI 9400, which I am very happy with. I have a good offer for an LSI eHBA 9600-24i controller. So I wanted to ask about eHBA 9600 support in TrueNAS Core.
Technically, the 6-pin power connector of the 9300 is particularly annoying, as is the higher power consumption, and the PCB not being full height isn't so nice either.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So I wanted to ask about eHBA 9600 support in TrueNAS Core.
Non-existent for all practical purposes.
I need an additional HBA controller
It sounds like you're going about it the wrong way. How many disks do you need to connect and what specific HBA do you have now (i.e. how many ports)?
 