TrueNAS Core & Broadcom

Pogdy

Cadet
Joined
Jan 28, 2023
Messages
9
Does the FreeBSD version TrueNAS Core is based on have support for the Broadcom BCM57416? It would be amazing if it does, since that would free up a PCIe slot for an HBA for a JBOD.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Regardless of whether it has (airquotes) "support" (theoretically supported by the bnxt driver), the Broadcoms have a very dodgy history and are strongly advised against. Additionally, 10GBase-T is not a recommended technology.
 

Pogdy

Cadet
Joined
Jan 28, 2023
Messages
9
SFP+ it is then, I'll have to lose 2 out of the 24 bays I think :(
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
SFP+ it is then, I'll have to lose 2 out of the 24 bays I think :(

Could you please explain that? There's no good reason that using an additional PCIe slot for a network card should deprive you of bays, at least not in general. If you are limited by HBA ports -- such as a 16-port card plus six mainboard SATA ports or something like that, the most obvious way I could think of to come up with 22 -- you should be aware that an SAS expander may be an option. These are add-in "cards" that don't actually need a PCIe slot, just power, and can be powered in a variety of ways. They act sort of like an ethernet switch, but for SAS.
 

Pogdy

Cadet
Joined
Jan 28, 2023
Messages
9
Whoops, meant to write 4, taking me to 20/24. I basically have these 2 options for my 24-bay 2.5" NVMe server:

- https://www.supermicro.com/en/Aplus/system/2U/2114/AS-2114S-WN24RT.cfm
- https://www.gigabyte.com/uk/Enterprise/Rack-Server/R272-Z32-rev-A00

It's my understanding that if I utilized all 24 bays I would be left with 1 usable slot, which would be taken up by a 10GbE SFP+ NIC if I am unable to use the onboard networking. The purpose of this server is to house our live projects for Production and act as the server head for a JBOD that houses its backup and archive projects, so I need 1 HBA card as well. I have to lose 4 drives to be able to use a NIC and an HBA at the same time, right?
 

Pogdy

Cadet
Joined
Jan 28, 2023
Messages
9
Btw, a bit unrelated, but do you think all-NVMe storage is a good idea for a 10GbE-capable LAN? I don't think a network upgrade is on the table anytime soon, so I'm thinking of maybe going down to SAS SSDs instead.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Btw, a bit unrelated, but do you think all-NVMe storage is a good idea for a 10GbE-capable LAN? I don't think a network upgrade is on the table anytime soon, so I'm thinking of maybe going down to SAS SSDs instead.

I'm not familiar with the servers you mentioned above, at least not enough to have any commentary that I'd be comfortable making.

However, I am quite comfortable poking at this question. It comes down to this: if you have an array of 24 drives, how fast can that go?

For SATA HDD, if we were to consider a lower-end figure of 50 IOPS doing 4KB each, times 24 drives, that is 4.8 MBytes/sec, or about 40 Mbits/sec.

For consumer SATA SSD, let's pretend that the numbers normally presented (100k IOPS? really?) are bull. I can definitely see these sustaining 5000 IOPS, again doing 4KB each, and 24 drives winds you up at about 4000 Mbits/sec.

So first let's acknowledge that these ought to be pessimistic numbers. Highly pessimistic numbers. It should almost always go faster. Often MUCH faster. Especially if you had something slightly better than consumer SATA SSD. But pay attention to claimed and benchmarked numbers. If you were willing to fill an array with 24 modern consumer SATA SSD's, I have trouble seeing how you wouldn't be able to flatline the thing at 10Gbps in normal operations.
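
To make that back-of-envelope math explicit, here's a quick Python sketch (the per-drive IOPS and 4KB block size are just the pessimistic assumptions above, not measurements of any particular drive):

def array_throughput_mbit(iops_per_drive, io_size_kb, drives):
    # Aggregate throughput across the whole array, in Mbits/sec
    bytes_per_sec = iops_per_drive * io_size_kb * 1024 * drives
    return bytes_per_sec * 8 / 1_000_000

print(array_throughput_mbit(50, 4, 24))    # SATA HDD: ~39 Mbit/s, i.e. "about 40"
print(array_throughput_mbit(5000, 4, 24))  # consumer SATA SSD: ~3932 Mbit/s, i.e. "about 4000"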

That's my perspective. Take it for what little it is worth.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
It should be able to exceed 10Gbps; if you only have a 10Gbps interface, then that means it should flatline at 10Gbps.
 

Pogdy

Cadet
Joined
Jan 28, 2023
Messages
9
Ah right, that was my worry basically. So my options are either to go with that all-flash config and keep the potential for network upgrades in the future, or to 'downgrade' to a server with 24 hot-swap SAS bays instead. That would allow me to use an HBA and a NIC without losing any bays as well, I think. Something like this:

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
So yes, that appears to include a 12Gbps SAS expander, so that's probably a viable option. And that means support for 12Gbps SAS devices, which in turn means that when you put a 12Gbps SAS HBA in there, you get 48Gbps from the HBA to the SAS backplane (because there are four lanes in an SFF-8643 cable).
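
As a quick sanity check of that arithmetic (the lane count and per-lane rate are as described above, not a measurement of any particular chassis):

# One SFF-8643 cable carries four SAS lanes; SAS-3 runs each lane at 12 Gbps.
lanes = 4
gbps_per_lane = 12
print(lanes * gbps_per_lane)  # 48 Gbps from the HBA to the backplane -- far more than a 10Gbps NIC can consume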
 

Pogdy

Cadet
Joined
Jan 28, 2023
Messages
9
Yeah, although all-flash would have been amazing, I don't think it makes sense for us due to our 10GbE LAN and the need to have at least 1 JBOD running off this server. I think I'll go down the SAS SSD route instead.

Thanks for all the help!
 