High count M.2 NVMe HBA cards?

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Wasn't the point of NVMe to finally get rid of YAAL (yet another abstraction layer)? I agree this is really bad architecture.
That was half of it. The other half was to mooch off of an existing physical interface to avoid the cost of developing SATA 12 Gb/s. Funny how we went full circle from "attach the disk's controller directly to the system bus" to "crap, we need an interface between them" and now back to "attach the disk's controller directly to the PCIe bus".
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I came here looking for an answer I could not find, and I will also contribute to the original post.
[..]
Copilot gave me contradictory results, which isn't surprising considering I couldn't find a single bit of evidence of a test or claim.
There is a lot of "AI" bullshit here. Please stop polluting the forum with this. It is just a glorified version of copy-paste, and for non-trivial topics such as this, the results are really poor.
If you have deep pockets, there are RAID NVMe cards - considering people are discussing bandwidth - and no, I don't believe mixing RAID and ZFS will be a bad idea in every scenario, but I know some people will complain it is always bad. It could be that the bad scenarios have bad configurations.
Well, feel free to believe whatever you want. As the saying goes, "believing means not knowing".
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I don't believe mixing RAID and ZFS will be a bad idea in every scenario
You should deepen your understanding of ZFS, or at least explain why you think so, before making such... bold statements.
 

beagle

Explorer
Joined
Jun 15, 2020
Messages
91
(...)
Of course, you can just get a board that has the menu enabled. Dell, for instance, updated the Gen 13 systems to expose this option to the user for all slots halfway through the lifecycle of the systems. (...)
Unfortunately not on the tower models. At least not on the T630 :frown:
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Really? How weird. I guess they never formally supported U.2 disks in those?
 

beagle

Explorer
Joined
Jun 15, 2020
Messages
91
Really? How weird. I guess they never formally supported U.2 disks in those?
They supported it with a PCIe SSD kit that was only available when ordered from the factory:

Hard drives

The PowerEdge T630 system supports:
  • Up to eight 3.5-inch, internal, hot-swappable SAS, SATA, SSD, or Nearline SAS hard drives and four Dell PowerEdge Express Flash devices (PCIe SSDs). Hard-drive slots 0 through 7 and 0 through 3.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
There are NVMe HBAs, both RAID and non-RAID. Some don't offer RAID for NVMe, though I believe some do, or at least some documentation seems to indicate so, as do some videos showcasing hardware RAID for NVMe devices.
Technically, you are correct. Most tri-mode cards CAN operate in a passthrough mode, but it's an obfuscation at best and an outright performance bottleneck at worst. You are dramatically hobbling the throughput and latency of your NVMe drives so that you can have the option of using SAS/SATA drives with the same card. Speaking from experience, stay away from the tri-mode adapters. If you must have both NVMe and SAS/SATA in the same server, use a separate SAS/SATA HBA and either use a PCIe switch if you don't have enough lanes for the NVMe drives, or connect the drives directly if you do. It simplifies your setup, removes an unneeded layer of obfuscation, and puts you on a more proven, reliable set of technology.
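To put rough numbers on that bottleneck, here's a back-of-the-envelope sketch. The figures are illustrative assumptions (eight Gen4 x4 drives, roughly 2 GB/s of usable bandwidth per Gen4 lane), not measurements of any particular card:

```python
# Back-of-the-envelope PCIe bandwidth math (illustrative assumptions, not benchmarks).
# Assumes ~2 GB/s of usable throughput per PCIe Gen4 lane after encoding/protocol overhead.

GB_S_PER_GEN4_LANE = 2.0   # approximate usable GB/s per PCIe Gen4 lane (assumption)
NUM_DRIVES = 8             # hypothetical drive count
LANES_PER_DRIVE = 4        # NVMe drives are typically x4 devices

drives_total = NUM_DRIVES * LANES_PER_DRIVE * GB_S_PER_GEN4_LANE  # what the drives could push
hba_x8 = 8 * GB_S_PER_GEN4_LANE    # uplink of a card sitting in an x8 slot
hba_x16 = 16 * GB_S_PER_GEN4_LANE  # uplink of a card sitting in an x16 slot

print(f"Aggregate drive bandwidth: ~{drives_total:.0f} GB/s")
print(f"x8 uplink:  ~{hba_x8:.0f} GB/s ({drives_total / hba_x8:.0f}x oversubscribed)")
print(f"x16 uplink: ~{hba_x16:.0f} GB/s ({drives_total / hba_x16:.0f}x oversubscribed)")
```

Note that a PCIe switch shares its uplink the same way, so it doesn't buy you bandwidth either; the difference is that it keeps the drives speaking NVMe end to end instead of sitting behind the tri-mode card's firmware and driver stack.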
 