I came here looking for an answer I cannot find, and I will also contribute to the original post.
The answer I am looking for is whether I can use an NVMe HBA to connect other PCI-E devices (besides NVMe drives) to a computer, such as a network card.
Reason 1: A nice recent find is a PCI-E 3.0 x16 PLX board for £200, though I cannot find it now; I think it was an import from Germany on eBay. An NVMe HBA at PCIe 4.0 x8 costs £150 (cheaper, same max bandwidth, and it can also connect NVMe/SAS/SATA).
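For what it's worth, here is the quick arithmetic behind the "same max bandwidth" claim; a back-of-envelope Python sketch using theoretical link rates (only line-encoding overhead is accounted for, so real-world numbers will be a bit lower):

```python
# Back-of-envelope link bandwidth (GB/s): theoretical ceiling after
# 128b/130b line encoding; real-world throughput is lower.
PER_LANE_GBPS = {
    "gen3": 8.0 * 128 / 130 / 8,   # ~0.985 GB/s per lane
    "gen4": 16.0 * 128 / 130 / 8,  # ~1.969 GB/s per lane
}

def uplink_gbps(gen: str, lanes: int) -> float:
    """Host-facing bandwidth of a card sitting in a given slot."""
    return PER_LANE_GBPS[gen] * lanes

print(f"PLX board, PCIe 3.0 x16: {uplink_gbps('gen3', 16):.1f} GB/s")
print(f"NVMe HBA,  PCIe 4.0 x8:  {uplink_gbps('gen4', 8):.1f} GB/s")
# Both land at roughly 15.8 GB/s, hence "same max bandwidth".
```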
Copilot gave me contradictory results, which isn't surprising considering I couldn't find a single piece of evidence of a test or a claim either way.
Back to the original post.
If I'm not mistaken, the ASUS Hyper M.2 requires bifurcation (typically exclusive to expensive motherboards, and even then often not available).
An NVMe HBA allows more NVMe devices than there are lanes, which suggests there is a switch on board and bifurcation is not required.
Why would you want more NVMe devices than there are lanes, or even more than (max lanes)/(4 lanes per device)?
It is unlikely you'll need to run all devices at max throughput concurrently.
I'm not sure exactly how small operations would affect bandwidth, but presumably small operations have much lower throughput and will fail to get anywhere near the bandwidth of a large file transfer, much less all 4 NVMe drives trying to saturate 16 lanes concurrently. Furthermore, we are talking about ZFS, aren't we? So the filesystem will further limit the ability to saturate the bandwidth.
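To put rough numbers on that, here is a hedged sketch assuming ~7 GB/s per Gen4 x4 drive and a Gen4 x8 uplink (~15.8 GB/s theoretical); your drives and HBA may well differ:

```python
# Worst-case sharing of a switched uplink between NVMe drives.
# Assumptions (not measured): each Gen4 x4 drive does ~7 GB/s sequential,
# and the HBA uplink is Gen4 x8 (~15.8 GB/s theoretical).
UPLINK_GBPS = 15.8
DRIVE_GBPS = 7.0

def per_drive_ceiling(n_drives: int) -> float:
    """Best-case per-drive throughput with every drive streaming at once."""
    return min(DRIVE_GBPS, UPLINK_GBPS / n_drives)

for n in (1, 2, 4, 8):
    print(f"{n} drive(s) streaming concurrently: ~{per_drive_ceiling(n):.1f} GB/s each")
# One or two drives still run flat out; the shared uplink only starts to
# bite with 3+ concurrent sequential streams, and small/random I/O rarely
# gets anywhere near these numbers in the first place.
```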
Won't there be latency? Always
Will it make a difference? I doubt it.
To my recollection, there are enterprise PCI-E switches, presumably extremely expensive to capitalise on the enterprise market, but if they weren't useful they probably wouldn't exist; if the switches are going to be expensive anyway, you'd simply buy more lanes instead.
As an example, the Broadcom PEX89144 (www.broadcom.com) is a switch with 144 PCI-E 5.0 lanes, aimed at data center and cloud providers building hyperscale compute systems for ML/AI and server/storage applications. Mind you, these will likely be set up very differently from plugging a card into a PCI-E slot.
Maybe they are useless and they just want to sell their product? Possibly.
So, as an overall summary, for connecting multiple NVMe drives to a single slot you have (rough numbers in the sketch after this list):
bifurcation, such as the ASUS Hyper M.2
switches, such as the £200 PCIe 3.0 x16 PLX board (sorry, can't find it now) or the £32 Raspberry Pi HAT that splits PCIe 2.0 x1 into four x1 slots (super slow in this configuration)
HBAs with switches inside
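And a rough comparison of those options, assuming four ~7 GB/s drives, a Gen4-capable Hyper M.2, and the worst case of everything streaming at once (all illustrative numbers, not measurements):

```python
# Rough per-drive ceilings for four ~7 GB/s drives on each option above.
# Lane rates (GB/s, theoretical): Gen2 ~0.5, Gen3 ~0.985, Gen4 ~1.969.
# These are assumptions for illustration, not measurements.
DRIVE_GBPS = 7.0
LANE = {"gen2": 0.5, "gen3": 0.985, "gen4": 1.969}

def bifurcated(gen: str, lanes_per_drive: int = 4) -> float:
    # Each drive gets its own dedicated lanes; nothing is shared.
    return min(DRIVE_GBPS, LANE[gen] * lanes_per_drive)

def switched(gen: str, uplink_lanes: int, n_drives: int = 4) -> float:
    # All drives share one uplink; worst case is everyone streaming at once.
    return min(DRIVE_GBPS, LANE[gen] * uplink_lanes / n_drives)

print(f"Hyper M.2 (Gen4 x16, bifurcated): ~{bifurcated('gen4'):.1f} GB/s per drive")
print(f"PLX board (Gen3 x16, switched):   ~{switched('gen3', 16):.1f} GB/s per drive")
print(f"Pi HAT (Gen2 x1, switched):       ~{switched('gen2', 1):.2f} GB/s per drive")
```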
If you have deep pockets, there are also RAID NVMe cards, since people are discussing bandwidth. And no, I don't believe mixing RAID and ZFS is a bad idea in every scenario, though I know some people will complain it is always bad. It could be that the bad scenarios simply have bad configurations.