This was indeed the question (and I think the answer is that each drive gets its own 6 Gb/s link, of which it may actually use 150-200 MB/s, and the HBA then repackages everything into the x PCIe lanes), but we're already seeing a sub-question with NVMe drives.
Yes. Can you please verify the calculation below?
1. Normal SATA III is capable of 6Gb/s, which means 600MB/s. HDDs can go up to 300MB/s (enterprise). So, for my use case, 16x300MB/s = 4800MB/s, or 4.8GB/s approx. So, as the bandwidth for the HBA (LSI 9400-16i) is 12Gb/s, that means one can connect up to 40 HDDs with almost zero impact on HDD speed. Is that correct?
2. Consider a normal SATA SSD, which is capable of roughly 500-600MB/s. If I try to install up to 20 SSDs:
20x500MB/s = 10000MB/s, or 10GB/s approx. I'm under the limit, so probably no impact on any of the SSD speeds. Or is my calculation wrong here?
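The arithmetic above can be sanity-checked with a quick script. Note the comparison point is an assumption on my part: the 9400-16i is a PCIe 3.1 x8 card on the host side, and PCIe 3.x delivers roughly 985 MB/s of usable bandwidth per lane, so the host link is the ceiling that matters, not the per-lane SAS rate:

```python
# Rough throughput sanity check for an HBA (a sketch; protocol
# overhead is ignored, so real-world numbers will be lower).

# Assumption: LSI 9400-16i has a PCIe 3.1 x8 host interface;
# PCIe 3.x carries ~985 MB/s of usable bandwidth per lane.
PCIE3_MBPS_PER_LANE = 985
hba_host_bw = 8 * PCIE3_MBPS_PER_LANE  # ~7880 MB/s to the host

# Case 1: 16 enterprise HDDs at ~300 MB/s sequential each
hdd_total = 16 * 300   # 4800 MB/s
# Case 2: 20 SATA SSDs at ~500 MB/s each
ssd_total = 20 * 500   # 10000 MB/s

print("host link:", hba_host_bw, "MB/s")
print("HDDs bottlenecked:", hdd_total > hba_host_bw)   # False
print("SSDs bottlenecked:", ssd_total > hba_host_bw)   # True
```

So under these assumptions the 16-HDD case fits comfortably, but 20 SATA SSDs going flat out would already exceed the host-side x8 link.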
You could, with one of the "Tri-Mode HBAs" that Broadcom wants to sell you for the usual hefty amount to keep their HBA business afloat. But they somewhat suck at that, and I think you cannot use the tri-mode for SAS/SATA and NVMe at the same time. Better to stay with a good old (and cheap) 9200/9300 for the spinners and direct PCIe lanes (or a PLX) for the NVMe.
Hmm. So I guess one mode at a time then? Interestingly, the 9400-16i datasheet says you can use them all together.
PLX chips are PCIe switches: they take X lanes on one end and serve Y lanes on the other, just like your Ethernet switch.
Here is an example, which could serve your eight U.2 drives from a single x16 slot:
https://nl.aliexpress.com/item/4000029811733.html
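To put numbers on the switch idea (my assumptions: the linked card has a PCIe 3.0 x16 uplink and eight x4 U.2 downstream ports, with ~985 MB/s usable per lane):

```python
# PCIe-switch oversubscription sketch (all figures are assumptions:
# x16 PCIe 3.0 uplink, eight x4 U.2 downstream ports, ~985 MB/s/lane).
LANE_MBPS = 985
uplink = 16 * LANE_MBPS           # ~15760 MB/s, shared by all drives
downstream = 8 * 4 * LANE_MBPS    # ~31520 MB/s if every drive bursts

ratio = downstream / uplink       # 2:1 oversubscription
per_drive_when_busy = uplink / 8  # ~1970 MB/s each under full load

print(f"oversubscription {ratio:.0f}:1, "
      f"~{per_drive_when_busy:.0f} MB/s per drive when all are busy")
```

A 2:1 ratio is usually fine for storage, since it's rare for all eight drives to saturate their links at once.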
The product link you attached: I looked for the same thing when I was figuring out the U.2 drives. It was from HighPoint. I have used their RAID cards in the past, quite reliable. But it also uses SFF-8643 connectors, so how is this different from a normal LSI HBA card, which also offers 8643 SAS ports?
Fair enough. What's the board you're looking at?
From what you describe, you need (at least) 8*4 = 32 lanes for the "fast" NVMe pool, plus enough SATA ports or an x8 slot for a 9200/9300 HBA for the "slow" spinning pool. That's typical Xeon Scalable/EPYC territory.
For instance, an X11SPM-TPF provides:
OMG. I'm planning to buy the same board you had in mind.
In short, here's what I plan:
18x16TB SATA HDD
8xU.2
Plus, I want to reserve a slot for a future upgrade of the network to 25GbE or 40GbE. I can do that even now, once I figure out the rest of the hardware.
How do I figure out the bandwidth and the lanes?
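One way to tally it up is to budget lanes per consumer and compare against what the CPU provides. The figures below are assumptions, not board specifics: PCIe 3.0 at ~985 MB/s per lane, an x8 HBA for the spinners, x4 per U.2 drive, and an x8 slot reserved for the future 25/40GbE NIC; adjust to the actual slot layout of whatever board you pick:

```python
# Lane/bandwidth budget sketch for the planned build (all figures
# are assumptions: PCIe 3.0 at ~985 MB/s/lane, x8 HBA for the HDDs,
# x4 per U.2 drive, x8 reserved for a future 25/40GbE NIC).
LANE_MBPS = 985

budget = {
    "HBA for 18x 16TB SATA HDD": 8,
    "8x U.2 NVMe (x4 each)":     8 * 4,
    "future 25/40GbE NIC":       8,
}

total_lanes = sum(budget.values())
for name, lanes in budget.items():
    print(f"{name}: x{lanes} (~{lanes * LANE_MBPS} MB/s)")
print("total lanes needed:", total_lanes)  # 48
```

That lands at 48 lanes, which is exactly what a single Xeon Scalable socket exposes, so there is little headroom left; check how many of those lanes the board actually routes to slots versus onboard devices.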