genuinely asked for an explanation to a recommendation
I tried to explain it was just a heads-up about some things to check out for yourself, not a recommendation of anything beyond "go see if this might suit your needs". Nothing to recommend.
But here's your explanation:
24 is more than 16.
Nothing more, nothing less. Just like I stated before: some people might want more ports.
Lack of PCIe slots is not the main problem, short in available PCI-e lands in APU is.
I think you meant "lanes" and auto correct "fixes" it?
I think you meant the southbridge, and that would be 4 lanes... which are often also used for one or two PCIe or M.2 slots.
However, either way, there is not enough lands for your 2 HBAs and NIC to run at 8x each.
8 SATA drives aren't going to saturate a PCIe 3.0 x4 link... Just because the card supports x8 doesn't mean it really needs it.
I'm assuming he is using SATA though, considering his build seems to be "as cheap as possible" (which isn't a bad thing)...
A single 10GbE port needs slightly more than one PCIe 3.0 lane (about 2.5 PCIe 2.0 lanes); a ConnectX-3, for example.
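To put some rough numbers on that (a back-of-the-envelope sketch; the per-lane and per-disk figures are ballpark assumptions, not measurements):

```python
# Ballpark throughput figures (assumed round numbers, not benchmarks).
PCIE3_LANE_MBPS = 985    # usable MB/s per PCIe 3.0 lane (128b/130b encoding)
PCIE2_LANE_MBPS = 500    # usable MB/s per PCIe 2.0 lane (8b/10b encoding)
SATA_HDD_MBPS   = 250    # generous sequential speed for a spinning disk
TEN_GBE_MBPS    = 1250   # 10 Gbit/s line rate expressed in MB/s

# An HBA in a PCIe 3.0 x4 slot vs. 8 SATA HDDs going flat out:
print(f"x4 slot: {4 * PCIE3_LANE_MBPS} MB/s vs 8 disks: {8 * SATA_HDD_MBPS} MB/s")

# A single 10GbE port expressed in PCIe lanes:
print(f"10GbE ~= {TEN_GBE_MBPS / PCIE3_LANE_MBPS:.2f} PCIe 3.0 lanes")
print(f"10GbE ~= {TEN_GBE_MBPS / PCIE2_LANE_MBPS:.2f} PCIe 2.0 lanes")
```

The x4 slot has roughly twice the bandwidth the 8 spinning disks can deliver, and the 10GbE port works out to about 1.3 lanes of PCIe 3.0.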
If you want to have a few decent performing device controllers and some NVME or PCIe SSDs
He doesn't seem to want to use SATA port multipliers, so a controller isn't going to push more than the disks attached to it, which is 8.
He also doesn't list any NVMe or PCIe SSDs, so we can assume none.
the server grade CPU and mobo combo is no longer unnecessary
Having lots of NVMe or PCIe devices doesn't seem to be the goal of this build.
Besides, "server grade" has nothing to do with the number of PCIe lanes.
Server grade doesn't mean "higher-end platform"; there are plenty of server-grade motherboards available for lower-end platforms.
And the stability and durability/reliability of a system that's designed to run 24/7 for years after years is something you can't expect from a consumer grade system
AFAIK, most devices (regardless of grade) show a failure peak in the first few months and another after a few years.
Those failure rates are often (slightly) lower on server-grade hardware... but the main arguments for server grade are support (contracts), good (tested) drivers, and large QVLs, none of which is really relevant to a cheap home-NAS build.
That being said: there are server-grade boards for lower-end platforms that aren't much more expensive than a good consumer motherboard. While I don't think the difference is huge, going consumer can be a risk, and I agree that buying good hardware might prevent issues down the road. (If it costs just a little bit more, it's worth it, but don't overdo it or expect too much from it.)
TLDR: If you can get one, get one...
Anyway, if you like to go with consumer grade rig and use the 3 cards listed, I suggest to pick a CPU/mobo with more PCIe lands, not an APU.
24 lanes (the consumer and low-end-server default) is enough for what he wants, but it depends on the motherboard.
I can push 8 (of the 16) to an HBA, 4 to an NVMe drive, and 4 to the (mostly unused) southbridge (which feeds another x4 slot, of which barely x1 is saturated).
I'm limited by the slots in my case, not the lanes. With another slot, I could run 2 HBAs, 1 NVMe drive, and 1 NIC just fine.
Don't worry about putting a NIC in the southbridge-controlled PCIe slot either: at about 1.3 lanes per 10GbE port, if you aren't using any I/O- or throughput-heavy onboard devices, you won't have many issues.
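Just to illustrate how that lane budget works out, a minimal sketch for a hypothetical 24-lane CPU (the split below is an assumption; check your actual board's block diagram before buying anything):

```python
# Hypothetical lane budget for a 24-lane CPU (assumed allocation, not a specific board).
cpu_lanes = 24
allocation = {
    "HBA #1 (x8)":              8,
    "HBA #2 (x8)":              8,
    "NVMe drive (x4)":          4,
    "southbridge/chipset link": 4,  # the chipset's x4 slot shares this uplink
}

used = sum(allocation.values())
print(f"{used}/{cpu_lanes} lanes allocated, {cpu_lanes - used} to spare")
# The 10GbE NIC (~1.3 lanes of actual traffic) can sit in the chipset's x4 slot.
```

So two HBAs, one NVMe drive, and a NIC fit within 24 lanes, provided the motherboard exposes enough physical slots.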
To conclude my story:
The reason I responded to @wl714 this thoroughly is that I do want to help/educate people. I simply do not always have the time, so I have to pick which answers I spend the most time on. Attacking me for not being helpful enough is bad form, no matter how many personal attacks you add to it.