Supermicro 4U Ryzen Build – PCIe Lane Usage on ASRock Rack X570D4U-2L2T

ksnell

Cadet
Joined
Feb 3, 2021
Messages
4
It’s time to replace my Plex/VM/file server and I could use some guidance. I am based in the US, so everything would be sourced from Amazon, Newegg, eBay, etc. I like the performance, power draw, and value of Ryzen CPUs, and I would like to modify a Supermicro 4U 24-bay chassis to accept an ATX power supply with a custom rear window, swap the fans, and end up with a quiet, powerful homelab server. My main question is whether I can squeak by with enough PCIe lanes on Ryzen without jumping the budget up substantially to Threadripper, EPYC, or Xeon for similar performance, but I have more pointed questions below.

Requirements:
- ESXi/TrueNAS Core all-in-one
- 10-15 VMs (TrueNAS Core, Plex, a large Docker VM, and others) on an M.2 NVMe SSD
- Large storage pool managed by TrueNAS Core, with the HBA passed through directly to the TrueNAS Core VM
- Future-proof with 10GbE networking
- Ideally a Passmark score above 35k, plus a GPU so transcoding 4K streams is never an issue when needed

Specs:
OS: ESXi with TrueNAS Core AIO
Case: Supermicro 24-bay 4U with SAS2-846EL2 backplane
Motherboard: ASRock Rack X570D4U-2L2T (open to suggestions, but this seems to hit most of the marks)
CPU: Ryzen 9 5900X or Ryzen 9 5950X
CPU Cooler: undecided
Case Fans: undecided
ESXi Boot Device: Samsung 32GB USB flash drive
Storage: 6x18TB HDDs (RAIDZ2, with the ability to add 3 more vdevs in the future)
M.2 Slot 1: 2TB Samsung NVMe M.2 (for VMs)
M.2 Slot 2: Empty at first, but are there enough PCIe lanes left to use this down the road without being bottlenecked?
RAM: 4x32GB (undecided model, but my current server has 48GB and it is not enough, and I am not even running TrueNAS Core yet)
HBA: LSI 9207-8i or equivalent (open to suggestions)
GPU: NVIDIA RTX 3060 Ti
PSU: EVGA 850W P2 or 1000W P2
L2ARC: Open to suggestions, but from what I read this is not really necessary for Plex streaming, especially with the amount of RAM I plan to have.
SLOG: Open to suggestions here. Is this needed with 10GbE NICs?

Questions:

1. PCIe Lanes - Does this motherboard/Ryzen combination have enough PCIe lanes for this setup? Between the GPU, HBA, PCIe 4.0 SSD, and onboard 10GbE NICs, I feel like it is cutting it close, and I'm not sure it leaves any room for future expansion if I need more SSD space (via the second M.2 slot or SATA ports). This review of the motherboard breaks it down a little, but I am still a bit confused (see the lane tally sketch after this list).
[ASRock Rack X570D4U-2L2T Review an AMD Ryzen Server Motherboard](https://www.servethehome.com/asrock-rack-x570d4u-2l2t-review-amd-ryzen-server-motherboard/)

2. GPU Bandwidth - Utilizing both the x16 and x8 slots cuts them both down to x8. Is this a problem for the GPU? There is a lot of discussion online about the 3060 Ti working just fine at PCIe 3.0 x16 vs. PCIe 4.0 x16 (1-2% degradation), but not a lot about x8. It seems to me that PCIe 4.0 x8 theoretically equals PCIe 3.0 x16 in bandwidth, so it should be fine? (See the bandwidth arithmetic after this list.)

3. Thoughts on the vdev arrangement? It is painful losing 2 drives to parity per 6-drive vdev when coming from hardware RAID, where you can add drives one at a time.

4. Any other tips/things that stand out would be appreciated!
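
To show where I got confused, here is how I am tallying the lanes after reading the STH review, written out as a rough sketch. The exact CPU vs. chipset split is my own reading of the review, so please correct me if it's wrong:

```python
# Rough PCIe lane tally for the planned X570D4U-2L2T build.
# The CPU/chipset assignment below is my assumption from the STH review.
CPU_LANES = 24  # usable PCIe 4.0 lanes from an AM4 Ryzen, incl. the chipset link

cpu_devices = {
    "x16 slot (GPU, runs at x8 when both slots are used)": 8,
    "x8 slot (LSI HBA)": 8,
    "M.2 slot 1 (VM NVMe SSD)": 4,
    "Chipset uplink (PCIe 4.0 x4)": 4,
}

# Everything below shares the single chipset uplink above.
chipset_devices = ["2x 10GbE (Intel X550)", "M.2 slot 2 (future)", "SATA ports"]

used = sum(cpu_devices.values())
print(f"CPU lanes used: {used}/{CPU_LANES}")
for name, lanes in cpu_devices.items():
    print(f"  {name}: x{lanes}")
print("Sharing the chipset uplink:", ", ".join(chipset_devices))
```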
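
And the back-of-the-envelope math behind question 2, using the usual approximate per-lane throughput figures after encoding overhead:

```python
# Approximate usable bandwidth per PCIe lane, one direction, in GB/s.
GBPS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

def link_bw(gen, lanes):
    """Total one-direction bandwidth of a link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

print(f"PCIe 3.0 x16: {link_bw('PCIe 3.0', 16):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 4.0 x8 : {link_bw('PCIe 4.0', 8):.1f} GB/s")   # ~15.8 GB/s
# So a 3060 Ti in a PCIe 4.0 x8 slot has roughly the same bandwidth
# available to it as in a PCIe 3.0 x16 slot.
```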
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
What's wrong with server PSUs? The noise will come from the case fans, and in a server you may not have any option to replace them with quiet fans AND still cool your drives.
The only contention on PCIe lanes is that the second M.2 comes from the chipset and shares a PCIe 4.0 x4 link to the CPU with the network. With an LSI HBA (probably required to pass the drives to TrueNAS through ESXi), the drives are already on the CPU.
An L2ARC is only useful if you expect to repeatedly serve the same content. At the least, build and see what the ARC hit ratio is before considering an L2ARC.
SLOG is only useful for sync writes (databases or NFS). There is no need to implement every possible feature of ZFS.
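
If you want to check rather than guess, here is a minimal sketch of reading the hit ratio on a FreeBSD-based TrueNAS Core box, assuming the usual kstat.zfs.misc.arcstats sysctls are exposed (arc_summary, where available, reports the same figures):

```python
# Minimal sketch: compute the ZFS ARC hit ratio on FreeBSD/TrueNAS Core.
# Assumes the kstat.zfs.misc.arcstats.{hits,misses} sysctls are available.
import subprocess

def read_sysctl(name):
    """Read a numeric sysctl value via the sysctl(8) command."""
    out = subprocess.run(["sysctl", "-n", name],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

hits = read_sysctl("kstat.zfs.misc.arcstats.hits")
misses = read_sysctl("kstat.zfs.misc.arcstats.misses")
total = hits + misses

ratio = (hits / total * 100) if total else 0.0
print(f"ARC hits: {hits}, misses: {misses}, hit ratio: {ratio:.1f}%")
# A consistently high hit ratio (say, above 90%) suggests an L2ARC would add little.
```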

Other than using AMD Ryzen, which are not yet recommended over Intel CPUs, your list looks about fine. Consider a step down to Ryzen 3xxx over Unobtainium 5xxx because a NAS does not need the latest and greatest hardware.

With 18 TB drives, RAIDZ2 is the bare minimum for data safety. Consider up to three 8x18TB vdevs if it hurts a little less. A hardware RAID would still cost you 2 drives for parity per set of drives in RAID6.
Why RAID 5 stops working in 2009.
And this is 2021, with drives a decimal order of magnitude larger on basically the same URE rate…
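
To put numbers on "hurts a little less", a quick back-of-the-envelope comparison for a full 24-bay chassis of 18 TB drives, ignoring ZFS metadata and padding overhead (and the TB-vs-TiB difference):

```python
# Rough parity-overhead comparison for 24x 18 TB drives in RAIDZ2 vdevs.
DRIVE_TB = 18
PARITY_PER_VDEV = 2  # RAIDZ2

def pool(vdevs, width):
    """Return (total drives, parity drives, approx. usable TB) for the layout."""
    data_drives = vdevs * (width - PARITY_PER_VDEV)
    return vdevs * width, vdevs * PARITY_PER_VDEV, data_drives * DRIVE_TB

for vdevs, width in [(4, 6), (3, 8)]:
    drives, parity, usable = pool(vdevs, width)
    print(f"{vdevs} x {width}-wide RAIDZ2: {drives} drives, "
          f"{parity} to parity, ~{usable} TB of data space")
# 4 x 6-wide: 24 drives, 8 to parity, ~288 TB
# 3 x 8-wide: 24 drives, 6 to parity, ~324 TB
```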
 

ksnell

Cadet
Joined
Feb 3, 2021
Messages
4
What's wrong with server PSUs? The noise will come from the case fans, and in a server you may not have any option to replace them with quiet fans AND still cool your drives.
I don't mind server PSUs too much, especially the PWS-920P-SQ quiet models from Supermicro but they tend to lack the connection types needed for some of the desktop grade items I plan to use (like GPU). Or maybe I am just missing something :).

I do have a couple of those SQ PSUs in a Supermicro 2U (that is being replaced) and I definitely agree about the fans. It has the 80mm fans I never tinkered with, and they SCREAM. I plan to do something like the build in the link below. The 4U chassis has more space to remove the entire fan wall and replace it with 120mm fans, which are generally quieter than 80mm fans. This person added a custom fan wall up front as well. It's certainly going to be something I'll monitor and adjust (noise vs. fan speed), but it is not going to live in a data center, so sound must be addressed.

Making a quiet Supermicro SC846 - 100 TB file server - YouTube

The only contention on PCIe lanes is that the second M.2 comes from the chipset and shares a PCIe 4.0 x4 link to the CPU with the network. With an LSI HBA (probably required to pass the drives to TrueNAS through ESXi), the drives are already on the CPU.

Is there any way to know how much performance would be lost on the second M.2? Does this mean that if the 10GbE NICs are being hammered, the second M.2 would essentially bottleneck to a crawl? If that is the case, I guess a SATA SSD in a SATA port might be a better expansion option?

Yes, that is the plan with the HBA. Straight hardware passthrough to the TrueNAS Core VM so it has full control of the drives attached to the backplane. The biggest disadvantage to this method, from what I can see, is that at least with this motherboard there is no way to pass through the SATA ports on the board. Not really necessary for me as of now because of the HBA/backplane, but certainly something to consider with a setup like this. I don't know what that would mean if I ever wanted to add L2ARC or SLOG... can it come directly from the TrueNAS Core boot device, or does it have to be passed through as a separate space/partition?


An L2ARC is only useful if you expect to repeatedly serve the same content. At the least, build and see what the ARC hit ratio is before considering an L2ARC.
SLOG is only useful for sync writes (databases or NFS). There is no need to implement every possible feature of ZFS.
This was what I suspected. Start with zero and go from there at least for my use case. Thanks.


Other than using AMD Ryzen, which are not yet recommended over Intel CPUs, your list looks about fine. Consider a step down to Ryzen 3xxx over Unobtainium 5xxx because a NAS does not need the latest and greatest hardware.

Yeah, I saw in this hardware guide that they are less optimized, but I didn't really know they were not recommended. Hoping support improves...

They say:
AMD CPUs are making a comeback thanks to the Ryzen and EPYC (Naples/Rome) lines but support for these platforms has been relatively limited on FreeBSD and, by extension, FreeNAS. They will work, but there has been less run time and performance tuning.

Ha, "Unobtainium 5xxx". My original idea was a 3900/3950....but I have been thinking about this for so long they up and released the next gen. A 5xxx is more about headroom for the VMs and transcoding video, but it is certainly under consideration.

With 18 TB drives, RAIDZ2 is the bare minimum for data safety. Consider up to three 8x18TB vdevs if it hurts a little less. A hardware RAID would still cost you 2 drives for parity per set of drives in RAID6.
Why RAID 5 stops working in 2009.
And this is 2021, with drives a decimal order of magnitude larger on basically the same URE rate…

I am definitely not considering single-parity RAIDZ1, and if the data were in more of a production environment, RAIDZ3 would be under consideration. Vdevs are new to me, so getting that breakout right is the concern. Sure, you lose 2 drives with hardware RAID6, but it is essentially one vdev (since hardware RAID doesn't have vdevs), so when you expand you are still only losing 2 drives for the entire array. With this 6x18TB TrueNAS setup, I would lose 2 drives per 6-drive vdev, right?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I don't mind server PSUs too much, especially the PWS-920P-SQ quiet models from Supermicro but they tend to lack the connection types needed for some of the desktop grade items I plan to use (like GPU). Or maybe I am just missing something :).
Getting adapters for the connectors seems easier than modding the case to take an ATX PSU.

Is there any way to know how much performance would be lost on the second M.2? Does this mean that if the 10GbE NICs are being hammered, the second M.2 would essentially bottleneck to a crawl?
Assuming you hammer the M.2 AND the 10G networking at the same time, the X550 would take up to 4 PCIe 3.0 lanes and leave as much for the M.2 and the rest of the chipset peripherals, so not quite a crawl.
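
Roughly, with approximate figures and assuming the X550 and the second M.2 both sit behind the chipset's PCIe 4.0 x4 uplink as described:

```python
# Rough worst-case contention on the X570 chipset's PCIe 4.0 x4 uplink.
# Approximate numbers; assumes 2x10GbE and the second M.2 share that uplink.
UPLINK_GBPS = 1.969 * 4   # PCIe 4.0 x4 uplink, ~7.9 GB/s
NIC_GBPS = 2 * 10 / 8     # 2x 10GbE fully saturated, ~2.5 GB/s

left_for_m2 = UPLINK_GBPS - NIC_GBPS
print(f"Uplink: {UPLINK_GBPS:.1f} GB/s, NICs at full tilt: {NIC_GBPS:.1f} GB/s")
print(f"Left for the second M.2 and other chipset devices: ~{left_for_m2:.1f} GB/s")
# Still more than a PCIe 3.0 x4 NVMe drive can use, so not a crawl.
```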

I am definitely not considering single-parity RAIDZ1, and if the data were in more of a production environment, RAIDZ3 would be under consideration. Vdevs are new to me, so getting that breakout right is the concern. Sure, you lose 2 drives with hardware RAID6, but it is essentially one vdev (since hardware RAID doesn't have vdevs), so when you expand you are still only losing 2 drives for the entire array. With this 6x18TB TrueNAS setup, I would lose 2 drives per 6-drive vdev, right?
Correct. But I doubt that a RAID setup would have 24 drives in a single RAID6; it would more likely be a nested RAID60 setup, with two parity drives per RAID6 set, which is the equivalent of a ZFS stripe of RAIDZ2 and incurs the same loss of space to parity.
 

ksnell

Cadet
Joined
Feb 3, 2021
Messages
4
Getting adapters for the connectors seems easier than modding the case to take an ATX PSU.
I'll have to see what options I have with this one, but my 2U is very limited. This is what I have on the way. The entire rear backplanes/caddies/bays will be removed and I'll make a new rear window out of 14-gauge steel. It has the SAS2-846EL2 backplane in it, which is what I cared about, but it is crazy how much cheaper these are than the Supermicro 24-bay chassis now. It seems everyone had the same idea and bought them up since I last looked maybe 5 years ago!

Assuming you hammer the M.2 AND the 10G networking at the same time, the X550 would take up to 4 PCIe 3.0 lanes and leave as much for the M.2 and the rest of the chipset peripherals, so not quite a crawl.
Puts me at ease a little. I highly doubt I'll be able to saturate 2x10GbE at home for any length of time.

Correct. But I doubt that a RAID setup would have 24 drives in a single RAID6; it would more likely be a nested RAID60 setup, with two parity drives per RAID6 set, which is the equivalent of a ZFS stripe of RAIDZ2 and incurs the same loss of space to parity.
Very valid point.
 