AMD Ryzen with ECC and 6x M.2 NVMe build

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
So my current path is to update the config to use an "ASRock Rack X470D4U" and "AMD Ryzen 5 5600", because the X470D4U has an onboard GPU and can run the 5600 headless, as confirmed by support:
Good choice:
I run an X470D4U with a Ryzen 2700X, an ASUS Hyper M.2 with 4x NVMe, and an Intel X710-DA2.
NVMe as boot drive too.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Good choice:
I run an X470D4U with a Ryzen 2700X, an ASUS Hyper M.2 with 4x NVMe, and an Intel X710-DA2.
NVMe as boot drive too.
Can you use all 6x NVMe slots simultaneously? 4x from the x16 slot and 2x directly on the board ...?
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
Can you use all 6x NVMe slots simultaneously? 4x from the x16 slot and 2x directly on the board ...?
Yes!

The Hyper M.2 SSDs (4x Micron 7300 Pro 1TB) are running at full speed (PCIe 3.0 x4 = ~3500 MB/s), and the mainboard ones are a little slower at ~2000 MB/s.
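For reference, a read-only fio run is a quick way to sanity-check that kind of sequential throughput from the TrueNAS shell (a sketch - /dev/nvme0n1 is just an example device, point it at the drive you want to test):

Code:
# 1M sequential reads, queue depth 32, non-destructive thanks to --readonly
fio --name=seqread --filename=/dev/nvme0n1 --rw=read --bs=1M --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=30 --time_based --readonly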
 

mervincm

Contributor
Joined
Mar 21, 2014
Messages
157
You might want to temper your expectations a bit for "game duty". No NAS/SAN can compete with local storage when it comes to latency. Don't be disappointed when a $39 local SATA SSD in your gaming rig performs better on real-world gaming tasks than your really amazing-looking TrueNAS project.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
You might want to temper your expectations a bit for "game duty". No NAS/SAN can compete with local storage when it comes to latency. Don't be disappointed when a $39 local SATA SSD in your gaming rig performs better on real-world gaming tasks than your really amazing-looking TrueNAS project.
Of course ... there is always latency and overhead involved at various points - nothing really beats local system storage.

That is also why I ditched the idea of centralised game storage for my setup and went back to local 7 GB/s+ PCIe 4.0 NVMe drives for my gaming rig.

But there are certain use cases where centralised game storage with fast access (40GbE) comes in handy, and I might revisit that topic later depending on the use case.
 

Glowtape

Dabbler
Joined
Apr 8, 2017
Messages
45
It works well enough. Best strategy is to keep the Game-du-Jour on the local SSD, and then move it away onto the NAS for random casual play.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
It works well enough. Best strategy is to keep the Game-du-Jour on the local SSD, and then move it away onto the NAS for random casual play.
I would like to discuss the setup in more depth ... how do you store game files on your NAS and link them on the gaming machine? For example with Steam? Did you put the entire game folder on a network drive, or do you just put infrequently played games on the network and copy them over when you want to use them again?
 

Glowtape

Dabbler
Joined
Apr 8, 2017
Messages
45
I have a ZVOL (a virtual block device) on my NAS, which the Windows machine accesses via NVMe-oF (something newer and faster than iSCSI, but the latter works just as well). To Windows, it's for all intents and purposes just another SSD, formatted with NTFS.

When I'm done with the first playthrough of a game, and think I might spend some more time in it eventually, I tell Steam to move it from the Steam library on the local drive to the one on the ZVOL (you can define multiple Steam libraries and Steam has data-mover functionality, if you weren't aware). Sometimes, when I know a game is not that sensitive to disk performance, I just install it right away to the ZVOL. The same applies to other game stores/launchers.

Some disk-heavy games, say for instance Cyberpunk 2077 (which just keeps streaming like an idiot, keeping its RAM footprint at ~4GB and not using the rest of it), take a little longer to launch on the first, maybe second time, until the ARC and L2ARC are hot.

I went with a huge two-way mirror, instead of RAID-Z, for the hard disk backend. For running games, you want multiple independent spindles to reduce IO latency when it does need to hit the disks. I was considering going three-way, but couldn't justify the extra cost so far, and the L2ARC does decently.

(Some other minor details: I went with a 16KB ZVOL block size and matching NTFS cluster size for performance reasons - to keep L2ARC memory usage for block headers down and to let ZStd do its thing better. Some games don't seem to compress much of their assets.)
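For anyone curious, the NAS-side plumbing looks roughly like this - a minimal sketch only, assuming a pool called tank, a 2T volume, an example IP of 192.168.1.10 and a plain subsystem name "games". The ZVOL itself can also be created through the TrueNAS UI; the export below uses the plain Linux in-kernel nvmet target via configfs, which is one way to do NVMe/TCP (Glowtape may well use a different mechanism):

Code:
# 16K-volblocksize ZVOL for the game library, with ZStd compression (names/sizes are placeholders)
zfs create -V 2T -o volblocksize=16k -o compression=zstd tank/games

# Export it over NVMe/TCP with the in-kernel nvmet target
modprobe nvmet
modprobe nvmet-tcp
mkdir /sys/kernel/config/nvmet/subsystems/games
echo 1 > /sys/kernel/config/nvmet/subsystems/games/attr_allow_any_host
mkdir /sys/kernel/config/nvmet/subsystems/games/namespaces/1
echo /dev/zvol/tank/games > /sys/kernel/config/nvmet/subsystems/games/namespaces/1/device_path
echo 1 > /sys/kernel/config/nvmet/subsystems/games/namespaces/1/enable
mkdir /sys/kernel/config/nvmet/ports/1
echo tcp > /sys/kernel/config/nvmet/ports/1/addr_trtype
echo ipv4 > /sys/kernel/config/nvmet/ports/1/addr_adrfam
echo 192.168.1.10 > /sys/kernel/config/nvmet/ports/1/addr_traddr
echo 4420 > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/games /sys/kernel/config/nvmet/ports/1/subsystems/games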
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Awesome, everything is detected and my new TrueNAS Scale is up and running! :)

Screenshot 2022-11-11 104834.jpg


Bildschirmfoto 2022-11-11 um 14.49.51.png
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
UPDATE

So I switched from my original choice of components (ASRock X570M Pro4 + AMD Ryzen 5 PRO 5650G) to ASRock Rack support's recommendation. Purchased and set everything up:

CPU: AMD Ryzen 5 5600 6C/12T
Mainboard: ASRock X470D4U
PCIe Card I: ASUS Hyper M.2 X16 Gen 4 (x4x4x4x4 bifurcation)
PCIe Card II: PCIe 3.0 x4 Adapter

Onboard M.2_1: 128GB Intel SSD 760p M.2 NVMe
Onboard M.2_2: 128GB Intel SSD 760p M.2 NVMe
PCIe SLOT 6: 3x 1TB WD Red SN700 M.2 NVMe

As ASRock also tested and verified, the Vermeer CPU does present the BIOS option to bifurcate to x4x4x4x4, which the Cezanne did not (I guess because of the integrated iGPU of the PRO APU):

PCIe Settings.jpg


And all drives are detected in the BIOS (2x 128GB NVMe onboard and 3x 1TB NVMe in PCIe Slot 6):

pcislot empty.jpg


So far so good, and according to the manual I leave PCIe Slot 5 empty in that case. But I also wanted to use PCIe Slot 4 with four lanes, which according to the layout should be possible:

2009567-l4.jpeg


The problem - when PCIe Slot 4 is occupied, the x16 slot is somehow limited. In the BIOS only two NVMe drives from the Hyper card are then detected.

That behaviour is neither illustrated by the chipset diagram, nor does the manual mention that I cannot use PCIe SLOT 4 without limiting PCIe SLOT 6.

I also tested removing both onboard NVMe drives (M.2_1 and M.2_2) that I had installed, but that did not matter. In that case SLOT 4 had a generic PCIe 3.0 x4 adapter installed, and SLOT 6 was automatically limited to x8, therefore only detecting 2x NVMe (the 950 Pro NVMe is in SLOT 4 via the PCIe 3.0 x4 adapter):

Extra PCIe 4x Card.jpg


So it is a pity I had to find that out the hard way - another purchase gone wrong ... -.-

When PCIe SLOT 6 (x16) is used in x4x4x4x4 mode, BOTH Slot 4 & 5 cannot be used without limiting SLOT 6 again. :(
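For what it's worth, a quick way to confirm from the TrueNAS shell how the lanes actually negotiated (a sketch - 01:00.0 is only an example address, use the addresses reported by the first command):

Code:
# List the NVMe controllers that enumerated and their PCI addresses
lspci | grep -i "non-volatile"
# Show advertised vs. negotiated link width/speed for one of them
lspci -vv -s 01:00.0 | grep -E "LnkCap|LnkSta"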
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
That behaviour is neither illustrated by the chipset diagram, nor does the manual mention that I cannot use PCIe SLOT 4 without limiting PCIe SLOT 6.
It is. "MUX" is a simple lane multiplexer, not a PLX switch (and the motherboard would come out 150-200 € more expensive if it were…).
So from the 16 CPU lanes you can have 8+8 lanes in slot 6 and nothing in slot 4, OR 8 lanes in each of slot 6 and slot 4. But not 16 in slot 6 plus 8 in slot 4 - that's where EPYC (or Threadripper) comes into play instead of desktop Ryzen.
 

Glowtape

Dabbler
Joined
Apr 8, 2017
Messages
45
This PCIe lane stuff is an absolute clusterfuck for prosumers. The more affordable CPUs are relatively anemic in lanes, largely due to the fact that the only expansion card most people install is a single GPU. I'm currently having a hard time speccing a decent mainboard for a Zen 4 build for my new workstation, so that I still get enough bandwidth to the Mellanox card in it.

Threadrippers have become unaffordable (for both prosumer and, obviously, NAS use) - assuming there even is a current-generation one.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This PCIe lane stuff is an absolute clusterfuck for prosumers. The more affordable CPUs are relatively anemic in lanes,

This isn't anything new. Go back and look at an Intel E3-1230 CPU; it's got 20 PCIe lanes and 32GB max RAM. Sold for $230.

By comparison, in its intended dual CPU configuration, the E5-2630 pair had 80 PCIe lanes and 768GB max RAM, and sold for about $1250.

That's how it all was a decade ago. In the meantime, the growth in PCIe devices for NVMe and GPUs has somewhat shifted the way lanes are targeted inside mainboards, mostly out of practical necessity. Even going back TWENTY years, it was necessary to understand the architecture and data flow inside a mainboard's chipset for stuff like the ServerWorks boards. You really do need to do your homework before you buy to make sure that what you're buying is going to suit your needs. The more affordable CPUs are obviously targeted at the lower-cost segment, small servers and lower-end workstations.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
I did not really mean to send that post ... weird that it got posted anyway.

I actually made a mistake, because SLOT 4 and 5 are switched between the X470 chipset diagram and the mainboard layout.

So I put the Mellanox in the other slot now, which I originally thought was connected with Slot 6.

All NVMe drives are recognized correctly, as well as my Mellanox card ... doing some further testing and will update soon if it is finally working.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
It works!

So here is my final setup and build overview:

CPU: AMD Ryzen 5 5600 6C/12T (boxed cooler)
Mainboard: ASRock X470D4U (with onboard GPU and IPMI)
RAM: 64GB ECC Kingston Server Premier DDR4-3200 (2x32GB)
PCIe Card I: ASUS Hyper M.2 X16 Gen 4 (x4x4x4x4 bifurcation)
PCIe Card II: 40GbE QSFP+ Mellanox ConnectX-3 (limited to PCIe 3.0 x4)
PSU: 350W SEASONIC SSP-350 GT
Case: SilverStone Temjin Evolution TJ08-E

Boot: 2x 128GB Intel SSD 760p (Mirror)
Data: 2x 12TB Western Digital Ultrastar DC HC520 (Mirror)
Docker & Apps: 3x 1TB WD Red SN700 (Raid-Z1)

Bildschirmfoto 2022-11-14 um 08.12.50.png
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Everything works so far and is recognised by the TrueNAS system. I have one spare NVMe slot I can populate later with an additional 1TB drive to either have 2x mirrors or a 4-wide RAID-Z1 with 3TB of usable NVMe storage.
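Just as a note on those two options: since a RAID-Z1 vdev could not simply be widened by adding a disk at this point, either layout would mean recreating the pool with all four drives. At the zpool level it would look roughly like this (a sketch only - the pool name apps and the nvme device names are placeholders, and on TrueNAS SCALE the pool would normally be built through the UI):

Code:
# Option A: two striped 2-way mirrors - 2TB usable, better random IO
zpool create apps mirror nvme0n1 nvme1n1 mirror nvme2n1 nvme3n1
# Option B: 4-wide RAID-Z1 - ~3TB usable, any one drive may fail
zpool create apps raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1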

I will post the Mellanox card "adventure" and my findings in a separate thread (link to be added later).

So after many hours of research this is currently (11/2022) one of the best "MAX NVMe" builds (supports 6x) I could find ... if you do not want to go with more expensive and power-hungry EPYC / Xeon combinations.

Idle, with all components above installed and active (+IPMI, +1GbE, +40GbE), I measure below 60W at the plug, without any optimizations yet (BIOS / P-states / undervolting / ...):

Bildschirmfoto 2022-11-14 um 08.19.41.png
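If anyone wants a starting point for the software side of those optimizations, the usual suspects on Linux are the CPU frequency governor and powertop's runtime tunables (a sketch - powertop may need to be installed first, and BIOS-level P-state/undervolting settings are a separate topic):

Code:
# Show the active CPU frequency governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Switch all cores to the power-saving governor
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo powersave > "$g"; done
# Apply powertop's suggested runtime power-management tunables
powertop --auto-tune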


Also, going back to the original memory discussion and recommendation ... I still have quite a large amount (54/64GB) of memory "free", which makes me wonder if I really needed that much. Even after copying files all night there was not much change in the graph:

Bildschirmfoto 2022-11-14 um 08.23.22.png
 

Glowtape

Dabbler
Joined
Apr 8, 2017
Messages
45
If you don't intend to use the RAM for VMs or containers, you should set up tunables for ARC max.

I have an init script (/root/tunables.sh) set up in System -> Advanced, giving ARC up to 52GB of my 64GB. Otherwise it'd cap at 50% of RAM:

Code:
#!/bin/bash

# Let the L2ARC scan further ahead when filling (default headroom is 2)
echo -n 16 > /sys/module/zfs/parameters/l2arc_headroom
# Also cache prefetched (streaming) reads in the L2ARC
echo -n 0 > /sys/module/zfs/parameters/l2arc_noprefetch
# Raise the ARC cap to 52 GiB (55834574848 bytes) instead of the 50%-of-RAM default
echo -n 55834574848 > /sys/module/zfs/parameters/zfs_arc_max

# Disable swap so the system doesn't start swapping under ARC memory pressure
/sbin/swapoff -a
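After a reboot (or after running the script manually), the effective values can be checked as a quick sanity test:

Code:
# ARC maximum as currently set by the module parameter (bytes)
cat /sys/module/zfs/parameters/zfs_arc_max
# Current ARC size and cap as ZFS itself reports them
grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats
# Confirm swap really is off (no output = no active swap)
swapon --show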
 

somewhatdamaged

Dabbler
Joined
Sep 5, 2015
Messages
49
I wouldn't personally run docker/kubernetes off spinning disks - SSD all the way imo
 