AMD Ryzen with ECC and 6x M.2 NVMe build

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
UPDATE - For the final BOM and build please skip to:

Read below in order to follow my entire journey starting with the initial build:

-----------

I am wrapping up my final hardware choices for updating my TrueNAS SCALE system.

I went through the official hardware recommendation list, but would like to go with an AMD system because of a mix of price, power consumption, performance, form factor and current local availability.

[Screenshot: planned component list]


Additional components:
  • PSU: SEASONIC SSP-350 GT
  • CASE: SilverStone Temjin Evolution TJ08-E
  • CPU-FAN: Noctua NH-L9a-AM4 chromax.black

Storage setup:

Boot -> 2x Kingston FURY SSD 240GB (Mirror)
Fast Storage -> 3x WD Red SN700 1TB (RaidZ-1)
Archive -> 2x Western Digital Ultrastar DC HC520 12TB (Mirror)
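
In ZFS terms, the two data pools would look roughly like this (a sketch only; device names are illustrative, the boot mirror is handled by the installer, and TrueNAS would normally create the pools from the web UI):

# Fast storage: three 1TB NVMe drives in RAIDZ1 (one drive's worth of parity)
zpool create fast raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

# Archive: two 12TB HDDs as a mirror
zpool create archive mirror /dev/sda /dev/sdb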

Any issues or things I might have forgotten?

I know the boot mirror is overkill - but I still have those drives lying around for free. Also, the case is already there and that's why I'd like to go with mATX.
 


ririzarry

Cadet
Joined
Jun 25, 2022
Messages
5
pixelwave said:
I am wrapping up my final hardware choices for updating my TrueNAS SCALE system.

I went through the official hardware recommendation list, but would like to go with an AMD system because of a mix of price, power consumption, performance, form factor and current local availability.

[Screenshot: planned component list]

Additional components:
  • PSU: SEASONIC SSP-350 GT
  • CASE: SilverStone Temjin Evolution TJ08-E
  • CPU-FAN: ?

Storage setup:

Boot -> 2x Kingston FURY SSD 240GB (Mirror)
Fast Storage -> 3x WD Red SN700 1TB (RaidZ-1)
Archive -> 2x Western Digital Ultrastar DC HC520 12TB (Mirror)

Any issues or things I might have forgotten?

I know the boot mirror is overkill - but I still have those drives lying around for free. Also, the case is already there and that's why I'd like to go with mATX.
I'd go with more RAM. I have 32GB, and between running a handful of Docker containers and ARC eating up half the RAM, I'm left with less than 2GB free.
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
FAN - look for Noctua.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Re "power consumption": Do not confuse "TDP" figures with actual power draw!

The recommended motherboard choice would be AsRock Rack X570D4U (or X470D4U, B550D4U): IPMI and best chance that ECC is actually implemented.
The boot mirror is indeed overkill. What if the "fast storage" intended for? If it is for VMs/apps, these would do best on a mirror.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Re "power consumption": Do not confuse "TDP" figures with actual power draw!

The recommended motherboard choice would be AsRock Rack X570D4U (or X470D4U, B550D4U): IPMI and best chance that ECC is actually implemented.
The boot mirror is indeed overkill. What if the "fast storage" intended for? If it is for VMs/apps, these would do best on a mirror.
Yes, TDP is understood ... but it helps if the CPU has a "limiter", especially since the 5600X and this PRO one are currently more or less at price parity.

I did look at the ASRock Rack X570D4U as well ... but it comes at a +170% premium. The X570M Pro4 specs state specifically that ECC is supported for the AMD PRO series CPU line. Another reason to potentially ditch the 5600X.

The "fast storage" is for apps, files and folders that are accessed and/or synced daily. Currently no VMs are planned. To meet my requirement of 2TB of fast NVMe storage (incl. parity) I can either go with 2x2TB (400€) or 3x1TB (300€). I opted for the second option, saving me some money and hoping the difference will not be "dramatic".
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
ririzarry said:
I'd go with more RAM. I have 32GB, and between running a handful of Docker containers and ARC eating up half the RAM, I'm left with less than 2GB free.
Fair point ... I was not sure if 32GB would be enough. I currently run 6-8 Docker containers, which are a rather "light" load, and the frequently accessed storage will be the 2TB of NVMe drives. The HDDs are more for long-term storage.

I kept to the loose rule of "8GB of spare RAM plus 1GB for every TB". I have 14TB in total ... so I would end up with (8+14) 22GB of RAM ... still some room to grow (32GB installed).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
pixelwave said:
Fair point ... I was not sure if 32GB would be enough. I currently run 6-8 Docker containers, which are a rather "light" load, and the frequently accessed storage will be the 2TB of NVMe drives. The HDDs are more for long-term storage.

I kept to the loose rule of "8GB of spare RAM plus 1GB for every TB". I have 14TB in total ... so I would end up with (8+14) 22GB of RAM ... still some room to grow (32GB installed).

This is a generally healthy rule to follow, with caveats.

1) It's designed more for FreeBSD than for Linux (TrueNAS Scale), which has poor integration with ZFS and ARC memory management. In particular, unlike FreeNAS/TrueNAS Core, where the system will essentially use all free memory for ARC and then free it when the system is under memory pressure, Linux comes with a weedy static limit of using half the available memory for ARC.

2) This rule doesn't allow for consumption by jails, virtual machines, or containers. The requirements for these need to be added ON TOP of the calculated RAM.

If you are tight on funds, I would say that it is entirely fair to start at a lower (but reasonable) amount of RAM and then if need is demonstrated, upgrade later. If you do this, of course, please be aware that you might get less performance or not be able to run quite as much stuff. The best way to identify the proper amount of RAM for a ZFS system is to give it fifty percent more than you think it should need, and then bump it up or down as under-load observations suggest. The rules of thumb are there to get you into a territory that is likely to be "not terribly painful" but also "not horribly expensive".
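
If you want to see where ARC actually sits on SCALE and lift the Linux-side cap yourself, it is a one-liner; a rough sketch, assuming OpenZFS on Linux and an illustrative 24GB target:

# Current ARC size and the configured maximum (0 means the Linux default of half of RAM)
arc_summary | grep -i "arc size"
cat /sys/module/zfs/parameters/zfs_arc_max

# Raise the cap to 24GB (24 * 1024^3 bytes); this lasts until reboot
echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max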
 

somewhatdamaged

Dabbler
Joined
Sep 5, 2015
Messages
49
I've got the ASRock B450 and X570 boards, and ECC is supported and working (and shows inside TrueNAS). The APU (Ryzen PRO) is an excellent choice. Even the quad-core one I have is brilliant. Go for 64GB ECC if you can, though. Although, to be fair, when I had 32GB it worked perfectly fine, and I have a fair amount of storage (exceeding the oft-quoted 1GB per 1TB of storage... although I believe that was actually for dedupe?).

I don't think the ASRock Rack boards are necessary. My main server is absolutely rock solid, never crashes, and the board was fairly budget in the grand scheme of things - unless IPMI is essential for you (I know it's amazing, though).

You don't need to fork out for a Noctua cooler, unless you just really want one. Any budget cooler will do the job. Mine is using the standard AMD cooler, and it's more than enough.
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Ok ... I think I will go for 64GB then. Thanks for the reassurance that the Board / Pro CPU would work with ECC.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
somewhatdamaged said:
exceeding the oft-quoted 1GB per 1TB of storage... although I believe that was actually for dedupe?

Incorrect. It's just a loose rule of thumb to get people into the right sort of ratios for typical uses. ZFS can want to have an alarming amount of metadata handy in-ARC and many users are not prepared to provision it properly. As a result, their systems often end up running very slowly or poorly. Dedupe ratios are more in the range of 5GB RAM per TB of raw storage up to and beyond 25GB of RAM per TB, for small record sizes.
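
For anyone tempted anyway: ZFS can estimate what dedup would cost on an existing pool before you enable it; a quick check, assuming a pool named "tank":

# Simulate the dedup table and print the projected dedup ratio and DDT size
zdb -S tank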
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
Now the only question left is regarding the PCIe lane supply of the "ASRock X570M Pro4" with an "AMD Ryzen 5 PRO 5650G, 6C/12T" CPU installed.

The CPU has 24x PCIe 3.0 lanes (16+4+4) and I plan to utilise the iGPU for server setup and occasional local maintenance. I did read that this may also matter when it comes to the available lane count.

Now on the motherboard I have the following PCIe components planned:

PCIE1 x16 (set via BIOS as x4x4x4x4*)
  • ASRock Hyper Quad M.2 Card with
    • M.2 NVMe SSD (PCIe 3.0 x4)
    • M.2 NVMe SSD (PCIe 3.0 x4)
    • M.2 NVMe SSD (PCIe 3.0 x4)
    • M.2 NVMe SSD (PCIe 3.0 x4)

PCIE3 x16 (@ x4 speed)
  • Mellanox ConnectX-3 Pro (PCIe 3.0 x8)

So both PCIe slots account for a total of 20 lanes coming from the CPU, which in theory should be feasible according to the specification: "…dual at x16 (PCIE1) / x4 (PCIE3)". The Mellanox card also supports auto-negotiation, so it should be fine running electrically at x4 lanes in a mechanical x16 slot.

Now additionally I would like to use the M.2 slots on the motherboard:

M2_1 (@ x4 speed)
  • M.2 NVMe SSD (PCIe 3.0 x4)

M2_2 (@ x4 speed)
  • M.2 NVMe SSD (PCIe 3.0 x4)

… and also 2x SATA3 6.0 Gb/s connectors

SATA3_1 (@ 6.0 Gb/s)
  • 12 TB HDD (SATA3)

SATA3_2 (@ 6.0 Gb/s)
  • 12 TB HDD (SATA3)

Is that setup valid? Do I run out of PCIe lanes, or are some throttled down? It would be OK if the 2x M.2 NVMe on the motherboard are not at full speed / have fewer lanes.

*x4x4x4x4 PCIe bifurcation was added with a later BIOS update
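
Once the system is up, the negotiated widths can be double-checked from the OS; a quick sanity check, assuming a Linux shell on the NAS:

# Compare the maximum (LnkCap) vs. negotiated (LnkSta) link width for each device
sudo lspci -vv | grep -E "controller|LnkCap:|LnkSta:"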


I also posted this question directly to ASRock ... let's see if they can supply an answer ...
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
[Screenshot: X570 platform I/O block diagram]

Ah ok … according to the I/O diagram, the chipset actually has quite a sizeable number of extra PCIe lanes … so it should actually work!
 

diogen

Explorer
Joined
Jul 21, 2022
Messages
72
Just curious, do you really need 40Gb/s network?

What duties are planned for this server?
 

somewhatdamaged

Dabbler
Joined
Sep 5, 2015
Messages
49
Staggering amount of NVMe! Would 2x 2TB mirrored not be enough? Nice system though!
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
diogen said:
Just curious, do you really need 40Gb/s network?

What duties are planned for this server?
Not really ... but used Mellanox ConnectX-3 cards are comparably cheap and also backwards compatible with 10Gb/s.
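
Whatever speed the card ends up negotiating can be confirmed with ethtool (interface name is illustrative):

ethtool enp1s0 | grep -E "Speed|Link detected"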
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Just as an FYI, NVMe performance probably isn't going to "scale" as high as you might expect, especially if you are trying to use 40GbE to access an SMB share on those drives. But otherwise, I think this is a good design and should treat you well.

Just for your reference:
fio --bs=128k --direct=1 --directory=/mnt/newprod/ --gtod_reduce=1 \
    --ioengine=posixaio --iodepth=32 --group_reporting --name=randrw \
    --numjobs=16 --ramp_time=10 --runtime=60 --rw=randrw --size=256M --time_based
[Screenshot: fio benchmark results]
 

pixelwave

Contributor
Joined
Jan 26, 2022
Messages
174
An update regarding AMD AM4 generation support.

So I initially tried the combination of an "ASRock X570M Pro4" motherboard with an "AMD Ryzen 5 PRO 5650G (Cezanne)". Unfortunately, after some troubleshooting and contact with ASRock support, it turns out that AMD AM4 CPUs with an iGPU/APU do not support x4x4x4x4 bifurcation on any ASRock motherboard:

I have tested it with BIOS 3.71, and it is not BIOS related, but CPU related.
I used a 4750G (Renoir) CPU, and have indeed the same options that you have, only a PCIE x8 slot.
Then I retested with a 5900X (Vermeer) CPU, and then you have a PCIE x16 option, as you can see in the attached screenshot. These CPUs allow 4x4 bifurcation.

My next idea was to go with an ASRock Rack board, but it has the same issue:

ASRock Rack has their own support.
I had a chance to test this board with a Cezanne CPU and a Vermeer.
I made a screenshot with the Cezanne CPU; this board also does not support 4x4 with this CPU, only x16, x8 x8, or x8 x4 x4.
With a Vermeer CPU it can handle 4x4 as well, I also tested that.
It is not board related, it is a limitation of the CPU.

So if I want to go with the ASUS Hyper card and 4x NVMe drives (x4x4x4x4), I have to use an AMD CPU without an iGPU/APU.

So my current path is to update the config to an "ASRock Rack X470D4U" and an "AMD Ryzen 5 5600", because the X470D4U has onboard graphics and can run the 5600 headless, as confirmed by support:

Yes, this board has onboard graphics, so you don't need a graphics card or APU.
Actually the term "onboard" is not correct when it comes to desktop boards, because there is no graphics chip onboard anymore.
But this board does have onboard graphics.
 

Glowtape

Dabbler
Joined
Apr 8, 2017
Messages
45
diogen said:
Just curious, do you really need 40Gb/s network?

"Need" is always a relative term.

For instance, over here my NAS is doing game-drive duty, so ideally I'd like NVMe SSD speeds, because games. To do that, I have 64GB of RAM (ARC limited to 52GB so far), 384GB of L2ARC on a PCIe 4.0 SSD (normal ZFS datasets set to metadata-only L2ARC, the Steam zvols to "all"), and a ConnectX-3 at 40GbE in combination with NVMe-oF over RDMA (the nvmet kernel module in TrueNAS, the StarWind initiator on the Windows box).
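
For anyone wanting to replicate this, the ZFS side is just the secondarycache property, and the in-kernel nvmet target is driven through configfs. A rough sketch, assuming a zvol at /dev/zvol/tank/steam and the NIC at 10.0.0.1 (all names illustrative):

# Metadata-only L2ARC for the normal datasets, everything for the games zvol
zfs set secondarycache=metadata tank
zfs set secondarycache=all tank/steam

# Export the zvol over NVMe-oF/RDMA with the in-kernel target
modprobe nvmet nvmet-rdma
cd /sys/kernel/config/nvmet
mkdir subsystems/steam
echo 1 > subsystems/steam/attr_allow_any_host
mkdir subsystems/steam/namespaces/1
echo /dev/zvol/tank/steam > subsystems/steam/namespaces/1/device_path
echo 1 > subsystems/steam/namespaces/1/enable
mkdir ports/1
echo rdma > ports/1/addr_trtype
echo ipv4 > ports/1/addr_adrfam
echo 10.0.0.1 > ports/1/addr_traddr
echo 4420 > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/steam ports/1/subsystems/steam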

The ConnectX-3 VPI dual-port cards go for 60-70€ on eBay currently. That's hobby territory. It's also about the practical limit, given that recent desktop hardware has at best a PCIe x4 slot for non-GPU cards anyway.
 