Build report: Ryzen Mini-ITX all-flash NAS build

directhex

Dabbler
Joined
Aug 29, 2023
Messages
15

THE STATUS QUO​

Back in 2015, I bought an off-the-shelf NAS, a QNAP TS-453mini, to act as my file store and Plex server. I had previously owned a Synology box, and whilst I liked the Synology OS and experience, the hardware was underwhelming. I loaded up the successor QNAP with four 5TB drives in RAID10, and moved all my files over (after some initial DoA drive issues were handled).

QNAP TS-453mini product photo


That thing has been in service for about 8 years now, and it’s been… a mixed bag. It was definitely more powerful than the predecessor system, but it was clear that QNAP’s OS was not up to the same standard as Synology’s – perhaps best exemplified by “HappyGet 2”, the QNAP webapp for downloading videos from streaming services like YouTube, whose icon is a straight rip-off of StarCraft 2. On its own, meaningless – but a bad omen for overall software quality.

The logo for QNAP HappyGet 2 and Blizzard's Starcraft 2 side by side


Additionally, the embedded Celeron processor in the NAS turned out to be an issue in some cases. It turns out that, when playing back videos with subtitles, most Plex clients do not support subtitles properly – instead they rely on the Plex server doing JIT transcoding to bake the subtitles directly into the video stream. I discovered this with some Blu-Ray rips of Game of Thrones – some episodes would play back fine on my smart TV, but episodes with subtitled Dothraki speech would play at only 2 or 3 frames per second.

The final straw was a ransomware attack, which went through all my data and locked every file below a 60MiB threshold. Practically all my music, gone. A substantial collection of downloaded files, all gone. Some of these files had been carried around since my college days – digital rarities, or at least digital detritus I felt a real sense of loss at having to replace. This episode was caused by ransomware targeting specific vulnerabilities in the QNAP OS, not by an error on my part.

So, I decided to start planning a replacement with:
  • A non-garbage OS, whilst still being a NAS-appliance type offering (not an off-the-shelf Linux server distro)
  • Full remote management capabilities
  • A small form factor comparable to off-the-shelf NAS
  • A powerful modern CPU capable of transcoding high resolution video
  • All flash storage, no spinning rust
At the time, no consumer NAS offered everything (The Asustor FS6712X exists now, but didn’t when this project started), so I opted to go for a full DIY rather than an appliance – not the first time I’ve jumped between appliances and DIY for home storage.

SELECTING THE CORE OF THE SYSTEM​

There aren’t many companies which will sell you a small motherboard with IPMI. Supermicro is a bust, so is Tyan. But ASRock Rack, the server division of third-tier motherboard vendor ASRock, delivers. Most of their boards aren’t actually compliant Mini-ITX size; they’re a proprietary “Deep Mini-ITX” with the regular screw holes but 40mm of extra length (and a commensurately small list of compatible cases). But, thankfully, they do have a tiny selection of boards without the extra size, and I stumbled onto the X570D4I-2T, a board with an AMD AM4 socket and the mature X570 chipset. This board can use any AMD Ryzen chip (before the latest-gen Ryzen 7000 series); has built-in dual 10 gigabit Ethernet; IPMI; four (laptop-sized) RAM slots with full ECC support; one M.2 slot for NVMe SSD storage; a PCIe 16x slot (generally for graphics cards, but we live in a world of possibilities); and up to 8 SATA drives OR a couple more NVMe SSDs. It’s astonishingly well featured – just a shame it costs about $450 when a good consumer-grade Mini-ITX AM4 board costs less than half that.

I was so impressed with the offering, in fact, that I crowed about it on Mastodon and ended up securing ASRock another sale, with someone else embarking on a very similar project to mine around the same time.

The next question was the CPU. An important feature of a system expected to run 24/7 is low power draw, and AM4 chips can consume as much as 130W under load out of the box. At the other end, some models can require as little as 35W under load – the OEM-only “GE” suffix chips, which are readily found for import on eBay. In their “PRO” variant they also support ECC (all non-G Ryzen chips support ECC; among the G-series APUs, only the PRO parts do). The top of the range 8-core Ryzen 7 PRO 5750GE is prohibitively expensive, but the slightly weaker 6-core Ryzen 5 PRO 5650GE was affordable, and one arrived quickly from Hong Kong. Supplemented with a couple of cheap 16 GiB SODIMM sticks of DDR4-3200 direct from Micron for under $50 apiece, that left only cooling as an unsolved problem to get a bootable test system.

The official support list for the X570D4I-2T only includes two rackmount coolers, both expensive and hard to source. The reason for such a small list is the non-standard cooling layout of the board – instead of an AM4 hole pattern with the standard plastic AM4 retaining clips, it has an Intel 115x hole pattern with a non-standard backplate (Intel 115x boards have no backplate; the stock Intel 115x cooler attaches to the holes with push pins). As such, every cooler compatibility list excludes this motherboard. However, the backplate is only secured with a mild adhesive – with minimal pressure and a plastic prying tool it can be removed, giving compatibility with any 115x cooler (which covers basically every CPU cooler from the past decade or more). I picked an oversized low-profile Thermalright AXP120-X67, hoping that its 120mm fan would cool the nearby MOSFETs and X570 chipset too.


TESTING UP TO THIS POINT​

Using a spare ATX power supply, I had enough of a system built to explore the IPMI and UEFI instances, and run MemTest86 to validate my progress. The memory test ran without a hitch and confirmed the ECC was working, although it also showed that the memory was only running at 2933 MT/s instead of the rated 3200 MT/s (a limit imposed by the motherboard, as higher speeds are considered overclocking). The IPMI interface isn’t the best I’ve ever used by a long shot, but it’s minimum viable and allowed me to configure the basics and boot from media entirely via a Web browser.



One sad discovery, however, which I’ve never seen documented before, concerns PCIe bifurcation.

With PCI Express, you have a number of “lanes” which are allocated in groups by the motherboard and CPU manufacturer. For Ryzen prior to Ryzen 7000, that’s 16 lanes in one slot for the graphics card; 4 lanes in one M.2 connector for an SSD; then 4 lanes connecting the CPU to the chipset, which can offer whatever it likes for peripherals or extra lanes (bottlenecked by that shared 4x link to the CPU, if it comes down to it).

It’s possible, with motherboard and CPU support, to split PCIe groups up – for example an 8x slot could be split into two 4x slots (e.g. allowing two NVMe drives in an adapter card – NVMe drives these days all use 4x). However, with a “Cezanne” Ryzen with integrated graphics, the 16x graphics card slot cannot be split into four 4x slots (i.e. used for NVMe drives) – the most bifurcation it allows is 8x4x4x, which is useless in a NAS.



As such, I had to abandon the all-NVMe NAS I was considering: the 16x slot split into four 4x, combined with two 4x connectors fed by the X570 chipset, for a total of 6 NVMe drives. 7.6TB U.2 enterprise disks are remarkably affordable (cheaper than consumer SATA 8TB drives), but alas, I was locked out by my 5650GE. Thankfully I found out before spending hundreds on a U.2 hot swap bay. The NVMe setup would have been nearly 10x as fast as SATA SSDs, but at least the SATA SSD route still outperforms any spinning rust choice on the market (including the fastest 10K RPM SAS drives).
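
To make the lane accounting concrete, here's a rough sketch of how many x4 NVMe drives each bifurcation option would have supported on this board (my own tally from the figures above, counting the two chipset OCuLink ports separately):

```python
# Rough tally of x4 NVMe drives the X570D4I-2T could host, depending on how
# the CPU's x16 slot bifurcates. The two OCuLink ports hang off the X570
# chipset and are independent of the slot (each x4 = one NVMe drive).

CHIPSET_OCULINK_DRIVES = 2

def slot_drives(groups):
    """Each x4-or-wider group can hold one x4 NVMe drive."""
    return sum(1 for lanes in groups if lanes >= 4)

layouts = {
    "x4x4x4x4 (not offered by Cezanne APUs)": [4, 4, 4, 4],
    "x8x4x4 (the most a 5650GE allows)":      [8, 4, 4],
    "x16 (no bifurcation)":                   [16],
}

for name, groups in layouts.items():
    print(f"{name}: {slot_drives(groups) + CHIPSET_OCULINK_DRIVES} drives total")

# x4x4x4x4 -> 6 drives (the all-NVMe plan)
# x8x4x4   -> 5 drives (one drive in the x8 group leaves 4 lanes stranded)
# x16      -> 3 drives
```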

CONTAINING THE CORE​

The next step was to pick a case and power supply. A lot of NAS cases require an SFX (rather than ATX) size supply, so I ordered a modular SX500 unit from Silverstone. Even if I ended up with a case requiring ATX, it’s easy to adapt an SFX power supply to an ATX mount, and the worst outcome is a bit more free space inside the case – hardly the worst problem to have.

That said, on to picking a case. There’s only one brand with any cachet making ITX NAS cases, Silverstone. They have three choices in an appropriate size: CS01-HS, CS280, and DS380. The problem is, these cases are all badly designed garbage. Take the CS280 as an example, the case with the most space for a CPU cooler. Here’s how close together the hotswap bay (right) and power supply (left) are:


Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

With actual cables connected, the cable clearance problem is even worse:


Internal image of Silverstone CS280 NAS build. Image stolen from ServeTheHome

Remember, this is the best of the three cases for internal layout, the one with the least restriction on CPU cooler height. And it’s garbage! Total hot garbage! I decided therefore to completely skip the NAS case market, and instead purchase a 5.25″-to-2.5″ hot swap bay adapter from Icy Dock, and put it in an ITX gamer case with a 5.25″ bay. This is no longer a served market – 5.25″ bays are extinct since nobody uses CD/DVD drives anymore. The ones on the market are really new old stock from 2014-2017: the Fractal Design Core 500, Cooler Master Elite 130, and Silverstone SUGO 14. Of the three, the Fractal is the best rated, so I opted for that one – however, it seems the global supply of “new old stock” fully dried up in the two weeks between me making a decision and placing an order, leaving only the Silverstone case.

Icy Dock have a selection of 8-bay 2.5″ SATA 5.25″ hot swap chassis in their ToughArmor MB998 series. I opted for the ToughArmor MB998IP-B to reduce cable clutter – it requires only two SFF-8611-to-SFF-8643 cables from the motherboard to serve all eight bays, which should make airflow less of a mess. The X570D4I-2T doesn’t have any SATA ports on board; instead it has two SFF-8611 OCuLink ports, each supporting 4 PCI Express lanes OR 4 SATA connectors via a breakout cable. I had hoped to get the ToughArmor MB118VP-B and run six U.2 drives, but as I said, the PCIe bifurcation issue with Ryzen “G” chips meant I wouldn’t be able to run all six bays.




ACTUAL STORAGE FOR THE STORAGE SERVER​

My concept for the system always involved a fast boot/cache drive in the motherboard’s M.2 slot, non-redundant (just backups of the config if the worst were to happen) and separate storage drives somewhere between 3.8 and 8 TB each (somewhere from $200-$350). As a boot drive, I selected the Intel Optane SSD P1600X 58G, available for under $35 and rated for 228 years between failures (or 11,000 complete drive rewrite cycles).
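
The arithmetic behind those two ratings (assuming the usual MTBF-in-hours and total-drive-writes interpretations) works out roughly like this:

```python
# Back-of-envelope check of the P1600X 58G figures quoted above, assuming
# "years between failures" is the MTBF spec in hours and "complete drive
# rewrite cycles" means total data written.

capacity_gb = 58
rewrite_cycles = 11_000
mtbf_years = 228

endurance_tb = capacity_gb * rewrite_cycles / 1000  # ~638 TB written
mtbf_hours = mtbf_years * 8766                      # ~2.0 million hours

print(f"Rated endurance: ~{endurance_tb:.0f} TB written")
print(f"MTBF: ~{mtbf_hours / 1e6:.1f} million hours")
```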

So, on to the big expensive choice: storage drives. I narrowed it down to two contenders: new-old-stock Intel D3-S4510 3.84TB enterprise drives, at about $200, or Samsung 870 QVO 8TB consumer drives, at about $375. I spent a long time agonizing over the specification differences, the ZFS usage reports, the expected lifetime endurance figures, but in reality it came down to price – $1600 of expensive drives vs $3200 of even more expensive drives. That’s 27TB of usable capacity in RAID-Z1, or 23TB in RAID-Z2. For comparison, I’m using about 5TB on the old NAS, so that’s a LOT of headroom for expansion.
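
For the capacity numbers, a quick sketch of the sums (real ZFS usable space comes out slightly lower once metadata, padding and reserved slop space are counted):

```python
# Usable-capacity estimate for 8 x 3.84 TB drives: (data drives) x (size).
# Actual ZFS usable space is a bit lower once metadata, padding and the
# reserved slop space are accounted for.

drives, size_tb = 8, 3.84

for parity, name in [(1, "RAID-Z1"), (2, "RAID-Z2")]:
    print(f"{name}: ~{(drives - parity) * size_tb:.1f} TB usable")
# RAID-Z1 ~26.9 TB, RAID-Z2 ~23.0 TB
```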


BOOTING UP​

Bringing it all together is the OS. I wanted an “appliance” NAS OS rather than self-administering a Linux distribution, and after looking into the surrounding ecosystems, I decided on TrueNAS SCALE (the beta of the 2023 release, based on Debian 12).



I set up RAID-Z1, and with zero tuning (other than enabling auto-TRIM), got the following performance numbers:

                    IOPS     Bandwidth
4k random writes    19.3k    75.6 MiB/s
4k random reads     36.1k    141 MiB/s
Sequential writes   -        2300 MiB/s
Sequential reads    -        3800 MiB/s
Results using fio parameters suggested by Huawei
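
As a quick sanity check of my own numbers, the 4k IOPS and bandwidth columns agree with each other at a 4 KiB block size:

```python
# bandwidth = IOPS x 4 KiB for the random tests above
for label, iops in [("4k random writes", 19_300), ("4k random reads", 36_100)]:
    mib_s = iops * 4 * 1024 / (1024 * 1024)
    print(f"{label}: {iops:,} IOPS = {mib_s:.1f} MiB/s")
# ~75.4 MiB/s and ~141.0 MiB/s, matching the table
```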

And for comparison, the maximum theoretical numbers quoted by Intel for a single drive:

                    IOPS     Bandwidth
4k random writes    16k      ?
4k random reads     90k      ?
Sequential writes   -        280 MiB/s
Sequential reads    -        560 MiB/s
Numbers quoted by Intel SSD successors Solidigm.

Finally, the numbers reported on the old NAS with four 7200 RPM hard disks in RAID 10:

                    IOPS     Bandwidth
4k random writes    430      1.7 MiB/s
4k random reads     8006     32 MiB/s
Sequential writes   -        311 MiB/s
Sequential reads    -        566 MiB/s

Performance seems pretty OK. There’s always going to be an overhead to RAID. I’ll settle for the 45x improvement on random writes vs. its predecessor, and 4.5x improvement on random reads. The sequential write numbers are gonna be impacted by the size of the ZFS cache (50% of RAM, so 16 GiB), but the rest should be a reasonable indication of true performance.
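
For anyone checking my working, the improvement factors and the cache figure come from:

```python
# The "45x" / "4.5x" claims, plus the default ARC cap (ZFS on Linux limits
# the ARC to 50% of RAM unless zfs_arc_max is tuned).

new_writes, old_writes = 19_300, 430
new_reads,  old_reads  = 36_100, 8_006
ram_gib = 32

print(f"Random write improvement: {new_writes / old_writes:.0f}x")  # ~45x
print(f"Random read improvement:  {new_reads / old_reads:.1f}x")    # ~4.5x
print(f"Default ARC cap: {ram_gib // 2} GiB")                       # 16 GiB
```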

It took me a little while to fully understand the TrueNAS permissions model, but I finally got Plex configured to access data from the same place as my SMB shares, which have anonymous read-only access or authenticated write access for myself and my wife, working fine via both Linux and Windows.

And… that’s it! I built a NAS. I intend to add some fans and more RAM, but that’s the build. Total spent: about $3000, which sounds like an unreasonable amount, but it’s actually less than a comparable Synology DiskStation DS1823xs+ which has 4 cores instead of 6, first-generation AMD Zen instead of Zen 3, 8 GiB RAM instead of 32 GiB, no hardware-accelerated video transcoding, etc. And it would have been a whole lot less fun!



(Reposted from my blog and PCPartPicker)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Don't forget to enable snapshots, at least weekly with a retention of a month or 2, for all your datasets. This would be your ransomware mitigation, which allows rolling back to un-encrypted files when ransomware attacks. Ransomware can still attack a TrueNAS server, through the client's R/W access via SMB. So if your desktop gets infected, it may infect your Samba shares and its files.
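
For the curious, here's a rough sketch of what that policy looks like at the raw zfs level. Illustrative only: TrueNAS SCALE has Periodic Snapshot Tasks built in for exactly this, and the pool name "tank" is just a placeholder.

```python
#!/usr/bin/env python3
# Illustrative "weekly snapshot, keep ~2 months" policy via the zfs CLI.
# TrueNAS does this natively through Periodic Snapshot Tasks; the pool
# name "tank" below is just a placeholder.
import subprocess
from datetime import datetime, timedelta

POOL, PREFIX, RETENTION = "tank", "auto-weekly-", timedelta(days=60)

# Take a recursive snapshot of every dataset in the pool
stamp = datetime.now().strftime("%Y%m%d")
subprocess.run(["zfs", "snapshot", "-r", f"{POOL}@{PREFIX}{stamp}"], check=True)

# Prune snapshots created by this script that are past the retention window
snaps = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", POOL],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for name in snaps:
    snap = name.partition("@")[2]
    if snap.startswith(PREFIX):
        taken = datetime.strptime(snap[len(PREFIX):], "%Y%m%d")
        if datetime.now() - taken > RETENTION:
            subprocess.run(["zfs", "destroy", name], check=True)
```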

Of course, you may know this already, but as some say, if it's not said, it did not happen.

Good luck.
 

directhex

Dabbler
Joined
Aug 29, 2023
Messages
15
Don't forget to enable snapshots, at least weekly with a retention of a month or 2, for all your datasets. This would be your ransomware mitigation, which allows rolling back to un-encrypted files when ransomware attacks. Ransomware can still attack a TrueNAS server, through the client's R/W access via SMB. So if your desktop gets infected, it may infect your Samba shares and its files.

Of course, you may know this already, but as some say, if it's not said, it did not happen.

Good luck.

Forgot about setting up snapshots. Can't hurt, it's not like it takes up any extra space as long as things aren't going horribly wrong.
 

directhex

Dabbler
Joined
Aug 29, 2023
Messages
15
Bold of you to have 8 drives in Z1.
I thought about the pros and cons, and decided the likelihood of mid-rebuild drive failure with SSD is pretty much entirely unrelated to the risk of mid-rebuild failure with spinning rust. 8 drives in Z1 (i.e. 5) would be entirely doomed with spinning rust.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Guess I should add a disaster recovery drive. Usb attached spinning rust, or something
Here is something that may be useful: using a locally attached disk for backups, even USB.

This warns against USB attached data drives. BUT, I have found temporarily connected USB drives for backup purposes work reasonably well. (If they have enough cooling... cheap USB enclosures are, well, designed cheaply. And tend to over-heat.)
 

Patrick_3000

Contributor
Joined
Apr 28, 2021
Messages
167
I have two SCALE builds, a primary and a backup, each with a similar board to yours: the ASRock Rack X570D4U-2L2T. It's similar to the X570D4I-2T but in micro-ATX rather than Mini-ITX format. I built them around a year ago, and the board has worked for me. One of mine has the Ryzen 7 Pro 5750G installed, and the other has the 5650G. The only reason to go for the 5750G over the less expensive 5650G in SCALE that I can see is if you'll be running VMs and want the extra cores and threads to allocate, and it doesn't sound like that's your situation. Realistically, they're very similar CPUs.

I agree: IPMI and ECC support are nice to find in this price range.

To the best of my knowledge, it's not just Cezanne but all Ryzens with integrated graphics prior to the latest generation that have that bifurcation limitation, which I only learned after I built mine. The most they can bifurcate is 8x4x4. It's an unfortunate quirk, but if you need a 4th SSD on a card, then I believe you can work around it by ordering a card with a bridge chip for just over $100 from some international suppliers.

I agree with what others have said: set up snapshots, and the more the merrier. They're rarely needed, but when they are, they come in handy.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Congratulations on the great write up!

One sad discovery, however, which I’ve never seen documented before, on PCIe bifurcation.

With PCI Express, you have a number of “lanes” which are allocated in groups by the motherboard and CPU manufacturer. For Ryzen prior to Ryzen 7000, that’s 16 lanes in one slot for the graphics card; 4 lanes in one M.2 connector for an SSD; then 4 lanes connecting the CPU to the chipset, which can offer whatever it likes for peripherals or extra lanes (bottlenecked by that shared 4x link to the CPU, if it comes down to it).

It’s possible, with motherboard and CPU support, to split PCIe groups up – for example an 8x slot could be split into two 4x slots (eg allowing two NVMe drives in an adapter card – NVME drives these days all use 4x). However with a “Cezanne” Ryzen with integrated graphics, the 16x graphics card slot cannot be split into four 4x slots (ie used for for NVMe drives) – the most bifurcation it allows is 8x4x4x, which is useless in a NAS.
While it's nice that AMD has kept the AM4 socket for so long, the many accumulated generations make for a huge hot mess of capabilities which is not easy to sort out. It doesn't help that the numbering of CPUs does NOT match the underlying "painter" (architecture).

With that said, you're excessively harsh: x8x4x4 is not "useless", it lets you use three x4 drives rather than four drives from x4x4x4x4 bifurcation. That would be five U.2 drives in your NAS instead of the planned six.
Or you may forget about bifurcation and throw an extra $200-$300 at the problem with a PLX switch card, possibly even driving eight drives from a x16 slot.

Alternatively, Supermicro X10SDV rev. 2 boards in Mini-ITX size make for nice low-power NAS builds with IPMI, and their x16 slot can bifurcate all the way down to x4x4x4x4.

That said, on to picking a case. There’s only one brand with any cachet making ITX NAS cases, Silverstone. They have three choices in an appropriate size: CS01-HS, CS280, and DS380. The problem is, these cases are all badly designed garbage.
I cannot concur more about the design… :grin:

And… that’s it! I built a NAS. I intend to add some fans and more RAM, but that’s the build. Total spent: about $3000, which sounds like an unreasonable amount, but it’s actually less than a comparable Synology DiskStation DS1823xs+ which has 4 cores instead of 6, first-generation AMD Zen instead of Zen 3, 8 GiB RAM instead of 32 GiB, no hardware-accelerated video transcoding, etc. And it would have been a whole lot less fun!
:cool:

Bold of you to have 8 drives in Z1.
Not as much as it would be with 4 TB HDDs… SSDs typically have a URE rate of 1e-17 rather than 1e-14 or 1e-15, so it will take petabyte-size drives for the classical argument "RAID5 is dead" to play out with SSDs. The risk with SSDs in raidz1 is mostly that of a second drive failing during the (much faster and shorter) resilver – but, as SSDs reportedly tend to fail completely with little or no warning, that risk may be more significant with SSDs than with HDDs.
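
To put rough numbers on that classical argument (a worked example assuming a full 8 x 3.84 TB raidz1, where a resilver reads the 7 surviving drives end to end and bit errors are independent):

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# resilvering an 8 x 3.84 TB raidz1: ~7 full drives must be read back.
# Poisson approximation, assuming a full pool and independent bit errors.

bits_read = 7 * 3.84e12 * 8  # ~2.15e14 bits

for ure, kind in [(1e-14, "consumer HDD"), (1e-15, "enterprise HDD"), (1e-17, "SSD")]:
    p = -math.expm1(-ure * bits_read)  # 1 - exp(-expected errors)
    print(f"URE {ure:g} ({kind}): ~{p:.1%} chance of an error during resilver")
# roughly 88%, 19% and 0.2% respectively
```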
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Really appreciate this post. I speculated about this being possible a long, long, long time ago, and recall finding out there wasn't much in the way of SSD optimisation back then. I hope it's better now.

I also recall articles from more than 15 years ago predicting wildly huge SSDs by 2014, and we sure didn't see that.

I've recently upgraded mine to 60TB and, based on my usage, I can't see myself needing to upgrade for at least 8 years; hopefully by then I can at least get 60TB of SSDs for a reasonable price.
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228

I remember the Ryzen 5000 G series only having PCIe 3.0 support. Is that the same for the Pro models, or do they have 4.0? I also struggled with choosing the right processor and board for bifurcation etc.
 

asap2go

Patron
Joined
Jun 11, 2023
Messages
228
Here is something that may be useful: using a locally attached disk for backups, even USB.

This warns against USB attached data drives. BUT, I have found temporarily connected USB drives for backup purposes work reasonably well. (If they have enough cooling... cheap USB enclosures are, well, designed cheaply. And tend to over-heat.)
I also do monthly backups via an external USB drive. It gets up to 56°C, but it works, and SMART data says it's still in good condition.
 

directhex

Dabbler
Joined
Aug 29, 2023
Messages
15
I remember the Ryzen 5000 G series only having PCIe 3.0 support. Is that the same for the Pro models, or do they have 4.0? I also struggled with choosing the right processor and board for bifurcation etc.
You're right, the APUs are a PCIe generation behind. So 24 lanes @ 3.0 (4 of them for the chipset interconnect).
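
For context, the standard per-lane throughput figures show what that one-generation gap costs on the x4 chipset link everything else shares:

```python
# Usable bandwidth of the x4 CPU-to-chipset link, using the usual per-lane
# figures after encoding overhead: ~985 MB/s (PCIe 3.0) vs ~1969 MB/s (4.0).

per_lane_mb = {"PCIe 3.0": 985, "PCIe 4.0": 1969}
lanes = 4

for gen, rate in per_lane_mb.items():
    print(f"{gen} x{lanes}: ~{rate * lanes / 1000:.1f} GB/s per direction")
# ~3.9 GB/s vs ~7.9 GB/s; everything behind the X570 chipset shares this link
```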
 