New FreeNAS build...going Ryzen

bsodmike

Dabbler
Joined
Sep 5, 2017
Messages
22
Hi all,

I thought I'd share my upcoming build with the forum here, based on a past thread of mine. I've gone Ryzen as I was able to confirm ECC compatibility via a couple of Reddit threads, as well as via the Crucial site for the chosen WS-grade X570 mainboard from Asus.

Ideally, I would have preferred an X99/Xeon build from Intel, but it doesn't make sense to buy such old parts for a new build, and Xeon Scalable is really cost-prohibitive.

I may also change the head unit to a model from Supermicro, which I would have to source from a different supplier in Asia. The Supermicro unit is a far higher-quality product, and its backplane is confirmed as SAS3/12Gb/s, whereas with Norco it's hard to tell.

Build
  • Ryzen 9 3900X
  • Asus Pro WS X570-Ace mainboard
  • 2x Crucial 16GB ECC UDIMM CT16G4WFD8266 (on Crucial's QVL)
  • Noctua NH-U9S CPU cooler (Thanks to @noenken)
  • GPU: Using a spare Nvidia card.
Norco
  • 1x Product Model: IPC-4424 (4U Server Case with 24 Hot-Swappable SAS/SATA Drive Bay, MINI-SAS backplane) @ SGD 598
  • 1x Short Depth Sliding Rails (Sliding Rail for 1U to 4U rackmount server case) @ SGD 66
LSI HBA card + 10G NIC
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
Hi @bsodmike. Welcome back to the forums. Looks like you put a lot of research into your build. From looking at this post and the one you linked, I can't quite figure out your use case. It looks like you're looking to buy some extremely powerful gear in some places (the PCIe 4.0 slots, the 10G NIC, and the SAS3 HBA), but some of the performance specs don't quite line up. I elaborate further below, but the sense I get is that you may want to take a step back and think about what you want out of your hardware. How long do you want it to last? What do you want it to do? What is your budget? What kind of drives will you be using? How many concurrent users? And so on.

  • Ryzen 9 3900X
This seems like quite the expensive CPU. What do you plan to use your NAS for?

  • Asus Pro WS X570-Ace mainboard
My guess is that you're paying too much for this board. It is quite expensive and has features you very likely don't need, such as audio support and PCIe 4.0. Depending on your desired use cases, a used or more focused board will likely serve your needs perfectly well for years to come.

I've gone Ryzen as I was able to confirm ECC compatibility via a couple Reddit threads, as well as the Crucial site for the chosen WS grade X570 mainboard from Asus.
I'm not able to confirm this; where are you seeing it? Sometimes you'll see products that can accept ECC memory but don't actually use the ECC functionality.

:blink: that card is over $500. Whether you need it or not will depend on the drives you're using and the backplane. If you're not using a SAS3 backplane there isn't any point in getting a SAS3 card. Furthermore, if you're using spinning disks you might not need SAS3 either. Consider that 24 spinning disks sharing even 8 lanes of SAS2 leaves a theoretical bandwidth per disk of 250MB/s. That would basically be every single drive running at max. Of course, the math here is rough and doesn't take encoding overhead etc. into account, but you get the point.
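If it helps, here is that back-of-the-envelope arithmetic as a quick Python sketch (purely illustrative; it ignores encoding and protocol overhead):

```python
# Back-of-the-envelope per-drive bandwidth (decimal units; encoding and
# protocol overhead ignored, as noted above).
def per_drive_mb_s(lanes, gbit_per_lane, drives):
    total_gbit = lanes * gbit_per_lane   # aggregate link speed, Gb/s
    total_gbyte = total_gbit / 8         # bits -> bytes
    return total_gbyte * 1000 / drives   # MB/s available per drive

print(per_drive_mb_s(lanes=8, gbit_per_lane=6, drives=24))    # 8 lanes of SAS2 shared by 24 disks -> 250.0
print(per_drive_mb_s(lanes=24, gbit_per_lane=12, drives=24))  # for comparison, a SAS3 lane per disk -> 1500.0
```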

If you need those 4 ports you could pick up this LSI-9207-8i with this expander for ~$150. They are both SAS2, give you approximately the same 250MB/s per disk as above, and have 4 ports to plug into your backplane. This may be a much more cost-effective approach for you, which would free up funds for other parts of your build or other hobbies.

Also, minor nit: that card is 12Gb/s, not 12GB/s. I realize that the site lists it as 12GB/s as well. I'm always annoyed and a little distrustful of sites that get that detail wrong. :)

Back to that motherboard as well: even the super-fast card you're looking at isn't using PCIe 4.0.

  • 1x Product Model: IPC-4424 (4U Server Case with 24 Hot-Swappable SAS/SATA Drive Bay, MINI-SAS backplane) @ SGD 598
If I'm reading the correct product page, which I found here, that chassis uses a SAS2 backplane, as evidenced by the 6Gb/s rating. I don't know anything about this company, though I'm not super impressed by the lack of specificity in their product descriptions.

You may also consider the Chelsio cards; they have excellent support in FreeNAS.

it also doesn't make sense buying such old parts for a new build, and Xeon Scalable is really cost prohibitive.
What do you plan to do with your build? I would suggest that rather than worry about the age of the hardware, you worry about its capability. Are you looking for just a file server? Or perhaps you want to stream multiple 4K streams via Plex? What is your budget? What you want to do will really inform what kind of board and CPU you'll need.
 

bsodmike

Dabbler
Joined
Sep 5, 2017
Messages
22
Hi Philio, thanks for your amazingly thoughtful response. I am dealing with a couple of constraints that someone living in the US wouldn't have to worry about -- especially easy access to components.

Some background first: my first venture into FreeNAS was back in 2017. It started off inside a Corsair case with some 3D-printed HDD brackets, and I managed to hold 8x WD Red drives together.

This build is basically -
  • Asus WS X99 mainboard
  • Xeon E5-2620 v4
  • 32GB of ECC Crucial RAM (verified that it is actually working as ECC)
  • Dedicated Nvidia GPU (just for monitor out)
  • 2x 9211-8i LSI Cards
  • 1x Intel X550-T2 NIC.
Going through the flames of Mordor... I mean, flashing the firmware to IT mode, led me to set up a repo to help others:
https://github.com/bsodmike/s5clouds8-lsi9211-8i-IR-to-IT-EFI-bootable-usb

...and this setup has been great!

homelab_mike.jpg


I digress; back to those constraints -- since 2018 I pretty much order everything off Amazon US, because items are shipped to Sri Lanka via DHL (as Duty Paid) and typically arrive at the door without much hassle.

There's not much scope for working with local retailers on such specific parts, and even if they could help, their margins plus taxes would amount to about the same.

Fast forward to October 2019

Unfortunately, I can't source anything X99 on Amazon, and after much hair-pulling I've looked at everything from X299 to Threadripper (X399) to even Xeon Scalable.

As per the guru @wendell at Level1Techs, right now there's a Threadripper incompatibility with FreeNAS (although with time this may be resolved). Their recommendation was to go for something Ryzen + Norco (since I'm already using a Norco chassis).

I'm actually shocked and dismayed that there is no ECC-capable offering on the (current) market with 7x PCIe 3.0 slots available. WHAT!!

Now to respond to some of your comments,

PhiloEpisteme said:
This seems like quite the expensive CPU. What do you plan to use your NAS for?

- When buying CPUs I try to 'future-proof' them by getting something decent, so as to allow me to throw an entirely different workload at it if I end up repurposing it for a different use case.
- Running VMs in Bhyve, if I can get it to work again; Bhyve stopped running VMs that previously worked after a FreeNAS upgrade, and I haven't had time to figure it out.
- I reluctantly settled on X570 due to its ECC support.
- However, this means I'm stuck with only 3x PCI-e slots; don't really care for PCIe4.0 though!

Slot 1: GPU
Slot 2: 10G NIC
Slot 3: LSI 9305-24i, simply because it has 6x SFF-8643 Mini-SAS HD connectors. I'd have to run 6x SFF-8643-to-SFF-8087 cables, converting back to SAS2 on the Norco.

- However, I have contacted Supermicro about a confirmed SAS3-capable chassis, this one: https://www.supermicro.com/en/products/chassis/4U/846/SC846BE2C-R1K23B

Will update here once I hear back re. cost and availability. I'll be in Singapore by the end of the month and plan to chuck it into the plane, like I did the Norco.

PhiloEpisteme said:
I'm not able to confirm this; where are you seeing it? Sometimes you'll see products which can use ECC memory but do not use the ECC functionality.

The Crucial site confirms compatibility, as does this Reddit thread: https://old.reddit.com/r/ASUS/comments/cw74rl/asus_pro_ws_x570ace_ecc_compability/

The AnandTech review as well: https://www.anandtech.com/show/14161/the-amd-x570-motherboard-overview/19

PhiloEpisteme said:
Consider that 24 spinning disks sharing even 8 lanes of SAS2 leaves a theoretical bandwidth per disk of 250MB/s

You arrived at that via 6Gb/s × 8 lanes = 48Gb/s = 6GB/s; 6GB/s / 24 drives = 250MB/s, right?

With the 9305-24i (at $500+, yikes!): 12Gb/s × 24 lanes = 288Gb/s = 36GB/s; 36GB/s / 24 drives = 1.5GB/s per HDD!! Yup, that's overkill for sure.

However, isn't it the only way to get 6x ports (of anything!) from a single PCIe slot? I suppose the solution is to go for a better head unit that's SAS3 compatible.

Thoughts?

PhiloEpisteme said:
If I'm reading the correct product page, which I found here, that chassis uses a SAS2 backplane, as evidenced by the 6Gb/s rating. I don't know anything about this company, though I'm not super impressed by the lack of specificity in their product descriptions.

Yes, I too don't trust Norco (Singapore). They've emailed me saying it's SAS3, but that would mean a 12Gb/s rating; plus, it wouldn't have 6x SFF-8087 connectors, right? It would have Mini-SAS HD SFF-8643 connectors on the backplane?

PhiloEpisteme said:
If you need those 4 ports you could pick up this LSI-9207-8i with this expander for ~$150 bucks. They are both SAS2 and give you the total bandwidth of approximately the 250MB/s per disk as above and have 4 ports to plug into your backplane. This may be a much more cost effective approach for you which would free up funds for other parts of your build or other hobbies.

The crappy Norco needs 6x SFF-8087 connectors, 1x per 4 HDDs. With the LSI 9211-8i, wouldn't this mean 6GB/s / 4 HDDs = 1.5GB/s per HDD? Again overkill, hmm.

As to your suggestion, eBay hasn't worked all that well for me, given the terrible postal system we have; and again, X570 means I'm limited to 1x PCIe slot for the HBA.
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
Hi Philio, thanks for your amazingly thoughtful response. I am dealing with a couple of constraints that someone living in the US wouldn't have to worry about -- especially easy access to components.
100% understand this constraint. Definitely take my advice with a grain of salt.

Unfortunately, I can't source anything X99 on Amazon, and after much hair-pulling I've looked at everything from X299 to Threadripper (X399) to even Xeon Scalable.
What about used hardware? I know a lot of people shy away from it, but it can really shine in these kinds of scenarios. If you're just using this as a backup fileserver for home use, I bet a used X10 Supermicro board will do great and possibly come in a bit cheaper?

When buying CPUs I try to 'future-proof' them by getting something decent, so as to allow me to throw an entirely different workload at it if I end up repurposing it for a different use case.
I'm 100% with you here. What I've done as far as future proofing though is to assume that used server-grade cpus will be quite cheap in the future and that I can swap them out. Consider that if I buy a $250 used cpu today it will likely do everything I need and then some for years to come. And when it doesn't, 5 years down the road, I can pick up another at $250 and have all the performance I need. On the plus side, if I don't need to upgrade, it won't be wasted money.

- However, this means I'm stuck with only 3x PCI-e slots; don't really care for PCIe4.0 though!
Ah, this makes perfect sense. Yeah, those AMD chips support a TON of PCIe lanes.

Slot 1: GPU
Slot 2: 10G NIC
Slot 3: LSI 9305-24i, simply because it has 6x SFF-8643 Mini-SAS HD connectors. I'd have to run 6x SFF-8643-to-SFF-8087 cables, converting back to SAS2 on the Norco.
Ah, this makes sense. So you're looking for a board with at least 1x PCIe 3.0 x16 and 2x PCIe 3.0 x8? I assume you're hoping to pass through that GPU or something? Or perhaps hoping for future support for using that card for transcoding?

You arrived at that via 6Gb/s × 8 lanes = 48Gb/s = 6GB/s; 6GB/s / 24 drives = 250MB/s, right?

With the 9305-24i (at $500+, yikes!): 12Gb/s × 24 lanes = 288Gb/s = 36GB/s; 36GB/s / 24 drives = 1.5GB/s per HDD!! Yup, that's overkill for sure.

However, isn't it the only way to get 6x ports (of anything!) from a single PCIe slot? I suppose the solution is to go for a better head unit that's SAS3 compatible.

Thoughts?
Ah, you're right, I ignored the need for 6 ports. :) Maybe you can get a Supermicro chassis with a SAS2 expander backplane used off the web somewhere? If you really need those 6 ports, though, you can do the following:
1x LSI 9211-8i with this expander gives you 4 ports running off PCIe 2.0 x8 (~4GB/s), providing bandwidth for 16 drives at 250MB/s per drive; then add another LSI 9211-8i for the last two plugs.

If you do this you'd need 1x PCIe 3.0 x16 + 4x PCIe 3.0 x8 (though the expander really only uses its slot for power). There are plenty of X10 boards with that kind of support. Even going this route, the cards I mentioned will very likely come in cheaper used, all together, than the single card you mentioned.

Also, keep in mind your total theoretical bandwidth off the box. If you managed to saturate two 10G links, you're still at 20Gb/s. Consider that 8 lanes of SAS2 can push 48Gb/s, and you see that the network will likely be your bottleneck for most operations.

An expander backplane with fewer ports may be a good solution for you. This way you could feed your entire backplane from one PCIe slot and reduce how many slots you need.

Yes, I too don't trust Norco (Singapore). They've emailed me saying it's SAS3, but that would mean a 12Gb/s rating; plus, it wouldn't have 6x SFF-8087 connectors, right? It would have Mini-SAS HD SFF-8643 connectors on the backplane?
I think the 12Gb/s claim is the vendor being very bad about their specs. As for the connector, I'd double-check with them.

Bottom line though; I think the thing to figure out is whether you can save money long-term by going with SAS2 (and still have plenty of bandwidth) and used hardware that can be upgraded if you need.

Yes, that is a goal I have as well, and it'll do transcoding as well.
Check out Plex's Passmark recommendations for transcoding 4K. That will give you an idea of what kind of CPU you need. Though, 4K transcoding is quite intensive, and you may be better off trying to keep your media in a format that precludes the need to transcode.

Anyway, thanks for replying; I look forward to hearing your thoughts. I hope my questions don't come across the wrong way. I only mean them for clarification so I can better understand your use case and requirements. Also, I totally get that your parts availability is different than mine. I kept posting eBay links here because I know nothing about what is available to you; I mostly intended them as a demonstration of the part. Though, the seller I linked is a great seller; I've bought several items off him, have communicated via message, and he is extremely responsive. If you decide to buy from him he may consider shipping internationally via whatever method you trust the most.
 

bsodmike

Dabbler
Joined
Sep 5, 2017
Messages
22
Yeah, those AMD chips support a TON of PCIe lanes

It's a shame that mainboard manufacturers only care about
  • Gaming branding
  • RGB madness
  • M.2 everywhere.
Honestly, all audio circuitry should be removed from Mainboards, as anyone interested in decent audio would use an external DAC in any case. Sadly, they'll never listen to me so... :)

What about used hardware? I know a lot of people shy away from it

Well, I try to avoid used parts due to degraded lifespan by way of leaky capacitors and metal migration in transistors and PCB traces. Also, I wouldn't want a dead board killing the rest of my equipment either.

Thanks for your recommended source; without that I really wouldn't trust second-hand parts (in general).

So you're looking for a board with at least 1x PCIe 3.0 x16 and 2x PCIe 3.0 x8? I assume you're hoping to pass through that GPU or something? Or perhaps hoping for future support for using that card for transcoding?

Well, 4x PCIe 3.0 slots with a minimum of 1-2x as PCIe 3.0 x8 would have been nice. The GPU is there mainly because higher-end CPUs lack integrated graphics (APUs). So usually it'll be a really cheap card just for HDMI out.

Anyway, thanks for replying; I look forward to hearing your thoughts. I hope my questions don't come across the wrong way. I only mean them for clarification so I can better understand your use-case and requirements.

Not at all -- I'm glad you asked the hard questions, so thanks!!
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
It's a shame that mainboard manufacturers only care about
  • Gaming branding
  • RGB madness
  • M.2 everywhere.
Honestly, all audio circuitry should be removed from Mainboards, as anyone interested in decent audio would use an external DAC in any case. Sadly, they'll never listen to me so...
True -- though once you look at server boards, the offering is much nicer. I went with a new server board for my main server and a used one for my backup for this reason exactly. :)

Well, I try to avoid used parts due to degraded lifespan by way of leaky capacitors and metal migration in transistors and PCB traces. Also, I wouldn't want a dead board killing the rest of my equipment either.
Yeah, I get that. I would say that the mainboard is not likely to trash your pool though; ZFS does a great job of keeping your data safe. :)

Well, 4x PCIe 3.0 slots with a minimum of 1-2x as PCIe 3.0 x8 would have been nice. The GPU is there mainly because higher-end CPUs lack integrated graphics (APUs). So usually it'll be a really cheap card just for HDMI out.
I assume the HDMI out is for a VM or something? I went with boards that have basic built-in graphics for BIOS and console access; the rest of FreeNAS I manage via the GUI. For example, a lot of Supermicro boards have a Matrox G200, which gives you one VGA out.

Anyway, looking forward to hearing the next iteration in your build.
 

bsodmike

Dabbler
Joined
Sep 5, 2017
Messages
22
Rather shocking response. Earlier he claimed this was a SAS3 12Gb/s backplane, yeesh!

Later on I realised that I should have mentioned that the interface and the protocol are two different things, although the interface does need to electrically handle the higher bandwidth, frequencies, and currents.

However, reading up, I find that SFF-8087 is indeed 12Gb/s-compatible, and Mini-SAS HD is even 14Gb/s-compatible as an interface.

With such rudeness, I'm now trying to reach Supermicro for help.

Screenshot 2019-10-16 at 13.09.08.png
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
Yikes. What terrible customer service. I've had good luck with the CSE-800 series chassis from Supermicro. The only annoying thing is that sometimes it is hard to tell the exact model number of the chassis, and some vendors are slow or reluctant to post the exact backplane model number; typically, in my experience, this has been folks who are selling SAS1 backplanes. I got excited about a deal one time and bought a SAS1 backplane thinking it was SAS2; now I'm going to have to replace that backplane eventually. Oops!
 

bsodmike

Dabbler
Joined
Sep 5, 2017
Messages
22
@PhiloEpisteme if I were to ditch the Ryzen route and go Supermicro/Xeon, what mainboard would you recommend to pair with the following?
This card would also suit the job, as the backplane has only 2x connectors. Do check my numbers below, please (quick sanity-check sketch after the list):
  • 1x LSI 9305-16i (PCIe 3.0 x8) with 2x SFF-8643 into the backplane
    • 2 ports × 4 lanes × 12Gb/s = 96Gb/s = 12GB/s into the backplane; 12GB/s / 24 = ~500MB/s per HDD (theoretical max)
    • PCIe 3.0 x8 (~7.88GB/s) is the real ceiling, hence ~7.9/24 = ~330MB/s per HDD.
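A quick sanity check of those two ceilings in Python (illustrative only; the SAS figure assumes just 8 of the card's 16 lanes actually reach the expander backplane via the 2x SFF-8643 ports):

```python
# Rough sanity check of the two ceilings above (decimal units, overhead ignored).
DRIVES = 24

# SAS side: 2x SFF-8643 ports = 8 lanes of SAS3 at 12Gb/s each into the backplane.
sas_gbyte_s = 8 * 12 / 8              # 96Gb/s -> 12GB/s
print(sas_gbyte_s * 1000 / DRIVES)    # -> 500.0 MB/s per HDD

# Host side: the card sits in a PCIe 3.0 x8 slot, roughly 7.88GB/s usable.
pcie_gbyte_s = 7.88
print(pcie_gbyte_s * 1000 / DRIVES)   # -> ~328 MB/s per HDD, so the slot is the limit
```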
Thoughts?
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
I use that chassis with no problems, but I've got an X11SSM-F in there.

I do recommend you buy the chassis used. You can get used chassis that have fans, backplanes, dual PSUs, etc. for MUCH less than new. I would make sure you have at least a SAS2 backplane. You can go SAS3 if you want with the SAS3846-EL or SAS3846-EL2, but the extra speed will likely only be seen by on-system processes between pools etc. Also, FWIW, the EL vs EL2 difference is whether the board supports multi-path and failover: the EL2 does, the EL does not. Both support cascading. As far as I know, even the EL2 only uses 4 lanes for data transmission; double-check with Supermicro though.

1x LSI 9305-16i (PCIe 3.0 x8) with 2x SFF-8643 into the backplane
If you get an expander backplane like the one you're talking about, I'd get an 8i.

  • 2 ports × 4 lanes × 12Gb/s = 96Gb/s = 12GB/s into the backplane; 12GB/s / 24 = ~500MB/s per HDD (theoretical max)
  • PCIe 3.0 x8 (~7.88GB/s) is the real ceiling, hence ~7.9/24 = ~330MB/s per HDD.
When you're thinking about bandwidth it is useful to think about your use cases. In general, data is going to be moving in two ways: either within your system or over your network. This being a NAS, much of the workload will be moving data over the network. If you look through your specs and try to identify the bottleneck in that setup, you're going to find that it's the NIC.

If you managed to saturate two 10G connections you're pushing 20Gb/s. That is a LOT of data. So, how do you do that?

Well, first you make sure that you don't have any PCIe bottlenecks getting data off the link and into your machine. Any PCIe3.0 x8 or x4 slot will have way more than enough bandwidth there.

Alright, so how about getting data to your backplane and the drives? Again, the HBA in a PCIe 3.0 x8 or x4 slot is way more than enough, so those SAS lanes are next. 4 lanes of SAS2 is 24Gb/s, which should be enough. You wouldn't be blamed for going SAS3 here to make sure you have plenty of headroom, but I would suggest this only after you consider getting more RAM (more on this later).

Also note that when we calculate bandwidth per drive, it assumes a worst-case scenario where every single drive is working as hard as it possibly can.
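Put another way, throughput is capped by the weakest link in the chain; here's a tiny Python sketch of that comparison (rough ceilings in Gb/s, overhead ignored):

```python
# Compare rough ceilings for each link in the chain (Gb/s, overhead ignored).
links_gbit_s = {
    "2x 10G network": 2 * 10,               # 20 Gb/s off the box
    "PCIe 3.0 x8 HBA slot": 63,             # ~7.88 GB/s ~= 63 Gb/s
    "4 lanes of SAS2 to backplane": 4 * 6,  # 24 Gb/s
}
for name, gbit in links_gbit_s.items():
    print(f"{name}: {gbit} Gb/s")
print("Bottleneck:", min(links_gbit_s, key=links_gbit_s.get))  # -> the network, at 20 Gb/s
```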

Now to the meat of my suggestion.

You have a powerful build up there, and you seem to have a healthy budget to get a very versatile machine for your home use. What I think is likely, though, is that you won't experience a lot of that power because of low RAM and the lack of a SLOG device. To get a sense of what I'm saying, it is worth thinking about how ZFS works.

For reading data ZFS uses the ARC, a cache that is designed to hold as much data as possible such that on a request for data the machine doesn't have to go to your spinning disks to get it. To make this as fast as possible ZFS puts the ARC in the fastest storage it can find: your RAM. Your use case will determine how big of an ARC you need, but if you have a huge beast of a machine and not enough RAM you'll get terrible performance. 32GB is a great start, and it may be all you need. But I would suggest that you keep some funds available to bump that to 64GB or 96GB by adding more 32GB modules if you're getting a low ARC hit ratio. This is something you can determine after your build is up and running, so no need to panic and go out and buy it now; but you may want to make sure you have the funds available.
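Once the box is up, you can check the ARC hit ratio yourself; here's a rough sketch, assuming the standard FreeBSD kstat.zfs.misc.arcstats sysctls are available on your FreeNAS install:

```python
# Rough ARC hit-ratio check (assumes FreeBSD's kstat.zfs.misc.arcstats sysctls).
import subprocess

def sysctl(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

hits = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
print(f"ARC hit ratio: {hits / (hits + misses) * 100:.1f}% (hits={hits}, misses={misses})")
# A persistently low ratio under your real workload is the hint that more RAM would help.
```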

For sync writes (not async writes) ZFS has an additional area where hardware can make a huge difference. For a sync write, ZFS gets data into RAM from the client and then has to write it to permanent storage before reporting back that the data has been received. ZFS first pushes this to the ZIL as part of a transaction group and then gets more data from the client. After a short time the data is written out to the pool as well. The standard location for the ZIL is on your pool itself. This works fine for async writes, but for sync writes it means that your spinning disks are working twice as hard, which can be a significant slowdown. For this reason, if you have a heavy sync-write workflow you'll want to pick up a SLOG device, which is a dedicated ZIL device for the pool.
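To make the sync/async distinction concrete, here's a minimal local illustration in Python (just the write semantics; over NFS or iSCSI it's the equivalent fsync-style guarantee that forces ZFS to commit to the ZIL before acknowledging, which is where a SLOG pays off):

```python
import os

# Async-style write: write() returns once the data is handed to the OS/ZFS;
# it may still sit only in RAM for a few seconds before hitting the pool.
with open("/tmp/async_example", "wb") as f:
    f.write(b"x" * 4096)

# Sync-style write: fsync() blocks until the data is on stable storage.
# On ZFS this is the path that goes through the ZIL, so it's where a
# dedicated SLOG device makes the difference.
with open("/tmp/sync_example", "wb") as f:
    f.write(b"x" * 4096)
    f.flush()
    os.fsync(f.fileno())   # don't return until the blocks are durable
```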

Both a SLOG device and more memory can be added later, but you probably want to make sure you have the funds ready if you need them. If necessary, scale back some of the other parts to make space in the budget, because a too-small ARC or the lack of a SLOG in a heavy sync-write workflow will kill your performance anyway.
 

bsodmike

Dabbler
Joined
Sep 5, 2017
Messages
22
Hi @PhiloEpisteme --

Ran into some unexpected trouble, but thankfully I had burnt in my 8x 10TB drives. My current pool of 8x 4TB drives first experienced a 2-drive failure (RAIDZ2), and whilst I was resilvering one of the 10TB drives in, all the remaining drives were reported as degraded and one as faulted.

Suffice it to say the pool was dead, so I've created a new pool using the new 8x 10TB drives and started `rsync`ing my data over from a Synology box that has been standing in as my 'break-glass' backup.

1. I can't seem to get rsync over SSH to copy binary files faster than 5MB/s, and I've already tried the tips here: https://gist.github.com/KartikTalwar/4393116

2. Is there a Supermicro chassis you can recommend that would allow me to set up a basic "PC" to run my VMs? I've already got the hardware running in a desktop case; I just want to relocate it into the rack.

So basically, it needs to be 4U and either support an ATX PC-style PSU or, preferably, have 2x server-style hot-swap PSUs. In terms of storage it doesn't need many bays at all; maybe space for a couple of 3.5" drives/SSDs is plenty. More importantly though, support for ATX/E-ATX mainboards would be great, plus 5+ full-height PCI slots etc.

Thanks again!
 