NVMe, chipsets, and Ryzen

Status
Not open for further replies.

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
I've been following FreeNAS for some time, but have been keeping my data on my workstation (Sandy Bridge, built back in 2010). I'm ready to make the jump to consolidating my data on a NAS: sadly, my SB workstation is on its last legs, and I'll be building a new workstation shortly, with the aim of keeping that box as small as possible and storing the data on a NAS.

I'm an IT consultant by trade, so I'm comfortable around hardware and software, though my knowledge of FreeNAS certainly isn't at the level of you all.

Here's a preliminary build I put together:
https://pcpartpicker.com/list/mLDWGG

I have a few questions, but first I'll outline my goals:
  1. Keep costs as low as possible
  2. Transcoding (e.g. PLEX) needs will be none/minimum
  3. Little/no need to host jails (I have pfSense, rack with a dedicated virtualization server)
  4. 8 bays, preferably hot-swap though I can live without it
  5. NVMe M.2 or U.2 support would be really nice
  6. 64GB-128GB ECC memory support, though I will be starting with 32GB
  7. Ability to run Optane later is a big plus
Since the WD Easystore 8TB drives go on sale often, I'm planning to use shucked 8TB 256MB-cache WD Red drives. Some might turn out to be "white label" Reds, which have the 3.3V power-disable pin feature. If the PSU or backplane doesn't support it, I can easily put Kapton tape over the offending SATA power pins to disable the feature. I'll probably start with the 4x shucked Reds I have on hand and expand later.

I really want to keep the FreeNAS box as small as possible, and have been infatuated with the U-NAS NSC-800 for a long while and would like to use that. I'll also likely be booting from USB flash drives.

Now my questions:
  1. Is the only viable option Intel, on the C236/C232 chipset?
  2. I noticed that even up to Coffee Lake, ARK says the CPUs support ECC, but I can't find any motherboards that do. It needs to be supported by a corresponding chipset too, right?
  3. QNAP and Synology have i3/i5/i7 NASes, but those seem to be using consumer chipsets, so ECC looks out of the question for those.
  4. Ryzen support still seems flaky. Is anyone running Ryzen with ECC without major issues?
  5. If I want to use U.2 or M.2 NVMe drives for tiered storage or caching, and choose a non-ECC chipset, is that a big no-no?
Thank you for your time!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hello,
Welcome to the forum.

Some of the things you are asking about don't make sense for use with FreeNAS. For example:
Ability to run Optane later is a big plus
There is no reason you would want to do this, unless you plan to do something with the FreeNAS system that you did not tell us about. Likewise:
NVMe M.2 or U.2 support would be really nice
Again, there is no reason for this unless you plan to use the FreeNAS system in some way you have already said you are not planning to.
If there is something else you do want to use the system for, some uses require special hardware, just not the kind you asked about. So if you say what your plans are, we can recommend what would actually work.

Here are some references about what hardware will work and how ZFS works

FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
You mention having your pfSense box and VM host in a rack. Will your proposed FreeNAS box not fit there as well? There's a TON of off-lease Supermicro systems out there for quite cheap that make killer FreeNAS boxen.

You can add M.2 support with a PCIe card. Don't compromise on your motherboard trying to get a single M.2 slot!

Your RAM requirements are a challenge. The motherboard you've selected, per Asrock, only supports 32GB. Some of the latest E3s will support 64GB. Anything beyond that requires reaching into the E5 world. Again, off-lease refurb Supermicro systems reign here. That's what my system is (see my sig), and I was, fairly affordably, able to reach 192GB. Going beyond that requires 16GB LRDIMMs, which aren't affordable to me right now.

Some people run Ryzen and it seems to be OK with the absolute latest version of FN. But, keep in mind that you'll be part of the 1%, not the 99% running Intel systems. Your resources will be dramatically lessened when resolving issues. Again, used Supermicro boxes are great here... they're old enough they aren't bleeding edge and are, in most cases, 110% supported by reasonably new versions of FN.

Please don't use USB flash drives for boot. We see a ton of people having weird issues with them as they die, and they're slow as crap. Even a single small SSD is a far more reliable and fast option.
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
Thank you Chris for your very helpful reply.

I had failed to mention earlier that, aside from being a repository for my personal data (especially my DVD/BD/CD rips - I'm an obsessive data hoarder), I hope this project will teach me enough to design a HIPAA-compliant system in the near future for my brother, who runs a home health clinic or two.

My original assumption (wrong, as it turns out) was that ZIL and L2ARC would be very useful for home/small-lab purposes, and I underestimated the importance of more RAM. Thank you for sharing the slideshow, which plainly educated me. I'm approaching this similarly to how I design enterprise systems for work: reaching out to experts so I understand the various solutions. I had overestimated the need for ZIL and L2ARC because I follow ServeTheHome, and their articles about using Optane for ZIL made it look amazing.

I was also hoping to use the proposed FreeNAS box as an ESXi datastore, and the slideshow mentions that ZIL and L2ARC would be useful there. On second thought, though, I might be better served by some other type of storage array using only SSDs for that purpose.

Chris Moore said:
Instead of buying that new hardware, you should look at something like this:
https://www.ebay.com/itm/Supermicro...-2-6ghz-8-Core-128gb-24-Bay-JBOD/232656106862
It is actually cheaper than the parts you had selected and it already has 128GB of RAM.

When I said "keep costs as low as possible", I'd like to clarify that I'm not on a shoestring budget. In fact, my budget is currently open-ended; I just don't want to overspend on overkill. My current workstation is a Sandy Bridge, which is pushing on in years. Is it really all right to use a Sandy Bridge-EP/EN based off-lease server?
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
Ah, right. NVMe is a direct PCIe connection, so I can just use a carrier card. I hadn't thought of that one, thank you!

While I have a few off-lease rack servers based on Xeon E3/E5 v2/v3, I'm a bit wary of using "v1" hardware as it is already quite old. The last rack purchase I made was in late 2016, a few months after the v4 parts came out, and it was a v3 system.

I wasn't planning to install the FreeNAS box in my rack, though if I had to, or had a change of plan, I certainly could. The main reason is that my rack is in the garage where it's cooler, and my home office/workbench is across the house. I had intended to plop the FreeNAS box in a corner of that room, next to the switch. Another reason I'm so infatuated with the U-NAS case is that it's small, at about 18 liters. A 3U rackmount is about 40 liters, and a regular desktop tower like a Fractal Design R4/R5 would be 55 liters. Small is good, but it's not a dealbreaker. Maybe I'm just sick of my workstation's massive Corsair 900D and neurotically want everything to be small :)

Perhaps in time Ryzen will be better supported. I can't really blame the current state of CPU support; AMD gave up their leadership position a decade ago, so naturally Intel is better supported. I'll likely stick with Intel, then.

I just saw that the revised U-NAS NSC-810/A chassis both support a dedicated 2.5" mount for the boot drive. Is it imperative to have mirrored boot drives, or will one do? What I mean is: if the boot drive dies somehow, would the entire zpool be lost?

I'll be revising my proposed build sometime tomorrow.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The V1 hardware is still quite sufficient for anything you're likely to do, but if you have the budget to go newer, so much the better. It also lets you stock up on memory... you can buy 8GB DDR3 ECC sticks off eBay for $20/ea. and get to 192GB like mine quite cheaply. I'm running 40-50 VMs off mine (including some stuff that hits the drives hard, like Splunk and a large Plex library) plus my CIFS datastore, and I've never hit the wall on CPU. You could always buy the chassis and, a few years down the road, drop a new motherboard in.

As for the choice of chassis, rack chassis are nice simply because they're designed to hold lots of drives. Other chassis often have issues with cooling, etc.... rack stuff just works. Not the cheapest/smallest/quietest, but it gets the job done well.

A single SSD boot drive is an order of magnitude better than even mirrored USB sticks... back up your configuration regularly and, in the worst case, you have to reinstall FreeNAS and reimport your config. You will *not* lose any data if you lose the boot pool.
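
If you want to script the config backup instead of relying on the GUI's Save Config button, here is a minimal sketch of the idea. It assumes the config database is at /data/freenas-v1.db (the usual location FreeNAS uses); the backup dataset path is just a placeholder for illustration.

# Minimal sketch: copy the FreeNAS config database to a dated file on the pool.
# /data/freenas-v1.db is the usual location of the config; the destination
# dataset below is a hypothetical example, not a recommendation.
import shutil
from datetime import date
from pathlib import Path

CONFIG_DB = Path("/data/freenas-v1.db")
BACKUP_DIR = Path("/mnt/tank/backups/config")   # placeholder dataset path

BACKUP_DIR.mkdir(parents=True, exist_ok=True)
dest = BACKUP_DIR / f"freenas-config-{date.today().isoformat()}.db"
shutil.copy2(CONFIG_DB, dest)
print(f"Saved config backup to {dest}")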
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
I think I'd rather spend a bit more to get at least a v3 system. Ideally, to be honest, I'd wait until next year, but I'm quite sick of my huge Corsair 900D and want to start offloading my long-term data storage to something independent. Right now I have 20TB of stuff on that workstation :x

In your opinion, is a chassis like the U-NAS NSC-8x0 too compromised in terms of motherboard choice and expandability? It looks like I need at least a mATX motherboard to get 4 DIMM slots if I want to reach 64GB of RAM at a lower price point.

I really love the Chenbro chassis, but 48 drive bays seems like overkill for me right now. At most I'd be running 8 drives, either 8TB or 10TB.

On the topic of drives, would one vdev of 8 drives in Z2 be fine? With drives that big, I'd be a bit concerned about losing 3 drives before FreeNAS can finish resilvering. How recommended is Z3 for 8 high-capacity drives? I'm not that concerned about throughput, as long as I can push data fast enough that it's constrained by my 1 GbE network.

Something else I could do is transplant my workstation into a more modern chassis, downsize to just the SSD and maybe a single big drive, and move the 6x 2TB + 2x 4TB drives to a FreeNAS box. Maybe configure the pool with two vdevs: vdev0 (4 x 2TB) and vdev1 (4 x 4TB). In that case, I'd pick up two more 4TB drives.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
The "ZFS police will give you a pony" answer is no wider than 6 disks in RAID-Z2 and 11 disks in RAID-Z3. If I was building something in one of those U-NAS cases and wanted to put all 8 drives in a pool, I'd probably do RAID-Z2 and call it a day. Keep in mind, that means you only have one vdev... which gives you the IOPS of one drive. Fine for storing media, etc., but not recommended for anything with lots of random access.
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
:\ This is getting quite complicated! I appreciate your replies, though, Chris and tvsjr; I've learnt a lot.

So while at work today, multitasking during a design call, I was thinking about sizing, and it looks like it might be optimal to do a pool of two 6-disk RAID-Z2 vdevs. With 8TB disks, that would get me 64TB of usable (unformatted) storage: 2 vdevs x 4 data disks x 8TB.

Upon further reading, hot spares seem like a good idea too, so if I assign 2 hot spares to the pool, I'd need a 14-bay chassis. Unfortunately this means that non-server chassis (like the U-NAS NSC-8x0 or Fractal Design R5) won't support this configuration.

This is the current parts list I've come up with (please don't mind the chassis and PSU, they're just placeholders):
https://pcpartpicker.com/list/cBvVnH

It looks like DDR4 is still quite expensive and hasn't settled down in price after last year's spike. At this point I'm debating whether to buy new, or checking eBay regularly to see if there are any decent off-lease systems that cost ~$2,000 without disks.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It looks like DDR4 is still quite expensive and hasn't settled down in price after last year's spike. At this point I'm debating whether to buy new, or checking eBay regularly to see if there are any decent off-lease systems that cost ~$2,000 without disks.
That server I suggested a few posts back is still available... It will do everything you want and more, and it is 'only' $1039.99 plus shipping, which is less than the parts you have picked out.
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
I've been playing with parts lists and upon checking the historical prices, I might be better served if I wait a few months for the November sale.

I did some more reading on RAID-Z vs mirrored vdevs and came to the conclusion that, since hard drives are relatively cheap, I should be using mirrored vdevs to avoid a RAID-Z array running degraded for too long when a drive fails -- or, god forbid, losing more drives than the parity can cover.

Also, if I wait it out, Intel is supposed to be releasing new silicon that fixes Meltdown/Spectre, and AMD support may be better by then. Right now a big concern at work is that our older v1/v2 servers will take a performance hit when the microcode is applied, yet there isn't any clear direction from Intel on how big the hit will be. We are also stuck since some servers are at about 80% capacity, and Meltdown/Spectre is making it tough to plan the next 10-year design.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Why did you come to that conclusion? Striped mirrors actually increase your chances of pool loss... RAID-Z2 will survive any two drives failing, while 2-way striped mirrors are only guaranteed to survive one... you might get lucky and drop 2 drives in different vdevs, but 2 drives in one vdev is fatal.

I'm sitting here moving files around my 6-disk RAID-Z2 array at north of 300MB/sec. with no trouble. If you're not looking for something that can push massive IOPS, you're just fine running RAID-Z2.
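
To put numbers on the "which drives fail" point, here's a small brute-force sketch for 8 drives. The two layouts are hypothetical (one 8-wide RAID-Z2 vs four 2-way mirrors); it just counts which failure combinations would take the pool down.

# Count which drive-failure combinations kill the pool, for 8 drives.
# A RAID-Z2 vdev dies on its 3rd failure; a 2-way mirror dies on its 2nd.
from itertools import combinations

def pool_dies(failed, vdevs, tolerance):
    # The pool is lost if any single vdev loses more drives than it can tolerate.
    return any(sum(d in failed for d in v) > tolerance for v in vdevs)

drives = range(8)
raidz2 = [list(drives)]                      # one 8-wide RAID-Z2, tolerates 2 per vdev
mirrors = [[0, 1], [2, 3], [4, 5], [6, 7]]   # 4 x 2-way mirrors, tolerate 1 each

for n_failures in (2, 3):
    combos = list(combinations(drives, n_failures))
    z2_fatal = sum(pool_dies(set(c), raidz2, 2) for c in combos)
    mi_fatal = sum(pool_dies(set(c), mirrors, 1) for c in combos)
    print(f"{n_failures} failures: RAID-Z2 fatal in {z2_fatal}/{len(combos)}, "
          f"striped mirrors fatal in {mi_fatal}/{len(combos)}")

With two failures, RAID-Z2 never loses the pool, while with striped mirrors about 1 in 7 of the possible two-drive combinations is fatal; the mirrors only start to look better once a third drive goes.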
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
Ah, let me clarify... I meant a pool of mirrors, e.g. vdev0: 2x mirrored disks, vdev1: 2x mirrored disks, etc. I figure 4 vdevs of mirrored 8TB disks would be sufficient for my needs, even at 50% storage efficiency. I might be misunderstanding the FreeBSD ZFS and FreeNAS documentation, though. Is this still considered a striped pool of mirrors? Please kindly correct me where I'm wrong.

Another soft constraint is that I really don't want to use up 4U of rack space, since I'm eyeing filling it with a deep learning box. I also don't go out to the garage often except when something in the rack breaks down (rare), and I'd like the FreeNAS box to be close to my workbench. All reasons that probably seem silly in the end.
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
Another thought: 1 GbE theoretically maxes out at only 125 MB/s. If the pool can do 300 MB/s, I'd be hard pressed to saturate it unless I bond the NICs and two clients simultaneously pull the theoretical max of 1 GbE, which is unlikely for my use case of "storing stuff" and occasionally streaming it.
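
Just writing the arithmetic down (the 300 MB/s is the sequential figure tvsjr mentioned, not something I've measured):

# Back-of-the-envelope: how many GbE links could the pool keep busy?
GBE_MB_S = 1000 / 8    # 1 Gb/s = 125 MB/s theoretical, less after protocol overhead
POOL_MB_S = 300        # sequential figure quoted above for a 6-disk RAID-Z2

print(f"One GbE link tops out at {GBE_MB_S:.0f} MB/s")
print(f"A ~{POOL_MB_S} MB/s pool could feed roughly {POOL_MB_S / GBE_MB_S:.1f} such links")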
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Yes, that's a pool of striped mirrors.

If you're looking at bulk storage, throw 8 of the largest drives you can afford in there, set up a pool with one 8-drive RAID-Z2 vdev, and be happy.
 

hoankiem

Dabbler
Joined
Feb 16, 2018
Messages
10
But with more disks in a vdev, the resilvering process will take longer, if I'm not mistaken. Is the 2014 comment by Matthew Ahrens still relevant?:

“For best performance on random IOPS, use a small number of disks in each RAID-Z group. E.g, 3-wide RAIDZ1, 6-wide RAIDZ2, or 9-wide RAIDZ3 (all of which use ⅓ of total storage for parity, in the ideal case of using large blocks). This is because RAID-Z spreads each logical block across all the devices (similar to RAID-3, in contrast with RAID-4/5/6). For even better performance, consider using mirroring.”

My original idea, before switching to the striped pool of mirrors, was to have two vdevs of 6-disk-wide RAID-Z2. I suppose that means I'd now be striping two RAID-Z2 vdevs in the pool, which brings it back to the point you raised about striping. However, since each vdev can lose two disks, it should still be fine, right?

Then there's the issue of chassis. I just remembered that the NZXT H440 can fit 11 disks directly behind the triple 140mm fan array. If the last disk were on a dual sled rather than chassis-mounted, so it could hold 12 disks, it would have been perfect, since the chassis is only about 55 L (roughly the same size as the Fractal Design R5).
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Yes, parity RAID takes longer to resilver. But, so what? That's what it's designed to do. At some point, you have to determine what your level of paranoia is. I mean, the most performant and paranoid solution would be something like 48 drives in 3-way striped mirrors... but that's rather impractical.

From an IOPS perspective, the IOPS of your pool is the sum of the IOPS of the slowest drive in each vdev. That's why we recommend striped mirrors for VM workloads, because they are usually quite IO heavy. But, if you're storing media, you don't need ludicrous IOPS.
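
As a rough sketch of that rule of thumb (the ~75 IOPS per 7200rpm disk is a generic ballpark, not a measurement of any particular drive):

# Rough random-IOPS estimate: about one drive's worth of IOPS per vdev.
DRIVE_IOPS = 75   # generic ballpark for a 7200rpm SATA disk

layouts = {
    "1 x 12-wide RAID-Z2": 1,   # vdev count
    "2 x 6-wide RAID-Z2":  2,
    "6 x 2-way mirrors":   6,
}
for name, vdevs in layouts.items():
    print(f"{name:20s} ~{vdevs * DRIVE_IOPS} random IOPS")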

If you're doing 12 drives, then yes, I'd do 2 6-drive vdevs of RAID-Z2.

If you want to be paranoid, you should be considering your off-site backup. Rather than going insane with the drives, a much better solution would be to build two identical boxes, locate the second one off-site, run a VPN tunnel between them, and back up your main system with replication.
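
FreeNAS has built-in snapshot and replication tasks in the GUI for exactly this, but if you want to see the moving parts, here's a minimal hand-rolled sketch of a one-shot full send. The dataset and host names are placeholders; a real setup would use the GUI tasks (or at least incremental sends).

# Minimal sketch: snapshot a dataset and stream it to an off-site box over SSH.
# Dataset names and the remote host are placeholders, and this does a full
# (non-incremental) send -- the GUI replication tasks are the normal way.
import subprocess
from datetime import datetime

SRC = "tank/media"                   # local dataset (hypothetical)
DST = "backup/media"                 # dataset on the remote pool (hypothetical)
REMOTE = "backup-nas.example.lan"    # off-site box reachable over the VPN

snap = f"{SRC}@repl-{datetime.now():%Y%m%d-%H%M%S}"
subprocess.run(["zfs", "snapshot", snap], check=True)

# Equivalent of: zfs send <snap> | ssh <remote> zfs recv -F <dst>
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["ssh", REMOTE, "zfs", "recv", "-F", DST],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()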
 