Hardware Recommendations Guide

Hardware Recommendations Guide Discussion Thread (Rev 2a) 2021-01-24

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Of course I understand your ambition to keep the Hardware Guide brief and concise, but one additional (foot)note about consumer Atoms in the CPU section on page 6 might help to avoid confusion, stating something like:

With the advent of Silvermont/Bay Trail in 2013, Intel rebranded consumer Atom SoCs as Pentium/Celeron Jxxxx (desktop variants) and Pentium/Celeron Nxxxx (mobile variants).
I don't think it's necessary, since they're only available as BGA parts and the respective motherboards are nowhere to be seen in the recommendations.
 

scwst

Explorer
Joined
Sep 23, 2016
Messages
59
Just as a remark: One thing I belatedly noticed is that the document assumes that storage will be on conventional drives, not SSDs. That's probably the case for 99.9 percent of systems these days, but if my math is correct, a GByte of SSD is now "only" ten times as expensive as a GByte of spinning rust, so that will change at some point in the near future.

I've tried to find a thread about what changes when your pool is built out of SSDs, but trying to search for "SSD" or "flash" and "pool" is rather pointless. Has this been discussed at some point already?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There isn't much data on the subject. That said, there shouldn't be any major differences (though more RAM than for an HDD pool will probably be needed to realize the pool's full potential).

The only major change would be NVMe storage on the same scale as SAS, but that's still very expensive.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
With respect to SSD storage as a pool/vdev, I would think that there are several factors to consider that would directly affect the life of the SSD, and thus which type of SSD to purchase. Do a Google search for "zfs pool vdev ssd". The main thing which comes to mind is price. If I were to build a pool out of SSDs, right now I would not use anything less than the Samsung 850 Pro, and would prefer the Samsung SM863.
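For example, a rough way to compare endurance ratings (the DWPD and capacity figures below are purely illustrative; check the actual datasheets):

Code:
# Back-of-the-envelope endurance comparison. Vendors rate drives in DWPD
# (drive writes per day over the warranty period) or TBW (total terabytes
# written); the figures below are illustrative, not datasheet values.
def rated_tbw(capacity_tb, dwpd, warranty_years):
    """Terabytes the vendor warrants you can write, given a DWPD rating."""
    return capacity_tb * dwpd * 365 * warranty_years

print(rated_tbw(1.00, 0.3, 5))   # consumer-class drive:   ~548 TBW
print(rated_tbw(0.96, 3.0, 5))   # enterprise-class drive: ~5256 TBW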
but if my math is correct, a GByte of SSD is now "only" ten times as expensive as a GByte of spinning rust, so that will change at some point in the near future.
And as for cost, I don't see this gap closing that much in the next 5 years, meaning that spinning rust will still be substantially cheaper than an SSD per GB. If anything, I see the price of spinning rust dropping all the time, and some sales are really good. I think that once SSDs can be offered at a price competitive with spinning rust, meaning the companies are willing to make less profit, then there will be a shift.

Should SSDs as a pool/vdev be added to the Hardware Guide? I don't think so. If anyone has that kind of money, I would expect them to do some research before they spend many thousands on it. Really, say you wanted to make a ~7TB pool, which is small for a home user (average for me, though), but maybe there is a specific use case. You buy three Samsung MZ-7LM3T8Z PM863 Series 3840GB drives and build a RAIDZ1, and you end up with a ~6.9TB pool. The cost of those 3 drives is ~$5,500 USD. You could use different drives and a RAIDZ2 configuration, and the price would of course change. My point is, the price is very high. If you were a data center, you could likely justify the cost because the investment would pay for itself in longevity, power consumption, air conditioning, etc. As a home user, it's pretty difficult to justify.
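For anyone who wants to plug in their own numbers, the rough math behind that example (decimal TB vs. binary TiB, ignoring ZFS metadata overhead, so real usable space is a bit lower):

Code:
# Rough pool capacity and cost math for the PM863 example above.
drives = 3
capacity_tb = 3.84          # "3840GB" drives, decimal terabytes
price_each = 5500 / 3       # derived from the ~$5,500 USD total above

parity = 1                  # RAIDZ1 = one drive's worth of parity per vdev
usable_tib = (drives - parity) * capacity_tb * 1e12 / 2**40
total = drives * price_each

print(f"~{usable_tib:.1f} TiB usable for ~${total:,.0f} (~${total / usable_tib:,.0f}/TiB)")
# -> ~7.0 TiB usable for ~$5,500 (~$787/TiB)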

SSD prices will drop in the future, but I don't see them replacing spinning rust any time soon.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Update on the glossary idea:
I'll be a bit pressured for time for the foreseeable future, but the idea is not forgotten. I'll integrate it with an aesthetic revision to make the whole thing look less like a document quickly thrown together using Word's default style.

Depending on release cycles, it may come before, with or after the first major content revision (Skylake-EP, probably). Minor product updates will still happen on an as-needed basis, to keep up with new and noteworthy variants, possible mass-defect situations, and whatever else the industry throws at us.

tl;dr - I now consider the document to be stable
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I noticed that the E3-1240 v5 is recommended as a higher-end E3. It probably makes more sense to recommend the E3-1230 v5 instead; it's the more practical CPU. The 1240 is 10% more expensive for 0.1 GHz more clock speed (i.e. ~3%), so it just doesn't make sense vs. the 1230 v5, which adds hyperthreading over the 1220 v5 :)
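Rough numbers, using approximate launch list prices (street prices vary, so plug in your own):

Code:
# Quick price/performance check using approximate list prices (assumed values).
cpus = {
    "E3-1230 v5": {"price_usd": 250, "base_ghz": 3.4},
    "E3-1240 v5": {"price_usd": 272, "base_ghz": 3.5},
}
cheap, fast = cpus["E3-1230 v5"], cpus["E3-1240 v5"]
price_delta = fast["price_usd"] / cheap["price_usd"] - 1
clock_delta = fast["base_ghz"] / cheap["base_ghz"] - 1
print(f"~{price_delta:.0%} more money for ~{clock_delta:.0%} more base clock")
# -> ~9% more money for ~3% more base clock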

Reality is, if someone wants an E3, they'll be steered to either the 1220 or 1230.
 

hyperq

Dabbler
Joined
Sep 6, 2015
Messages
10
X10SDV boards should also be included for Xeon D's much higher RAM limit and much lower power consumption, not to mention the smaller mini-ITX size, which can easily fit in a Node 304 case.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
X10SDV boards should also be included for Xeon D's much higher RAM limit and much lower power consumption, not to mention the smaller mini-ITX size, which can easily fit in a Node 304 case.
They are mentioned, but there are way too many options. I invite anyone who wants a closer look at these to write a guide explaining the various models, which I can reference and not have to go through every single model myself.
 
Joined
Feb 2, 2016
Messages
574
Maybe I'm the oddball, but I see SSD pools as here and now given so many people are using FreeNAS to host virtual machines.

We have a 2TB SSD pool that hosts 15 XenServer VMs (using about 400GB), made with four cheap, off-brand SSDs (*). That pool cost $900 and massively improved our virtual server performance. (Bulk data, when required, is stored on conventional hard drives, and the VMs then mount that data.)

For a hobbyist just looking to store torrented Blu-ray rips, that's expensive storage. For small and medium businesses using FreeNAS in place of commercial solutions, $900 is easily justified. By the time you factor in the cost of the server, 10GbE switch, conventional disks, etc., the SSD expense looks almost inconsequential given the performance increase.

In any case, if there are best practices or suggestions involving SSDs as something other than SLOG/L2ARC, it would be nice to include that in the hardware guide. If not a specific recommendation, maybe a 'you should consider SSDs for storage if you meet the following criteria:'?

Cheers,
Matt

(* ADATA Premier SP550, striped mirrors. We're comfortable using cheap SSDs because we snapshot and replicate often, tightly monitor our systems, and are okay with the half hour of downtime it would take to fail over to the replicated VM pool while the primary is offline. Your mileage may vary. We could have hosted all our VMs on a pair of mirrored 1TB Samsung 850 PROs for the same price, but preferred the greater overprovisioning to reduce wear, plus twice the IOPS.)
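For anyone curious about the layout trade-off, a rough sketch (the drive size is illustrative and ZFS overhead is ignored):

Code:
# Usable capacity of N drives arranged as striped two-way mirrors.
def striped_mirrors(n_drives, drive_tb, mirror_width=2):
    vdevs = n_drives // mirror_width
    return vdevs, vdevs * drive_tb   # each mirror vdev contributes one drive's capacity

vdevs, usable_tb = striped_mirrors(4, 0.96)   # four ~960GB drives (assumed size)
print(f"{vdevs} mirror vdevs, ~{usable_tb:.2f} TB usable")   # 2 mirror vdevs, ~1.92 TB
# Random IOPS scale roughly with the number of vdevs, so two striped mirrors
# give about twice the IOPS of a single two-drive mirror of the same capacity.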
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Maybe I'm the oddball, but I see SSD pools as here and now given so many people are using FreeNAS to host virtual machines.

I agree that all-flash pools are the here and now, or soon will be, just like 10GbE.

Just not sure there are actually community recommendations yet :)
 
Joined
Feb 2, 2016
Messages
574
I agree that all-flash pools are the here and now, or soon will be, just like 10GbE. Just not sure there are actually community recommendations yet :)

In that case, here's my community recommendation...

"
SSDs are awesome and hella fast but comparatively expensive. Don't be afraid of them. Consider an SSD-only pool when speed is required over capacity. Virtual machines especially love the performance SSDs can bring and often fit in a small footprint.

While SSDs are often used as L2ARC, that may not be the best place for them, since L2ARC eats RAM that might be better used by the ARC. Your biggest speed boost may come from an SSD pool instead of an L2ARC, especially in RAM-limited systems.

While SLOG and L2ARC call for enterprise-grade, mega-expensive SSDs (power-loss protection, high write endurance, low latency), storage pools can use consumer-grade SSDs. Even low-performing SSDs are an order of magnitude faster than conventional hard drives and no less reliable.
"

I'll leave it to the Editor to figure out if that's close enough to true to be a valuable recommendation. ;)
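To put a rough number on the "L2ARC eats RAM" point above (purely illustrative; the per-record header size and average block size depend on the ZFS version and your data):

Code:
# Every block cached on an L2ARC device needs an in-RAM header.
l2arc_bytes = 500e9          # assumed 500 GB L2ARC device
avg_block = 16 * 1024        # assumed 16K average record size
header_bytes = 180           # assumed per-record header overhead

ram_needed = l2arc_bytes / avg_block * header_bytes
print(f"~{ram_needed / 2**30:.1f} GiB of RAM just to index the L2ARC")   # ~5.1 GiB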

Cheers,
Matt
 
Joined
Mar 22, 2016
Messages
217
Just wait till NVMe SSD pools start showing up. Those are an order of magnitude faster than SATA SSDs. Hopefully I'll be able to shed some light on that subject soon(ish).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Just wait till NVMe SSD pools start showing up. Those are an order of magnitude faster than SATA SSDs. Hopefully I'll be able to shed some light on that subject soon(ish).
Guinea pigs are always welcome. :p
 

scwst

Explorer
Joined
Sep 23, 2016
Messages
59
The only large study I've found so far about the durability of SSDs is Flash Reliability in Production: The Expected and the Unexpected (https://www.usenix.org/node/194415) by Google. I haven't read the gory details yet, but one main takeaway seems to be something like "SSDs fail less often, but have more data errors", which sounds like it's just crying out for ZFS (or maybe I'm turning into a religious ZFS zealot).

Given how much faster SSDs are, I would speculate (until corrected by somebody more experienced) that RAIDZ1 would be in play again for larger vdevs where we'd now want to use RAIDZ2 for HDDs. Also, I'm not aware that FreeNAS has any mechanism for checking how "worn down" an SSD is and warning that the drive should be replaced?
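In the meantime, a manual check could probably be scripted, something like this rough sketch (assuming smartctl is installed and that the drive exposes a wear attribute at all; the attribute names here are just examples and vary by vendor):

Code:
import subprocess

WEAR_ATTRIBUTES = {"Wear_Leveling_Count", "Media_Wearout_Indicator", "SSD_Life_Left"}
WARN_BELOW = 20   # normalized value typically counts down from ~100

def check_wear(device):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] in WEAR_ATTRIBUTES:
            value = int(fields[3])   # the normalized VALUE column
            status = "WARN: wearing out" if value < WARN_BELOW else "ok"
            print(f"{device}: {fields[1]} = {value} ({status})")

check_wear("/dev/ada0")   # device name is just an example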
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Also, I'm not aware that FreeNAS has any mechanism for checking how "worn down" a SSD is and warning that the drive should be replaced?
Assuming the SSD provides the relevant data, it would be easy to add.
or maybe I'm turning into a religious ZFS zealot
If you worship Cthulhu, you already are.
 

Morpheusn

Cadet
Joined
Oct 20, 2016
Messages
1
In the mini-ITX part of the motherboard section, besides the ASRock Rack E3C236D2I, I also recommend
the C236 WSI, which has two more SATA ports, if you don't need IPMI control.
 

FreeNASBob

Patron
Joined
Aug 23, 2014
Messages
226
If I'm not too late to the party, I think it might be helpful to new prospective users if there were at least some kind of footnote or list of hardware that has been known to have problems (or a high failure rate). In my case, it would have helped to know that ASRock's boards were failing for many FreeNAS users after two years, and that ASRock was not addressing the issue, even replacing failed boards with equally defective ones. I'm not trying to be vindictive toward ASRock, but people are building FreeNAS systems specifically for reliability and the safety of their data. Hardware that has proven unreliable should raise big red flags for FreeNAS builders.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If I'm not too late to the party, I think it might be helpful to new prospective users if there were at least some kind of footnote or list of hardware that has been known to have problems (or a high failure rate). In my case, it would have helped to know that ASRock's boards were failing for many FreeNAS users after two years, and that ASRock was not addressing the issue, even replacing failed boards with equally defective ones. I'm not trying to be vindictive toward ASRock, but people are building FreeNAS systems specifically for reliability and the safety of their data. Hardware that has proven unreliable should raise big red flags for FreeNAS builders.
I tend to agree. The next release, presumably in early February, will mention the C2x50D4I issues.

That said, that issue has been worked around for now. It's not ideal, but boards should stop dying left and right.
 

FreeNASBob

Patron
Joined
Aug 23, 2014
Messages
226
From the watchdog timer issue, perhaps. I think they're still dropping dead from failing voltage sensors, though.
 