Hardware Recommendations Guide

Hardware Recommendations Guide Discussion Thread Rev 2a) 2021-01-24

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
SLOG devices should be high-end PCIe NVMe SSDs, such as the Intel P3700. The latency benefits of the NVMe specification have rendered SATA SSDs obsolete as SLOG devices, with the additional bandwidth being a nice bonus.

SLOG devices should be high-end PCIe NVMe SSDs with Power Loss Protection...

Otherwise we'll see a bunch of Samsung sm960 based slogs ;)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
SLOG devices should be high-end PCIe NVMe SSDs with Power Loss Protection...

Otherwise we'll see a bunch of Samsung sm960 based slogs ;)
Very important point that I completely forgot to mention.
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
SLOG devices should be high-end PCIe NVMe SSDs with Power Loss Protection...

I'm wondering if the requirement for "power loss protection" should be stated more precisely as "in-flight data power loss protection".

At least on Micron/Crucial SSDs there are two different implementations of power loss protection:
- In-flight data protection as seen on the (enterprise level) Micron M500DC using tantalum capacitors
- Data-at-rest protection on client SSDs like the Micron M600 and the Crucial MX100 and MX200 using tiny ceramic capacitors

References:
Anandtech Micron M600 SSD Review (section "The Truth About Micron's Power-Loss Protection" on page 1)
Anandtech Crucial MX200 SSD Review (last paragraph on page 1)

While these Anandtech articles are reviewing SATA SSDs, I think it's not unlikely that we will see those different levels of power loss protection on M.2 PCIe NVMe SSDs as well. And many users will be tempted to use something cheaper (and more compact) than an Intel P3700.

Am I overcautious here? The openzfs wiki mentions the (data-at-rest protected) Crucial MX200 as having power loss protection, being inexpensive and being suitable as an SLOG device. OTOH, the Intel Power Loss Imminent (PLI) Technology Brief explicitly talks about "in flight" data.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm wondering if the requirement for "power loss protection" should be stated more precisely as "in-flight data power loss protection".

At least on Micron/Crucial SSDs there are two different implementations of power loss protection:
- In-flight data protection as seen on the (enterprise level) Micron M500DC using tantalum capacitors
- Data-at-rest protection on client SSDs like the Micron M600 and the Crucial MX100 and MX200 using tiny ceramic capacitors
Way ahead of you, I literally fixed that section a few minutes ago and made it clear that older Crucial/Micron consumer models don't cut it.

Am I overcautious here? The openzfs wiki mentions the (data-at-rest protected) Crucial MX200 as having power loss protection, being inexpensive and being suitable as an SLOG device. OTOH, the Intel Power Loss Imminent (PLI) Technology Brief explicitly talks about "in flight" data.
No, it's a valid concern. Besides, those older drives are too slow to be useful.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There are a few more fixes and tweaks lined up, so I'm expecting to publish revision c tomorrow afternoon. No, I'm not going to keep up the one-every-two-days release cadence forever. :p
 

scwst

Explorer
Joined
Sep 23, 2016
Messages
59
Uh, am I just not awake yet, or is there no section on Ethernet etc.? Maybe just a paragraph that points out some of the speed questions, such as 60 MByte/sec being realistic for 1 Gbit Ethernet, so building that cool array of SSDs is pointless. Also, a link to the 10 Gbit Ethernet primer, or at least a mention that they are so expensive your head will explode, might be nice (I still don't understand why those cards are so friggin' expensive, BTW).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
60 MByte/sec being realistic for 1 Gbit Ethernet
That's actually rather slow, not realistic for GbE.

As for talking about network hardware, I'll have to think about it.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm not convinced a dissertation on networking speeds belongs.


But it's Hardware Recommendations, not a networking textbook. Gigabit is the default; you should not be getting hardware slower than gigabit. If you get gigabit, you want an Intel NIC. If you want faster than gigabit, you are basically limited to 10 gigabit, and with 10 gigabit you are limited to Chelsio or Intel NICs, either SFP+ or 10GBase-T.

Gigabit = 1,000 megabit/s = 125 MB/s line rate (up to ~119 MB/s of TCP payload)
10GbE = 10,000 megabit/s = 1,250 MB/s line rate (up to ~1,187 MB/s of TCP payload)

(insert the correct numbers ;))
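
For reference, a quick sketch of where those figures come from - assuming a standard 1500-byte MTU and plain IPv4/TCP without options (textbook overheads, not measurements), so real-world SMB/NFS numbers usually land a bit lower:

```python
# Rough sketch: theoretical TCP payload throughput over Ethernet,
# assuming a 1500-byte MTU, IPv4 and TCP without options.
# Overhead figures are the usual textbook values, not measurements.

def tcp_payload_mbytes_per_s(link_gbit_per_s: float) -> float:
    mtu = 1500                        # bytes of IP packet per frame
    tcp_payload = mtu - 20 - 20       # minus IPv4 and TCP headers
    on_wire = mtu + 14 + 4 + 8 + 12   # Ethernet header, FCS, preamble, inter-frame gap
    efficiency = tcp_payload / on_wire
    line_rate_mbytes = link_gbit_per_s * 1000 / 8  # Gbit/s -> MB/s
    return line_rate_mbytes * efficiency

print(f"GbE  : {tcp_payload_mbytes_per_s(1):.0f} MB/s")   # ~119 MB/s
print(f"10GbE: {tcp_payload_mbytes_per_s(10):.0f} MB/s")  # ~1187 MB/s
```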

That's pretty much the end of the Hardware Recommendations as far as networking.
 
Last edited:

scwst

Explorer
Joined
Sep 23, 2016
Messages
59
Actually, I was just thinking more of "if you want to run a pool of SSDs and actually get the most out of it, you'll need 10 GbitE, which means Chelsio or Intel", or "your onboard 1 GbitE will be saturated even by a 'slow' 5k-rpm HD". Given the way SSD prices are falling, we're going to have a lot of people starting to think about 10 GbitE, even at those completely ridiculous prices.

Also, should we at least mention the words "Fibre Channel" somewhere, or is that overkill for the intended user base?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Actually, I was just thinking more of "if you want to run a pool of SSDs and actually get the most out of it, you'll need 10 GbitE, which means Chelsio or Intel", or "your onboard 1 GbitE will be saturated even by a 'slow' 5k-rpm HD". Given the way SSD prices are falling, we're going to have a lot of people starting to think about 10 GbitE, even at those completely ridiculous prices.

Also, should we at least mention the words "Fibre Channel" somewhere, or is that overkill for the intended user base?

Although Fibre Channel and InfiniBand do work (I believe), they are very far from the mainstream.

Re: SSDs and 10GbE - that, plus PCIe NVMe. I think a threshold is about to be crossed.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I don't think there's much of a point in saying "You'll be limited to gigabit speeds when using GbE, despite the rest of your system being capable of more".
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
R1c) is out. The optimist in me expects this to be the last version in a while.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Thanks for the effort you've put in, @Ericloewe!
Here are some of my suggestions.

p. 1: Chassis [suggestion for replacing the introduction]

Choosing a chassis depends on several interrelated factors that fall into a few broad categories: hard drive temperature, drive connectivity (enough capacity in terms of both data cabling and power), and noise/heat considerations.

It is imperative to pay close attention to temperatures. One recommended system may not exhibit the same temperature characteristics as a fairly similar setup: ambient temperature, cable management and other restrictive factors such as clogged filters significantly impact temperatures, and therefore drive health.
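
As an illustrative aside (assuming smartctl from smartmontools is installed; the device names and the 40 °C threshold below are placeholders, and the parsing assumes the classic ATA SMART attribute table), drive temperatures could be polled along these lines:

```python
# Minimal sketch: poll drive temperatures via smartmontools.
# Device names are examples (/dev/ada0.. on FreeBSD, /dev/sda.. on Linux).
import subprocess

DRIVES = ["/dev/ada0", "/dev/ada1"]  # adjust for your system

def drive_temperature(dev):
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            parts = line.split()
            if len(parts) >= 10:
                return int(parts[9])  # RAW_VALUE column of the attribute table
    return None

for dev in DRIVES:
    temp = drive_temperature(dev)
    if temp is not None and temp > 40:  # ~40 C is a common rule of thumb for HDDs
        print(f"{dev}: {temp} C - check airflow and filters")
```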

In the following section, three typical categories are outlined, with recommendations for each.


-----

The following is a suggestion regarding future-proofing a system. (This text started out as a brief outro to the chassis section, but grew into something more important.)



The system categories mentioned in the chassis section aim to address the total lifespan of the system - that is, beyond the initial purchase, including future upgrades before the machine is too old for duty.
Some users can afford to build a system that will cover all their storage needs for the next ~5 years, until the hardware has expired - great. Others might consider adding storage before the end of the system's initial lifespan. This section discusses an approach to that scenario.

Designing for the future is difficult, but highly rewarding if considered during the initial design phase. A system design can easily become over-optimized for its first form - or, conversely, highly capable of incorporating a substantial upgrade without the need to rebuild the entire system!
However, upgrading a FreeNAS system entails considerations on multiple levels and therefore needs some attention. That is the topic of this section.

We encourage every user to assess their future data growth: pick a period a few years ahead and guesstimate the yearly growth. Take into account the usable space as calculated by Bidule0hm's calculator (a lot of space is dedicated to redundancy and ZFS overhead!). For example, six 4TB drives in RAIDZ2 is not ~24TB of storage, but closer to 12.6TB according to the calculator. Use the latter figure for your data expansion plan.
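
As a rough illustration of that arithmetic - this is not Bidule0hm's calculator; the ~2% metadata overhead and the 80% fill rule below are simplified assumptions, and the real calculator accounts for more (TiB conversion, swap partitions, padding):

```python
# Back-of-the-envelope usable-space estimate for a RAIDZ vdev.
# The overhead and fill factors are rough assumptions, not exact ZFS accounting.

def usable_tb(drives, parity, drive_tb, metadata_overhead=0.02, max_fill=0.80):
    data_tb = (drives - parity) * drive_tb        # capacity left after RAIDZ parity
    return data_tb * (1 - metadata_overhead) * max_fill

# Example from the text: six 4TB drives in RAIDZ2
print(f"{usable_tb(6, 2, 4):.1f} TB usable")      # ~12.5 TB, close to the ~12.6TB above
```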

Adding storage introduces challenges for your power supply, RAM capacity and SATA port capacity, as well as for cooling and physically fitting the drives into the case. Depending on your design choices, an upgrade can either be easily accommodated or ...rather impossible to implement, forcing significant expenditure to recover from poor planning during the design process.

Here follow some suggested parameters and an example of how to approach the assessment:

Let's pretend the initial system consists of a system SSD, a six-drive (4TB) RAIDZ2 vdev and 32GB of RAM, all connected to the motherboard's 8 SATA ports. This requires a chassis and power supply capable of handling 8 drives.
Now make an on-paper design of the upgraded system's "final form" - beyond any foreseeable capacity requirement during the life expectancy of the box.
In this case, we'd like to add another six-drive RAIDZ2 vdev to our pool, this time using 10TB drives.
A couple questions arise:
A) Can the case fit the total number of drives? -> Would I need to step up to a larger chassis category?
B) Additional SATA ports need to be added; in this case an 8-port HBA would be a cost-efficient way to attach the drives. -> Am I in the range where I'd be better off with a rackmount chassis and an expander backplane, rather than adding multiple HBAs?
C) Will the selected power supply be capable of handling the additional load? (Check the power supply sizing guide here; see the rough sketch after this list.) -> Would I need to replace the PSU, or should I size it up from the start?
D) Is the initial amount of RAM enough to handle the increased space? Can RAM be upgraded to meet the requirements on the selected motherboard? -> Do I need to select another platform that supports sufficient amounts of RAM?
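
To put some numbers behind question C, a rough power-budget sketch follows; the per-component wattages are ballpark assumptions rather than datasheet figures, so use the PSU sizing guide and your drives' datasheets for a real build:

```python
# Rough power-budget sketch for question C. All figures are ballpark assumptions;
# staggered spin-up, PSU efficiency curves etc. are ignored.

SPINUP_W_PER_HDD = 25    # a 3.5" HDD can draw ~2 A on 12 V plus 5 V load at spin-up
CPU_W            = 80    # e.g. a Xeon E3-class TDP
BOARD_RAM_MISC_W = 60    # motherboard, RAM, HBA, fans, boot SSD
HEADROOM         = 1.25  # ~25% margin for ageing and derating

def psu_estimate_watts(hdd_count):
    worst_case = hdd_count * SPINUP_W_PER_HDD + CPU_W + BOARD_RAM_MISC_W
    return worst_case * HEADROOM

print(f"Initial 6-drive build:   {psu_estimate_watts(6):.0f} W")   # ~360 W
print(f"Upgraded 12-drive build: {psu_estimate_watts(12):.0f} W")  # ~550 W
```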

When these steps are assessed, you'll have a pretty good idea of which components will need to be replaced in a significant upgrade of the system.
It is during this phase that you can make informed decisions on how to minimize future waste, by designing the initial build to accommodate a life-extending upgrade later on - for a vastly improved total lifespan.


---------

On p. 6 CPU
I'm not convinced there is a place to keep the X10 system recommendations in this document. I have not seen a single post in the forum over the last couple of months in which the OP has chosen an X10 system and the forum's contributors have NOT suggested an X11 system instead. X11 suggestions are the norm, and that should be reflected in the document. In its present form, the EOL products are given way too much attention. A suggested solution would be to add, at the end of each CPU description, a parenthetical note along the lines of the "(X10 - ...)" remarks in my draft below.

Beyond this, I'd like to decrease the number of example CPUs provided. I feel some of them are redundant, judging from the recommendations typically given in the forums.

edit: I figured I'd have a go at redoing the entire CPU section with the following improvements in mind: simplifying the layout and reducing models/clutter, while trying to add some flesh to the descriptions.




CPU

The major restriction on CPU choice for a FreeNAS server is support for ECC RAM. Intel Core i5 and Core i7, as well as consumer Atom, do not support ECC functionality.
Most workloads can be handled by LGA115x – the main reasons to choose a Xeon E5 are support for larger amounts of RAM and, to a lesser extent, additional PCIe connectivity. Haswell/Broadwell and older Xeon E3 CPUs are limited to 32GB of RAM, and Skylake CPUs are limited to 64GB of RAM.

Four classes of CPU power are presented below, with 'go-to' models for each. Moving to a more powerful model within the same processor family (Pentium, i3, E3) is rarely a significant upgrade, with the exception of the very heavy duty models.

Light usage
For servers expected to do little more than provide file sharing, a low-end CPU from the Pentium line often suffices.

Intel Pentium G4400 (LGA1151)
A popular choice for light workloads. Two cores, no turbo boost and no hyperthreading. A significant upgrade over earlier generations is support for AES-NI, which used to be available only on Core i3 and above. (X10 - the previous generation's corresponding model is the G3220)

Intel C2550 (Avoton SoC)
Only available embedded in boards, the C2550 uses four Atom cores and is capable of respectable performance in realistic workloads. Though it supports up to 64GB of RAM, the required 16GB DDR3 UDIMMs are prohibitively priced, making a Xeon E5 system notably cheaper.

Medium usage
Some users may have more significant requirements that necessitate a faster CPU. Heavy users of Jails/Plugins/VMs tend to fall into this category, as do users who require regular transcoding. Some specific features not available on lower-end CPUs may also necessitate an upgrade to this category.

Intel Core i3-6100 (LGA1151)
An entry-level model capable of handling most small server workloads with ease. Two cores with hyperthreading but no turbo boost. (X10 - the previous generation's corresponding model is the i3-4340)

Heavy usage
Generally speaking, the typical heavy workload involves multiple concurrent, high-quality transcodes.

Intel C2750 (Avoton)
Only available embedded in boards, the C2750 uses eight Atom cores and is capable of surprisingly good performance in realistic workloads, handling several simultaneous transcodes and achieving respectable speeds over 10GbE networks. Though it supports up to 64GB of RAM, the required 16GB DDR3 UDIMMs are prohibitively priced, making a Xeon E5 system notably cheaper.

Intel Xeon E3-1230 v5
A higher-end LGA 1151 Xeon E3 model. Four cores with hyperthreading and turbo boost. This is a typical workhorse and a very popular model. For a very versatile system, with capacity for several Plex streams, this is the go-to CPU.

Xeon E5-1620 (LGA2011-3)
This is the entry-level CPU for getting into the E5 platform, chosen primarily to get past the RAM capacity restrictions of the E3 series. Performance-wise, it is slightly more powerful than the E3-1230.

Very heavy duty
These beasts can handle whatever you can imagine throwing at them.

Xeon E5-1650 (LGA2011-3)
The Xeon E5-1650 is a true powerhouse and a popular six-core model. It is a lot faster than the E5-1620 and therefore merits a mention, even though it belongs to the same family.

Xeon E5-2xxx and E5-4xxx
An immense number of models exists to suit nearly all tastes. E5-2xxx CPUs support up to two sockets and E5-4xxx CPUs support up to four sockets; LRDIMM support is included. It is typically advised not to aim for the lower clock frequency models, since they are outperformed by the E5-1650. Unless there is a very specific use case, don't sacrifice clock speed for core count.


Cheers, Dice
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm not convinced there is a place to keep the X10 system recommendations in this document.
X10 still works, is still more broadly available (I can still get an X10SL7-F from Amazon, but they don't have the X11SSL-CF) and really isn't substantially inferior. Hell, the only reason I got an X11SSM-F instead of another X10SLM+-F was that I wanted to help the community by documenting it (and I wanted to try out using a cheap SSD as a boot device).

It might also end up being cheaper, if a good deal is to be found. The differences are laid out, so it's up to the reader to determine what makes most sense in their situation.

The following is a suggestion regarding future-proofing a system. (This text started out as a brief outro to the chassis section, but grew into something more important.)



The system categories mentioned in the chassis section aim to address the total lifespan of the system - that is, beyond the initial purchase, including future upgrades before the machine is too old for duty.
Some users can afford to build a system that will cover all their storage needs for the next ~5 years, until the hardware has expired - great. Then this post is not for you.
Others - to whom this topic is directed - might consider adding storage before the end of the system's initial lifespan.

Designing for the future is difficult, but highly rewarding if considered during the initial design phase. A system design can easily become over-optimized for its first form - or, conversely, highly capable of incorporating a substantial upgrade without the need to rebuild the entire system!
However, upgrading a FreeNAS system entails considerations on multiple levels and therefore needs some attention. That is the topic of this section.

We encourage every user to assess their future data growth: pick a period a few years ahead and guesstimate the yearly growth. Take into account the usable space as calculated by Bidule0hm's calculator (a lot of space is dedicated to redundancy and ZFS overhead!). For example, six 4TB drives in RAIDZ2 is not ~24TB of storage, but closer to 12.6TB according to the calculator. Use the latter figure for your data expansion plan.

Adding storage introduces challenges for your power supply, RAM capacity and SATA port capacity, as well as for cooling and physically fitting the drives into the case. Depending on your design choices, an upgrade can either be easily accommodated or ...rather impossible to implement, forcing significant expenditure to recover from poor planning during the design process.

Here follow some suggested parameters and an example of how to approach the assessment:

Let's pretend the initial system consists of a system SSD, a six-drive (4TB) RAIDZ2 vdev and 32GB of RAM, all connected to the motherboard's 8 SATA ports. This requires a chassis and power supply capable of handling 8 drives.
Now make an on-paper design of the upgraded system's "final form" - beyond any foreseeable capacity requirement during the life expectancy of the box.
In this case, we'd like to add another six-drive RAIDZ2 vdev to our pool, this time using 10TB drives.
A couple questions arise:
A) Can the case fit the total number of drives? -> Would I need to step up to a larger chassis category?
B) Additional SATA ports need to be added; in this case an 8-port HBA would be a cost-efficient way to attach the drives. -> Am I in the range where I'd be better off with a rackmount chassis and an expander backplane, rather than adding multiple HBAs?
C) Will the selected power supply be capable of handling the additional load? (Check the power supply sizing guide here.) -> Would I need to replace the PSU, or should I size it up from the start?
D) Is the initial amount of RAM enough to handle the increased space? Can RAM be upgraded to meet the requirements on the selected motherboard? -> Do I need to select another platform that supports sufficient amounts of RAM?

When these steps are assessed, you'll have a pretty good idea of which components will need to be replaced in a significant upgrade of the system.
It is during this phase that you can make informed decisions on how to minimize future waste, by designing the initial build to accommodate a life-extending upgrade later on - for a vastly improved total lifespan.
It's a good text, and I'd suggest you put it in a new resource, since you clearly put a lot of thought into it and it might be useful.

I feel it doesn't make sense in the hardware recommendations guide - mostly because I don't want it to become The FreeNAS Book (much like what pfSense has). The guide is massively long as it is, so I'm trying to keep tangents to a minimum.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The differences are laid out, so it's up to the reader to determine what makes most sense in their situation.
Check.
I feel it doesn't make sense in the hardware recommendations guide.
It's a good text, and I'd suggest you put it in a new resource, since you clearly put a lot of thought into it and it might be useful.
Thanks. That's a good idea.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Nice update. I agree, X10 motherboards should remain in the document. We should not alienate superseded hardware too quickly. If it exceeds the minimum requirements and works with today's software then it should remain. Once it fails to be a reasonable workhorse for the software, or the cost is very high, then it should be retired, or something similar. This would go for all hardware in the document, not just the motherboards.

Keep up the good work.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
One might ask: "Why not keep X9, too?"

Well, X9 is getting harder to find, but the big reason is that most CPUs (except for two Xeon E3 v2 models) have been EOL'ed and are thus hard to find.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
There are a lot of things you could put into that guide, but if you start listing everything it becomes a mess to manage. I am perfectly fine with X10 and X11 MBs, as well as the others you have listed. There was only one omission, AMD stuff. That is okay - not many people feel comfortable with that hardware. I'd still be running it if the board supported 64GB of RAM (for my ESXi server, of course), but no such luck.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There was only one omission, AMD stuff.
The main problem is that it's old. I'm really hoping that Zen is as awesome as AMD needs it to be, because Intel needs a fire lit under their asses.

At this point, however, I find it very hard to recommend Bulldozer iteration number 29 for anything, especially for a server.
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
Of course I see your ambition to keep the Hardware Guide brief and concise, but one additional (foot)note about consumer Atoms in the CPU section on page 6 might help to avoid confusion, stating something like:

With the advent of Silvermont/BayTrail in 2013 Intel rebranded consumer Atom SoCs as Pentium/Celeron Jxxxx (desktop variants) and Pentium/Celeron Nxxxx (mobile variants).
 