Feedback on NAS/Mini-server builds

neofusion

Contributor
Joined
Apr 2, 2022
Messages
159
I'm looking at getting a new system that will fill the function of NAS and mini-server in my home.
The hardware will replace an ageing Synology 10-drive setup that currently serves as little more than a storage device.

Build goals:​

My plan is to serve SMB shares with my media library, mostly consisting of movies stored on HDDs configured in a raidz2 pool (rough capacity sketch after this list). No need to transcode.
In addition to that, I want to serve a mirrored SSD pool as an iSCSI target to local computers (obviously not to more than one initiator at the same time).
Finally, I would like to host a few small VMs doing mostly lightweight work.
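As a rough sanity check on pool sizing, this is the kind of arithmetic I'm doing. The 12 TB and 16 TB drive sizes are just placeholders since I haven't settled on drives yet, and it ignores ZFS metadata, padding and slop space, so treat the numbers as ballpark only.

# Rough raidz2 usable-capacity estimate for the HDD pool.
# Drive sizes are placeholders (I'm undecided); this ignores ZFS metadata,
# padding and the slop reservation, so results are ballpark only.

TIB_PER_TB = 1000**4 / 1024**4      # marketing TB -> TiB

def raidz2_usable_tib(drives, tb_per_drive):
    data_drives = drives - 2        # raidz2 spends two drives' worth of space on parity
    return data_drives * tb_per_drive * TIB_PER_TB

for drives in (6, 8):
    for size in (12, 16):           # hypothetical drive sizes in TB
        print(f"{drives} x {size} TB raidz2 ~ {raidz2_usable_tib(drives, size):.1f} TiB usable")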

Caveats:​

The 6-8 SATA ports from the SoC will not be enough, so I will need a controller acting as an HBA.
I would prefer to use a well-supported Chelsio SFP+ PCI-E card.
The use of iSCSI and VMs suggests that I should avoid anaemic CPUs.
I am not sure if I will be running TrueNAS on bare metal or as a VM through something like ESXi.
With the above in mind, I value power efficiency and low noise when idle.


There are a lot of options but I have distilled it down to these picks.

Shared details​

Case: Fractal Design Define R5 with five 120-140 mm fans (Repurposed to server)
CPU fan: Noctua NH-D14 SE2011 (Repurposed to server if compatible)
RAM: 64 GB of QVL ECC RAM
PSU: Seasonic Focus GX 650
HBA: LSI 9201/9207/etc., flashed to IT mode
SFP+ adapter: Chelsio T520 variant, well supported with hardware offload
Spinning drives: 6-8 high-capacity drives, undecided on brand and size
Boot: Samsung SSD 850 PRO 128GB (Repurposed to server) or a SATADOM
SLOG: Intel Optane M10 SSD M.2 2280 32GB
L2ARC: I do not anticipate needing one

Option 1:​

Intel Xeon E-2236 (or E-2136, depending on availability)
Supermicro X11SCH-F

Pros:​

Has two PCI-E x8 (or longer) slots, fitting an HBA and an SFP+ card.
Should hopefully be able to scale up to handle the strain of 10GbE iSCSI and VM use while still offering good idle power draw when not busy.
Plenty (8) of onboard SATA ports, combined with two M.2 slots.

Cons:​

Does not appear to support ECC fully since multi-bit detection is not listed in the specs. Is 1-bit memory correction seen as adequate?

Option 2:​

AMD Ryzen 7 3700X (Repurposed to server)
ASRock Rack X570D4U

Pros:​

Has two PCI-E x8 (or longer) slots, fitting an HBA and an SFP+ card.
More powerful than option 1 in multi-threaded jobs, similar in single-threaded.
Plenty (8) of onboard SATA ports, combined with two M.2 slots.

Cons:​

Ryzen support in CORE and SCALE does not appear to be as mature as that of Intel. A common tip is to turn C-states off because systems lock up while stepping between power states. Kernel 5.15 might help, but the changelog does not suggest anything related to idle crashes.
Uncertain ECC support. Supposedly 1-bit corrections are handled, but error reporting is not.

Option 3:​

Intel Xeon E3-1240 v6
Supermicro X11SSM-F

Pros:​

Has two PCI-E x8 (or longer) slots, fitting an HBA and an SFP+ card.

Cons:​

Older and less powerful; I'm not sure it can handle higher iSCSI transfer rates, and it may use more power at idle. There are other CPU choices, but these are fairly pricy on eBay.
No M.2 slot for a SLOG.
Only 1-bit ECC support.

Option 4:​

Intel Xeon D-1518
Supermicro X10SDV-4C+-TP4F

Pros:​

Low power use.
Great ECC support.

Cons:​

Is it capable of handling 10GbE iSCSI and VMs?

Option 5:​

Intel Xeon D-1537
Supermicro X10SDV-7TP4F

Pros:​

Low power use.
Great ECC support.
Has an onboard LSI 2116 controller for plenty of SAS2 ports, but it might be buggy?

Cons:​

More powerful than the D-1518 but is it enough?
Pricy.

Option 6:​

Intel Atom C3958
Supermicro A2SDi-H-TP4F

Pros:​

Interesting board with very low power use despite plenty of cores.

Cons:​

Limited single-threaded performance, so probably not ideal for 10GbE transfers and similar.
Limited PCI-E, so I'm stuck with the onboard hardware for SAS/SATA and 10GbE.
Pricy.


Dilemma:​

I don't know what kind of hardware is needed to get 500-1000 MB/s transfer speeds over iSCSI (some back-of-envelope numbers follow this list).
Am I attaching too much value to having a Chelsio NIC?
Likewise, I am not sure how impactful multi-bit ECC support is.
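For my own sanity, here is the rough arithmetic I'm working from for the network side. The ~8% protocol overhead is just an assumed figure for Ethernet/IP/TCP/iSCSI framing, not a measurement.

# Back-of-envelope numbers for the iSCSI goal: what 10GbE can carry vs. what I'm asking for.
# The ~8% overhead is an assumed allowance for Ethernet/IP/TCP/iSCSI framing, not a measured value.

link_gbps = 10
raw_mb_per_s = link_gbps * 1000 / 8            # 1250 MB/s on the wire
overhead = 0.08                                # assumed framing/protocol overhead
payload_mb_per_s = raw_mb_per_s * (1 - overhead)

print(f"10GbE payload ceiling ~ {payload_mb_per_s:.0f} MB/s")
for target in (500, 1000):
    print(f"{target} MB/s target is {target / payload_mb_per_s:.0%} of that ceiling")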

Any guidance would be most welcome.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'm looking at getting a new system that will fill the function of NAS and mini-server in my home.
For a home build, I would say that cost, noise, and efficiency are a lot more important than reliability or robustness. In general, I say that because most home users can down their server for a few days and be fine. Not so much in the enterprise world.

With that in mind, buying brand new hardware becomes more of a luxury purchase, since used last-gen equipment would be more than enough for your likely use case. And that means it's really a lot more about what you want, than what your use case is.

Don't get too caught up on power use. Anything fairly recent will idle down to very low power. As a simple example, my old gaming PC w/ an i7-2700k and GTX580 idled around 400W, and spiked to 600W. My new gaming PC w/ an i7-11700k and RTX3080 idles around 150W and spikes to 700W.

Low-power stuff is really only needed when you're aiming for under 100W idle.

Does not appear to support ECC fully since multi-bit detection is not listed in the specs. Is 1-bit memory correction seen as adequate?
ECC generally is 1-bit memory error detection/correction. Most systems are capable of multi-bit detection (pretty common). If an Intel Xeon processor and Supermicro server-class motherboard in 2022 did not fully support ECC, I'd be seriously amazed. ECC is old-hat technology at this point, and extremely standard in this class of hardware.

However, multi-bit correction is a much harder problem, and only just now coming to market. This requires hyper-specialized memory, motherboards, and processors. Generally, you're only going to get this on pre-built systems with multiple TB of RAM; to my knowledge, it hasn't made it to the DIY space yet.
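If it helps to see the "correct one bit, detect two" behavior concretely, here is a toy sketch using a Hamming(7,4) code plus an overall parity bit. Real DDR4 ECC protects 64-bit words with 8 check bits rather than 4 data bits with 4, but the single-correct/double-detect behavior is the same idea.

# Toy SECDED (single-error-correct, double-error-detect) demo on 4 data bits,
# using a Hamming(7,4) code plus one overall parity bit. This is an illustration,
# not how DRAM ECC is literally laid out.

def encode(d):                       # d = list of 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                # Hamming parity bits (positions 1, 2, 4)
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7
    word.append(sum(word) % 2)       # overall parity for double-error detection
    return word

def decode(word):
    w = word[:7]
    overall = sum(word) % 2          # 0 if overall parity still holds
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]   # syndrome over positions 1,3,5,7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]   # syndrome over positions 2,3,6,7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]   # syndrome over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # position of a single flipped bit
    if syndrome and overall:         # single-bit error: fix it silently
        w[syndrome - 1] ^= 1
        return [w[2], w[4], w[5], w[6]], "corrected 1-bit error"
    if syndrome and not overall:     # two bits flipped: detected, not correctable
        return None, "uncorrectable multi-bit error"
    return [w[2], w[4], w[5], w[6]], "no error" if not overall else "parity bit error"

data = [1, 0, 1, 1]
cw = encode(data)
cw[5] ^= 1                           # flip one bit -> corrected
print(decode(cw))
cw[2] ^= 1                           # flip a second bit -> detected only
print(decode(cw))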

It sounds like you're overthinking this one.

SLOG: Intel Optane M10 SSD M.2 2280 32GB
If the main use of your HDDs is for media, then you probably don't need a SLOG. A SLOG only helps with (synchronous) writes, and I can't imagine you're writing to your HDD pool frequently enough for a SLOG to help.

A mirrored SSD pool will likely be plenty fast enough that a SLOG won't particularly help.

Ryzen support in CORE and SCALE does not appear to be as mature as that of Intel.
100% yes. Based on your budget and goals here, I'd stick to Intel.

I don't know what kind of hardware is needed to get 500-1000 MB/s transfer speeds over iSCSI.
A full 1GB/s of transfer speed is difficult, because it requires the underlying storage media to support that as well. And that pretty much requires SSDs. If you have decent enough SSDs for your mirrored vdev, then either option 1 or 3 should be able to get you there.

At the same time, there are so many other things that you need in order to actually take advantage of that kind of speed. Whatever client is on the other end of the iSCSI connection must be able to suck in or spit out data that quickly, and that also requires pretty specialized hardware.
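To put rough numbers on the storage side: a two-way mirror of ordinary SATA SSDs tops out around the figures below. The ~550 MB/s per drive is an assumed typical SATA III number, not a benchmark of any particular model.

# Rough ceiling for a 2-way SATA SSD mirror (assumed ~550 MB/s per drive, typical SATA III).
# ZFS mirrors can spread reads across both sides but must write everything to both,
# so reads scale with mirror width and writes do not. Estimates, not benchmarks.

per_ssd_mb_per_s = 550       # assumed sequential throughput of one SATA SSD
mirror_width = 2

read_ceiling = per_ssd_mb_per_s * mirror_width   # best case, large parallel reads
write_ceiling = per_ssd_mb_per_s                 # limited to a single drive's speed

print(f"mirror read ceiling  ~ {read_ceiling} MB/s")
print(f"mirror write ceiling ~ {write_ceiling} MB/s")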
 

neofusion

Contributor
Joined
Apr 2, 2022
Messages
159
For a home build, I would say that cost, noise, and efficiency are a lot more important than reliability or robustness. In general, I say that because most home users can down their server for a few days and be fine. Not so much in the enterprise world.

With that in mind, buying brand new hardware becomes more of a luxury purchase, since used last-gen equipment would be more than enough for your likely use case. And that means it's really a lot more about what you want, than what your use case is.

Don't get too caught up on power use. Anything fairly recent will idle down to very low power. As a simple example, my old gaming PC w/ an i7-2700k and GTX580 idled around 400W, and spiked to 600W. My new gaming PC w/ an i7-11700k and RTX3080 idles around 150W and spikes to 700W.

Low-power stuff is really only needed when you're aiming for under 100W idle.
Thank you for replying!

Cost is certainly a factor for me, and as you say, another important goal is to keep it cool and hopefully not too loud.
I am trying to avoid new but poorly optimised parts, as well as specific older parts that simply have a high idle power floor.

It will be interesting to see how my choice of discrete SAS and SFP+ adapters will affect my goal of roughly 100-120 W at idle.
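For what it's worth, this is the kind of rough budget I'm sketching against that goal. Every figure is a guessed ballpark from spec sheets and reviews rather than a measurement, and it ignores PSU conversion losses.

# Very rough idle-power budget against the 100-120 W goal.
# Every number is an assumed ballpark figure, not a measurement; 8 HDDs is the worst case.

idle_watts = {
    "board + CPU + RAM (idle)": 35,
    "LSI HBA":                  10,   # these cards don't idle down much
    "Chelsio SFP+ NIC":          8,
    "8 x HDD (idle, spinning)":  8 * 5,
    "SSDs + boot drive":         3,
    "fans":                      5,
}

total = sum(idle_watts.values())
for part, w in idle_watts.items():
    print(f"{part:28s} {w:3d} W")
print(f"{'estimated idle total':28s} {total:3d} W  (target: 100-120 W)")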

ECC generally is 1-bit memory error detection/correction. Most systems are capable of multi-bit detection (pretty common). If an Intel Xeon processor and Supermicro server-class motherboard in 2022 did not fully support ECC, I'd be seriously amazed. ECC is old-hat technology at this point, and extremely standard in this class of hardware.

However, multi-bit correction is a much harder problem, and only just now coming to market. This requires hyper-specialized memory, motherboards, and processors. Generally, you're only going to get this on pre-built systems with multiple TB of RAM; to my knowledge, it hasn't made it to the DIY space yet.

It sounds like you're overthinking this one.
I suspected I might be, since the X11SCH-F is a board that shows up in the Hardware guide but appeared to clash with the "ECC is strongly recommended" mantra.

Glad to know that 1-bit correction is the most important aspect of ECC for my use-case.

If the main use of your HDDs is for media, then you probably don't need a SLOG. A SLOG only helps with (synchronous) writes, and I can't imagine you're writing to your HDD pool frequently enough for a SLOG to help.

A mirrored SSD pool will likely be plenty fast enough that a SLOG won't particularly help.
I neglected to mention that a primary client is Mac-based.
I will try it without a SLOG first and see.

100% yes. Based on your budget and goals here, I'd stick to Intel.
I must confess I really wanted to get something AMD-based; for a while I was even sizing up a low-end EPYC build.

In the end I've discarded that idea, partly due to high idle power consumption and slightly higher initial setup costs. That, combined with the Ryzen instabilities at idle, turned me off AMD, for now at least.

A full 1GB/s of transfer speed is difficult, because it requires the underlying storage media to support that as well. And that pretty much requires SSDs. If you have decent enough SSDs for your mirrored vdev, then either option 1 or 3 should be able to get you there.

At the same time, there are so many other things that you need in order to actually take advantage of that kind of speed. Whatever client is on the other end of the iSCSI connection must be able to suck in or spit out data that quickly, and that also requires pretty specialized hardware.
I am not expecting line-rate SFP+, but over 500 MB/s for my common usage scenarios would be most welcome. No doubt I will need to tinker to get there.
 