First TrueNAS build - please sanity check part list

kqmaverick

Cadet
Joined
Aug 27, 2022
Messages
4
Getting ready to put together my first TrueNAS Core build and would like a sanity check on my hardware for any red flags. Trying to stay as low budget as possible but don't want to make any stupid decisions.

Expected Usage:
Plex (no real time transcoding)
Nextcloud
qBittorrent
AdGuard Home

Build:
CPU: Intel Xeon E-2124G 3.4 GHz Quad-Core Processor
CPU Cooler: Noctua NH-U12S redux 70.75 CFM CPU Cooler
Motherboard: Gigabyte C246-WU4 ATX LGA1151 Motherboard
Memory: Kingston ValueRAM 32 GB (2 x 16 GB) DDR4-2133 CL15 Memory ECC / Unbuffered
Storage: Intel 530 80 GB M.2-2280 Solid State Drive
Storage: 8 x Seagate Exos X18 18 TB 3.5" 7200RPM Internal Hard Drive
Case: Fractal Design Meshify 2 XL ATX Full Tower Case
Power Supply: Corsair HX750 Platinum 750 W 80+ Platinum Certified Fully Modular ATX Power Supply

Network Adapter, not sure which way to go. The Asus is a huge discount but from what I could find on here it's not 100% natively supported. If it's workable I'd like to save the $300 over the Intel adapter.
Wired Network Adapter: Asus XG-C100C 10 Gb/s Ethernet PCIe x4 Network Adapter
Wired Network Adapter: Intel X550-T2 2 x 10 Gb/s Ethernet PCIe x4 Network Adapter
 

homer27081990

Patron
Joined
Aug 9, 2022
Messages
321

Please read the ZFS primer before buying anything.

(Warning: supplied links are reference examples, not proposed hardware)

GENERAL​

At that many (and that expensive) drives, you may want to consider a rack-mount case with redundant 1U PSUs and hot-pluggable caddy drive bays, like this. There are a lot of cheaper options if you dig a little.
Your build also doesn't have a separate SATA/SAS controller. With your setup, a controller is a must.
Your motherboard seems really expensive (and power-hungry, maybe?) for a server setup. It is a workstation mobo, not a server one.
The reasons I am against this are not because of stubbornness or a lack of imagination. Let me elaborate:

MAIN ISSUE OF THAT MOTHERBOARD AND SYSTEM LAYOUT/TOPOLOGY

Everything sits on a single point of failure

The idea here (storage server use-case) is to spread cost as much as you can and make sure:​

A: If/when your motherboard fails:​

1. It doesn't take down any drives with it (consumer and prosumer motherboards can have silent failures that trash other things on them)
2. It doesn't take a fortune to replace it
3. It doesn't have the potential to cause data loss (a crisped chipset could do a lot of crazy things to a ZFS array) (way rarer failure, though)
4. It doesn't cause the whole array to lose power (an ATX PSU controlled by the motherboard will unceremoniously cut power everywhere)
5. There is a separate system on the motherboard that can give a detailed report as to what is up and what is down and why
6. … etc.

B: When a drive fails catastrophically:

1. It doesn't have the potential to take any critical systems down with it (an overvoltage to the chipset, for example, or worse, the CPU)
2. The system will not halt because of a "hardware change panic" (e.g., if you remove internal storage from a running PC, it can BSOD, even if the drive is not in use)
3. You can immediately replace it and start resilvering
4. The system will not hang, leaving the failed drive powered on and worsening the problem. Also, in such a case, the rest of the drives keep "working" on an unresponsive system, connected to a freaked-out controller. That's a no-no.
5. … etc.

C: When the (or a) storage controller fails:

1. It won't take other critical systems with it
2. It won't cause a hard shutdown
3. It won't have the potential to misidentify any drives to the OS and cause any miswrites (rare)
4. It doesn't have the potential to cause a cold reboot loop, and "spread" electrical damage to all connected systems (CPU, chipset, drives, motherboard...)
5. … etc.

D: When a PSU fails

1. It isn't the only PSU
2. It doesn't have a direct electrical path (through common voltage regulators, for example), through the same rail, to CPU, controller, drives...
3. It cannot overvolt
4. It cannot undervolt the rails (supplied power) and keep working with an internal short
5. It will not supply power with noise (capacitor failure)
6. It will not supply unfiltered power
7. It will not directly translate a line overvoltage (critical or not) into a corresponding overvoltage on the supplied rails
8. It can never, ever, pass line current to the transformed rails, even for fractions of a second
9. It has no faulty grounding on the device side (been there... tried to touch the tower to unscrew the side panel. I threw the whole thing away after that)

HOW A SERVER SYSTEM LAYOUT ADDRESSES THESE PROBLEMS​

1. There are redundancies​

  • A PSU that fails even slightly shuts down by default, with another taking over
  • SAS drives can connect to different controllers at the same time for redundancy (enterprise grade stuff, but can be done DIY)
  • The system's real power control is separate from the OS in most server-grade boards

2. There is a separation of concerns​

  • Most components are made to do only what they are supposed to be doing by definition
  • Things are not "crammed" together
  • One thing breaking has little chance of breaking other things
  • One thing breaking has little chance of stopping critical systems
  • Common electrical power pathways are minimized
  • (Enterprise-grade) Power delivery from the PSUs to the system is handled by an intermediate power regulator, greatly reducing the chance of a "PSU frenzy" type catastrophe

3. Server-grade stuff is designed for server use​

  • Hot-pluggable drives (and hot-removable, after being exported from ZFS, of course)
  • Fully functional controllers that are designed to handle failing drives routinely
  • Management subsystem, separate from the CPU, that can monitor the health of the system and is IP accessible
  • Enormous capacity for topological changes
  • No server-grade components "play hero". While consumer- and prosumer-grade components (disks, motherboards, controllers, etc.) will try to keep working as long as possible while failing, with little to no warning, until they fail spectacularly, server-grade components will report any failure in detail to both the management subsystem (IPMI) and the OS, and will take the necessary steps on their own (from marking channels as "bad" on a SAS controller to powering off a PSU upon detection of an inconsistency)

4. Minimization of operating costs​

  • Server-grade components consume way less power than consumer and prosumer ones. Especially prosumer ones.
  • There are no extraneous or unneeded frills on server-grade components, like, say, "the best audio chip with integrated amp ever!" or eSATA ports or a ton of USB controllers or integrated graphics that cannot be fully deactivated or... anything else consuming power without reason
  • Cooling is efficient and effective
  • Counterintuitively, server grade components require way less maintenance (as cost) than standard ones
  • Your time is money, so, spending a lot less time fiddling with the hardware means more time configuring your services

5. Minimization of maintenance costs​

  • Because of the separation of concerns, individual failing components cost way less to replace
  • You won't ever have to throw away (or be forced to re-buy) a whole host of subsystems (that motherboard) because of one failing subsystem. Imagine bad SATA ports/channels on the motherboard, for instance. Or a bad controller. Do you keep trusting that board? Or do you turn it into a home cinema appliance? It is not by accident that most enterprise grade servers do not have SATA ports integrated on their motherboards.
  • Because of the separation of concerns, even the integrated systems on a server-grade mainboard wear out much more slowly, because they are not used as hard (e.g. the complementary SATA controller for the boot drive, and the USB ports)
  • You can get great performance even with baseline components.
  • You can get away with lower-priced intermediate components because, if a component fails, replacing it becomes trivial
  • Most failures and problems you will face have already been seen by others on very similar systems. Time is money, remember?
  • Less downtime on failures
  • Way more ways to deal with worst case scenarios
  • You can always get a spare lower-quality controller ($100?) just in case. You cannot get a spare workstation motherboard like that! And, if you can, maybe just buy a premade server :wink:

6. Greater trust in your system, more breathing room to enjoy using and configuring it!​

THE CASE AGAINST A SERVER GRADE SYSTEM​

  • You just want to learn how to set up a server, and might turn it into a CAD workstation tomorrow...
  • You are thinking about turning the system into a combined server/PC (with a Windows VM and a GPU, perhaps?)
  • You will (for sure) constantly be changing hardware topologies and trying new things. "Data doesn't matter that much to me, learning does."
  • You are expecting to come into a lot of money and only want this temporarily. It will be turned into a PC with a Cort i9 extremofile and an RXX 9040 in a few months! (In that case, maybe don't rough that motherboard up by using it to run an almost 200 TB storage server; keep it in the box? It would be a pity if it suffered any wear.) (Fake product names are deliberate :smile:)
  • Personal reasons / preferences
  • (For whatever reason) You feel that server-grade systems are either beyond what you can handle, or that you need training to handle such equipment (not actually the case for anyone, though; it's a perception created by the fact that you do not interact with such systems every day. A makeshift PC-turned-server takes a lot more skill and knowledge to keep operational than a purpose-made server)
  • You think that in the end you will spend more money on a server-grade system than on a standard one (only if you try to buy a vendor-made one; they currently start from $3,500 at least, I think? Those who know, please chime in)
  • You don't have time to learn new things, you know PCs already, so...
  • You are not buying it yourself/alone and the others have strong, rooted opinions about the subject
  • You just can't have that thing making all that noise in the house! (It is mostly the ready-made vendor servers destined for datacenters that make that much noise; you can build a rackmount server yourself that has silent fans)
  • You don't have the space for a rack (in that case, there are lots of great tower cases, like my The Tower snow edition, that take however many drives you want and even have toolless HDD trays)

FINAL THOUGHTS​

1. Don't rush, search all options and paths. This is a significant investment of money, time and skill
2. Don't hurry to buy all the drives up-front, unless you already know for sure you need them. After all, HDDs get cheaper over time. It is better to build a great system and expand your array later than to buy all the drives and place them in a mediocre system that, in a few months' time, will become a black hole for your time, money and calm.
3. There is no totally right path, but, some paths are way better than others.
4. Keep in mind: a server build takes planning. The fun is not in building it (like building a PC, for instance) but in using it without remembering it even exists. Seriously, building a server is work.
5. Don't be confined to mainstream suppliers, search server-specific suppliers and compare prices from many of them.
6. Don't just examine your use case. The main factor for how much of a server you need (in storage) is not the use case but the storage medium. Your choice of drives determines everything else. 144 TB of storage in ZFS, for example, requires more RAM. Way more RAM. Any experienced ZFS users reading, please advise on this (I can offer no relevant experience). You also need (not would like, need) 3-4 SSDs for caching, ZIL and SLOG.
Please read the ZFS primer before buying anything.

As a beginner in the server space myself, have fun! I know I do.
 
Last edited:

kqmaverick

Cadet
Joined
Aug 27, 2022
Messages
4
homer27081990, thank you for the detailed write-up. I have looked at some enterprise options, but what I found always carried a much higher price premium, even used. The system I listed above, without the spinning disks, I can put together for $1283.39.

I have looked at the cache drives you mentioned, but from what I read both SLOG and L2ARC would have little to no performance benefit for my use case (mainly Plex). For the HBA, I know I will need one if I ever decide to expand and add another vdev, as my motherboard's SATA ports will all be used, but I was not sure if I needed that expense today. The ability to hot swap is not important to me, as this system is for my personal use and I am fine shutting down and swapping out a disk if needed. I did read to never use a PCIe SATA expansion card when researching my build, but I never saw the use of the motherboard's SATA ports described as inherently dangerous, notwithstanding the inability to hot swap and the loss of redundancy; again, I am fine shutting down and replacing any failed components.

My only concerns with my build are data redundancy (I plan to go RAIDZ2) and that videos streamed via Plex don't buffer. I also wanted the option to add another vdev in the future, so I picked a case that supports 16 drives and would get an HBA at that time. I have been looking at an LSI 9300-16i from Art of the Server on eBay and could buy it on day one for the initial setup if I should avoid the onboard SATA controller, but I do not see the need for PSU/controller failover capability with my use case. Again, if I am looking at this wrong let me know, but my thinking right now is that if my PSU fails and I need to keep the system shut down for a few days while I get a new one and install it, I am fine with that.
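For reference, here is roughly the layout I have in mind (initial RAIDZ2 plus the later expansion) in plain zpool terms; the TrueNAS GUI would do the equivalent for me, and the pool/device names are just placeholders:

Code:
# initial pool: one 8-wide RAIDZ2 vdev on the onboard SATA ports
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
# possible future expansion: a second 8-wide RAIDZ2 vdev hanging off the HBA
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15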
 

homer27081990

Patron
Joined
Aug 9, 2022
Messages
321
It's not about the use case. It's about ZFS itself and how much spinning those drives will have to do in order to cope. Please read the Primer. 8 × 18 TB drives, with the notion of upgrading to 16, is incompatible with your build, mainly because of RAM. Also, the general wisdom to not use PCIe cards applies to PC and (sometimes) workstation hardware, not server hardware. We are not talking about the same "cards" here.
Also, some server use cases avoid placing an intermediary between the drives and the CPU, specifically those worried about latency and transfer rates.
The point about the failing PSU is not whether you will experience downtime. The point is that an uncontrolled loss of power on an array like the one you are planning to build, with the possibility of a drive suffering a failure at the same time, is quite dangerous even for ZFS arrays.
Anyway, I rest my case, but as a last thought: you will need careful planning to handle the cooling of sixteen 18 TB 7200 rpm drives, at roughly 10 W apiece, in that case.
Please make sure you have downloaded and read this, from page 10. If you are not going to have even close to the generally accepted 'goldilocks' ratio of 1:1000 for RAM, you need to have SSDs for ZIL, cache and SLOG. I don't know how much you would need to tune this system without any acceleration if you expect it to perform.
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
CPU: Intel Xeon E-2124G 3.4 GHz Quad-Core Processor
CPU Cooler: Noctua NH-U12S redux 70.75 CFM CPU Cooler
Motherboard: Gigabyte C246-WU4 ATX LGA1151 Motherboard
Memory: Kingston ValueRAM 32 GB (2 x 16 GB) DDR4-2133 CL15 Memory ECC / Unbuffered
Storage: Intel 530 80 GB M.2-2280 Solid State Drive
Storage: 8 x Seagate Exos X18 18 TB 3.5" 7200RPM Internal Hard Drive
Case: Fractal Design Meshify 2 XL ATX Full Tower Case
Power Supply: Corsair HX750 Platinum 750 W 80+ Platinum Certified Fully Modular ATX Power Supply

Is there a particular reason you chose a "workstation" style motherboard? You didn't mention any GPU-leveraged transcoding as a requirement, so a board with a greater number of narrower PCIe slots (or perhaps just ones that aren't shared with each other or the M.2 slot) might be a better solution.

I see KVR21N15D8/16 on the motherboard QVL as well so the RAM should be fine. Good call on going with the largest sticks available per slot, that leaves you room if you choose to add more down the road.

There might be a potential hitch with the choice of the Intel 530 as a boot device though. See below re: the SATA ports and M.2 sharing. You may want to nab a cheap NVMe one instead.

Looking at the layout of the Fractal case there with it fully loaded with 16 drives, I'd also want you to ensure you've got all four of the front fan spots populated. The image seems to imply that there's good spacing between the drives so you might not need high-pressure (and therefore higher-noise) fans up top, but the lowest four drives are much cozier. Keep an eye on them for sure.

[Attached thumbnail: fractal_meshify_2_xl_fully_loaded.jpg]
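If you want an easy way to keep tabs on them once it's built, a quick spot-check from the shell looks something like this (device names will vary on your system, so adjust the glob accordingly):

Code:
# print the SMART temperature attribute for each spinning disk
for d in /dev/ada?; do
  echo "== $d =="
  smartctl -A "$d" | grep -i temperature
done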

I have looked at the cache drives you mentioned, but from what I read both SLOG and L2ARC would have little to no performance benefit for my use case (mainly Plex).

Correct. None of your workloads will be sending synchronous writes (SLOG) and it's unlikely you're going to have a set of "hot data" for L2ARC outside of the NextCloud files - unless you expect that the same show will be watched and re-watched via Plex over and over. (I'll admit this is a possibility if you have young children though.) You'll probably get the most benefit from using recordsize=1M on the Plex dataset and tweaking the ZFS speculative prefetcher - try adjusting the vfs.zfs.zfetch.max_distance tunable to something beyond the default 8M, make it 32M or even 64M - this would likely benefit things like multiple simultaneous Plex streams hitting the spinning disks.
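For reference, from the shell that would look something like the following; the dataset name is only an example, the prefetch value is in bytes, and to make the sysctl persist you would add it as a "sysctl" type tunable under System > Tunables:

Code:
# use 1M records for the large, sequentially-read media files
zfs set recordsize=1M tank/media
# raise the speculative prefetch distance from the default 8M to 32M
sysctl vfs.zfs.zfetch.max_distance=33554432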

I did read to never use a PCIe SATA expansion card when researching my build, but I never saw the use of the motherboard's SATA ports described as inherently dangerous, notwithstanding the inability to hot swap and the loss of redundancy; again, I am fine shutting down and replacing any failed components.

The motherboard SATA ports coming off the Intel C246 chipset are fine - I would probably caution against using the "GSATA3" labeled ports for your data pool as they come off of a different chipset (ASMedia ASM1061) - while it's not a bad chipset per se, it's not as good as the Intel. Better to eliminate that as a potential pain point.

Also noted is that on that board the M.2 "A" port shares SATA wiring with the SATA3_0 port - so using an NVMe M.2 device to boot from (or an M.2 to SATA / M.2 to USB adapter) would be preferable here.
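If you want to double-check which controller each drive actually ends up on once everything is cabled, the CAM device list shows the mapping (exact output layout differs a bit between releases):

Code:
# -v lists each bus with its driver (ahcich = Intel AHCI ports) and the drives attached to it
camcontrol devlist -v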

It's not about the use case.
On the contrary, it's always about the use case; a system intended to share big media files via Plex will have a drastically different set of specifications than one that needs to handle low-latency I/O from hypervisors.

You also need (not would like, need) 3-4 SSDs for caching, ZIL and SLOG.
If you are not going to have even close to the generally accepted 'goldilocks' ratio of 1:1000 for RAM, you need to have SSDs for ZIL, cache and SLOG.

@kqmaverick is unlikely to be able to derive any benefit from "ZIL" or SLOG as I don't expect any of the use cases to be requesting synchronous writes. L2ARC might serve some purpose if it was constrained to the Nextcloud dataset alone, and/or allowed to hold any metadata overflow from the others, but attempting to use SSDs to cache the data portion of a Plex workload is a recipe for unnecessarily burning through P/E cycles.
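If an L2ARC device does get added later, that kind of constraint is expressed per dataset; a rough sketch, with placeholder dataset names:

Code:
# keep the bulk media data out of L2ARC entirely
zfs set secondarycache=none tank/media
# cache data and metadata for the Nextcloud dataset
zfs set secondarycache=all tank/nextcloud
# metadata-only caching for everything else (inherited by child datasets)
zfs set secondarycache=metadata tank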
 

homer27081990

Patron
Joined
Aug 9, 2022
Messages
321
Hello! It seems I need to expand my reading beyond the "safe" literature (which perhaps examines use cases intended for tens of users). Only one question (because my system has similar specifications, apart from storage volume: 64 GB of RAM and 12 cores across 2 sockets): am I covered if more people start using the system down the road, or if I need to run additional services on it (or both, most likely)? At what point (or in which case) will ZFS start "choking" the hardware?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Hello! It seems I need to expand my reading beyond the "safe" literature (which perhaps examines use cases intended for tens of users). Only one question (because my system has similar specifications, apart from storage volume: 64 GB of RAM and 12 cores across 2 sockets): am I covered if more people start using the system down the road, or if I need to run additional services on it (or both, most likely)? At what point (or in which case) will ZFS start "choking" the hardware?
If this is a home server you should be OK; you have both the RAM and the CPU to handle quite a few users.
Network speed might become an issue if you are using Cat 5e, though.
 
Last edited:

homer27081990

Patron
Joined
Aug 9, 2022
Messages
321
If this is a home server you should be OK; you have both the RAM and the CPU to handle quite a few users.
Network speed might become an issue if you are using Cat 5e, though.
Thanks! Network speed is not the issue (if I understand what you mean correctly), because I am on a 1 Gbit network and have the capacity to upgrade (server to switch) to 10 Gbit. On the WAN side, I get 100 Mbit for the moment, but 1 Gbit fiber is 1 to 2 years away.

What worries me is what happens when you search for a file (my server holds a ~300K-file medical datastore and is connected via IPsec site-to-site to the office/endoscopy clinic) at the same time someone is uploading to Nextcloud and someone else is watching a movie on Plex.

From what I understand about ZFS (as per the documentation), the amount of RAM you need scales with the size of your pool, up to a certain degree (and we are not even talking about deduplication). My question is: at what point does RAM become an issue?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Looking at the layout of the Fractal case there with it fully loaded with 16 drives, I'd also want you to ensure you've got all four of the front fan spots populated. The image seems to imply that there's good spacing between the drives so you might not need high-pressure (and therefore higher-noise) fans up top, but the lowest four drives are much cozier. Keep an eye on them for sure.
The challenge with Fractal's cases is the drive mounts in combination with their positioning.

The bottom of the drive is covered, which in itself is similar to usual rack-mount cases. But since the drives are mounted such that the airflow comes from the side (and not the front, like in normal server cases), there is less airflow, roughly half of what the drives would get in a rack-mount case.

I like Fractal's cases, but if you are after a silent multi-disk server case, I would look somewhere else. Depending on personal taste for HDD temperature, the OP will likely need high-pressure fans. For "reference", to keep my 8 disks under 40 degrees Celsius in the basement, I run 2 Noctua "NF-A14 industrialPPC -3000 PWM" and they are loud as hell. My old 1U Supermicro rack-mount servers (X9 generation) are much quieter under low to medium load.

Again, this is not to say that the Fractal case is a bad choice. But cooling that many drives, when they are packed so densely, is a challenge.

Personally, I have been looking for an alternative on and off, but never really found one. So if someone can recommend a sort-of quiet case for 8 drives, I would be thrilled :smile: .
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Thanks! Network speed is not the issue (if I understand what you mean correctly), because I am on a 1 Gbit network and have the capacity to upgrade (server to switch) to 10 Gbit. On the WAN side, I get 100 Mbit for the moment, but 1 Gbit fiber is 1 to 2 years away.

What worries me is what happens when you search for a file (my server holds a ~300K-file medical datastore and is connected via IPsec site-to-site to the office/endoscopy clinic) at the same time someone is uploading to Nextcloud and someone else is watching a movie on Plex.

From what I understand about ZFS (as per the documentation), the amount of RAM you need scales with the size of your pool, up to a certain degree (and we are not even talking about deduplication). My question is: at what point does RAM become an issue?
I was talking about your local network (LAN) speed. If you have more than one user simultaneously accessing the NAS, you will quickly saturate your cables (the ones directly connected to your NAS) if you are using 1 Gbit hardware. Imho that 10 Gbit NAS-to-switch upgrade is mandatory if you want maximum performance regardless of the number of simultaneous requests.

In ZFS more RAM is always good, but there is no "right" ratio. Iirc in the ZFS intro or in the hardware guide there is a suggested ratio. I believe 64GB to be plenty for a standard home server.
TrueNAS minimum is 16GB.
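If you ever want to check whether RAM is actually the limiting factor, the ARC statistics are the thing to watch. From the shell, something along these lines (command names can differ slightly between releases):

Code:
# summary of ARC size, target and hit/miss ratios (arc_summary.py on older releases)
arc_summary | head -40
# or query the raw counters directly
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses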
 
Last edited:

awasb

Patron
Joined
Jan 11, 2021
Messages
415
[...]

Personally, I have been looking for an alternative on and off, but never really found one. So if someone can recommend a sort-of quiet case for 8 drives, I would be thrilled :smile: .

Sorry for the off-topic posting, but you asked for it. :wink:

For 3.5" (up to 8 and eventually 10, but those extra 2 wouldn't be hit by direct air flow): Fractal Design R5.

Had that case for 4 years. It was very nice, quite big though. But I switched to 2.5", since the power draw was way too high for my applications (mainly 24/7 backup).

For 2.5" (up to 10 and eventually 16, but those extra 6 wouldn't be hit by direct air flow): LianLi PC-Q25 (or M25 for more PCI slots).

Both are virtually silent (I know, "silence", sound and frequencies are highly subjective categories). For me it was just the drives humming, with the case right beneath my desk.

After experimenting with a SilverStone CS-280 (in my opinion not that good for spinning rust due to bad air flow, but excellent for all flash) I am running the Q25 here @home and the 10 2.5" drives within barely hit 38 degrees Celsius. Base temp idle is 32-34 degrees Celsius. Absolute peak tests - replicating using plzip compression while scrubbing while "smashing" the NAS with concurrent backups via SMB and NFS - raised the disk temps to 43-45 degrees Celsius in hot summer. (They drop back to 36-38 once the scrub is done.)
140 mm front fan @ 800 rpm, 120 mm top fan @ 700 rpm, 92 mm CPU fan @ 1000 rpm, 92 mm PSU exhaust fan in auto mode (it never audibly spins up).

Beware: no matter what case I used, I never kept the factory fans and always switched to be quiet! Silent Wings or Noctuas.
 
Last edited:

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hello! It seems I need to expand my reading beyond the "safe" literature (which perhaps examines use cases intended for tens of users). Only one question (because my system has similar specifications, apart from storage volume: 64 GB of RAM and 12 cores across 2 sockets): am I covered if more people start using the system down the road, or if I need to run additional services on it (or both, most likely)? At what point (or in which case) will ZFS start "choking" the hardware?

Please start a new thread describing your use case/hardware so that we avoid hijacking this one from the OP.

Network Adapter, not sure which way to go. The Asus is a huge discount but from what I could find on here it's not 100% natively supported. If it's workable I'd like to save the $300 over the Intel adapter.
Wired Network Adapter: Asus XG-C100C 10 Gb/s Ethernet PCIe x4 Network Adapter
Wired Network Adapter: Intel X550-T2 2 x 10 Gb/s Ethernet PCIe x4 Network Adapter

This appears to have an early/experimental driver included, but it isn't loaded by default. Allegedly all that's necessary is to go under System - Tunables, and add a "Loader" type tunable named if_atlantic_load with a value of YES. Reboot the system and it should work.
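For reference, that GUI entry amounts to the loader.conf line below; the GUI tunable is the supported way to set it on TrueNAS, since manual loader.conf edits can be overwritten:

Code:
# boot-time equivalent of the "Loader" tunable described above
if_atlantic_load="YES"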

That being said, look for used Chelsio cards as well. Used Intel cards are a bit of a minefield as they're often counterfeited, but buying an OEM branded one (Cisco/HP/Dell/etc) as a "used pull" from a datacenter liquidator is probably a safe bet and will save a fair bit of money.
 