Hardware Validation for My First TrueNAS Build: Aiming for a Compact, Quiet, High-Capacity, Reliable System

j1n37

Cadet
Joined
Apr 10, 2023
Messages
7
Hi TrueNAS Community,

I am planning a new TrueNAS build and would appreciate any feedback or suggestions regarding my current component selection. My primary goal is a reliable, high-capacity storage solution built around the X10SDV-4C+-TLN2F motherboard I already have, keeping the system as small and quiet as possible. It will be used primarily for storage and backups, and potentially as persistent Longhorn storage for containers with high-capacity, low-performance requirements on a high-availability (HA) K3s cluster. That cluster runs on a similar server with faster, smaller storage under Proxmox, two smaller servers also running Proxmox, and four RockPi 5Bs running DietPi with K3s. Everything sits on the same VLAN behind a 10G unmanaged switch, though the smaller servers only have 2.5GbE ports (which the switch supports).

Here's the planned configuration:

I have a few specific questions:

  1. Are there any compatibility issues or bottlenecks that I should be aware of?
  2. Are there any other components or optimizations that you'd recommend for improving the performance or reliability of the build?
  3. Given my use case, would TrueNAS SCALE or TrueNAS Core be a better choice? I am leaning towards SCALE because of its support for virtualization and containerization, which might be a better fit within my current ecosystem.
  4. I'm currently using Backblaze for backups, but I think it will become prohibitively expensive. Could I build a similar system and use it to back up my NAS at my parents' house two states away? My parents are supportive of me running server equipment at their place, as I provide tech support, create cool things for them, and host various services they use. I will probably still use Backblaze to back up my most important data.
Any input or advice would be greatly appreciated. Thank you in advance for your help!

Best regards,
j1n37
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Longhorn? The thing that would later become Windows Vista?

RAM: 128GB ECC RDIMM (4x 32GB Supermicro DDR4 2666 MEM-DR432LC-ER26, from their QVL)
Seems a little on the side of "excessive", but the step down to 64 GB is significant. It's not bad in any way and if your budget is fine with it, fine. That said, since you're looking at 32 GB DIMMs, you might want to try starting with just 64 GB, since you won't actually lose any memory bandwidth. If needed, you can later upgrade easily.
L2ARC: Samsung 970 EVO Plus 2TB NVMe SSD https://www.amazon.com/dp/B07MFZXR1B
This might be a problem. Your workload is a little vague, so it's hard to say for sure, but L2ARC is only beneficial if your working set is too large for RAM but small enough to be meaningfully stored on L2ARC.
I would start at 64 GB of RAM and see if performance is acceptable. If it isn't, and the ARC hit rate is low but the ghost list hit rate is high, then you can upgrade to 128 GB of RAM. If that's still not enough and the previous conditions are still met, then you can look at L2ARC - but 2 TB may be excessive.
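For reference, the ARC hit rate can be computed from the counters ZFS exposes (on Linux, `/proc/spl/kstat/zfs/arcstats`). A minimal Python sketch; the sample text here is illustrative, not real output:

```python
def arc_hit_rate(arcstats_text: str) -> float:
    """Compute the ARC hit rate (%) from OpenZFS arcstats counters.

    Data lines have the form: name  type  value
    """
    stats = {}
    for line in arcstats_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    total = stats.get("hits", 0) + stats.get("misses", 0)
    return 100.0 * stats.get("hits", 0) / total if total else 0.0

# Illustrative sample; on a live system, read /proc/spl/kstat/zfs/arcstats
sample = "hits 4 900\nmisses 4 100\n"
print(f"ARC hit rate: {arc_hit_rate(sample):.1f}%")  # -> ARC hit rate: 90.0%
```

On TrueNAS, `arc_summary` reports the same counters in readable form; a persistently low hit rate together with high ghost-list hits is the signal that more RAM (or eventually L2ARC) would pay off.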
RAID1 Boot: 2x Samsung 960 EVO 250GB PCIe NVMe https://www.amazon.com/dp/B01LYFKX41 (using AOC-SLG3-2M2 add-on card https://www.amazon.com/dp/B071S3ZY8P)
This seems a little excessive, too. It won't hurt anything, other than your wallet, but even a single reputable SSD is plenty for a domestic scenario. You do lose the PCIe slot to a pair of overkill SSDs, however.
 

j1n37

Cadet
Joined
Apr 10, 2023
Messages
7
Longhorn? The thing that would later become Windows Vista?


Seems a little on the side of "excessive", but the step down to 64 GB is significant. It's not bad in any way and if your budget is fine with it, fine. That said, since you're looking at 32 GB DIMMs, you might want to try starting with just 64 GB, since you won't actually lose any memory bandwidth. If needed, you can later upgrade easily.

This might be a problem. Your workload is a little vague, so it's hard to say for sure, but L2ARC is only beneficial if your working set is too large for RAM but small enough to be meaningfully stored on L2ARC.
I would start at 64 GB of RAM and see if performance is acceptable. If it isn't, and the ARC hit rate is low but the ghost list hit rate is high, then you can upgrade to 128 GB of RAM. If that's still not enough and the previous conditions are still met, then you can look at L2ARC - but 2 TB may be excessive.

This seems a little excessive, too. It won't hurt anything, other than your wallet, but even a single reputable SSD is plenty for a domestic scenario. You do lose the PCIe slot to a pair of overkill SSDs, however.

Thanks for the quick and helpful response.

Regarding Longhorn, I was referring to the distributed block storage for Kubernetes, not the Windows Vista predecessor. I realize it's a bit out of scope for the TrueNAS community, but I wanted to give some insights on how I'd be using my NAS.

As for the RAM, I'll start with the 128 GB since I already have it from another server that I gutted. I'll keep your advice in mind and consider adding L2ARC later on if I encounter the issues you mentioned.

I don't anticipate needing the PCIe slot for anything else (funny to say, especially on an mITX board) besides perhaps adding SFP+ instead of using the built-in 10G RJ45 NICs. My other option was to use the PCIe slot to add more SATA ports, which would justify getting the Silverstone CS381 https://www.amazon.com/dp/B09ZNKVF2N with maybe eight HDDs for a bit more space, booting from mirrored SATA SSDs, and possibly one last set of mirrored 2.5" SATA SSDs for faster NAS storage. It might seem overly cautious, but I do feel better having redundancy everywhere.

I was seriously considering the Silverstone CS381 before, but it seemed a little pricey. My build is already on the expensive side, so maybe it makes sense to just go all out and invest in the best possible setup?
 

j1n37

Cadet
Joined
Apr 10, 2023
Messages
7
Side note: I also thought the recommendation for ZFS was at least 1 GB of RAM for every TB of storage. With my initial setup, 64 GB would probably be just fine, as you said. With the extra two HDDs, I think I'd have to go to 128 GB, right?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Well, it's a rule of thumb and deliberately kept vague. For instance, at work I have a server arriving soon that'll have 2x 12x 10 TB HDDs in RAIDZ2, for some 180 TB available (minus 20% free space) and it will be running 128 GB of DRAM. I can get away with it because I deal mostly with large files and the server is not in a performance-critical path.
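As a back-of-the-envelope check, RAIDZ2 usable space is roughly (drives per vdev − 2) × drive size, per vdev, before allocation overhead and free-space headroom. A quick sketch using the numbers from this thread:

```python
def raidz2_usable_tb(vdevs: int, drives_per_vdev: int, drive_tb: float) -> float:
    """Approximate RAIDZ2 usable capacity: each vdev loses two drives
    to parity. Ignores allocation overhead, padding, and slop space."""
    return vdevs * (drives_per_vdev - 2) * drive_tb

print(raidz2_usable_tb(2, 12, 10))        # 2x 12-wide of 10 TB -> 200.0
print(raidz2_usable_tb(1, 8, 14))         # one 8-wide vdev of 14 TB -> 84.0
print(raidz2_usable_tb(2, 12, 10) * 0.8)  # keeping 20% free -> 160.0
```

The ~180 TB figure above sits between the raw 200 TB and the 160 TB left after the 20% free-space headroom, which is exactly the kind of slack the rule of thumb hand-waves over.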
 

j1n37

Cadet
Joined
Apr 10, 2023
Messages
7
After careful consideration, I am planning to proceed with the following TrueNAS build:
I am unsure whether to put two of the fast storage drives or the two boot drives in the two extra hot-swap bays. I like the idea of the boot drives there, since there are only two and it feels more organized to dedicate both bays to one purpose. However, the fast storage drives might be more practical in those spots.

Regarding the special device, I am still trying to understand its purpose. Based on this post: https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954, it might be more useful than an L2ARC or SLOG device. The advantage of this setup is that I am only using 8 lanes, and I can potentially use something like https://www.amazon.com/dp/B0BHNPKCL5 to split my x16 into dual x8 slots if necessary, allowing for future upgrades to SFP+ or adding L2ARC or SLOG.

I truly appreciate your assistance with this build. As someone who primarily works with high-level programming, diving into hardware specifics like this is beyond my usual area of expertise. Thank you in advance for your valuable guidance!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Regarding the special device, I am still trying to understand its purpose.
The idea is to store metadata and small blocks in a dedicated, fast vdev, while large blocks remain on slower storage, allowing both to shine in their respective fields (SSDs with IOPS; HDDs with long, sequential I/O). The major catch is that the vdev must be reliable, just like the rest of the pool, and unlike L2ARC and SLOG.
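That routing rule can be sketched as a simplified model (based on the documented behavior of the `special_small_blocks` dataset property, not the actual OpenZFS code):

```python
def goes_to_special(block_size: int, is_metadata: bool,
                    special_small_blocks: int = 0) -> bool:
    """Simplified model of special-vdev allocation: metadata always
    lands on the special vdev; data blocks only if they are no larger
    than special_small_blocks (0, the default, means metadata only)."""
    return is_metadata or (0 < block_size <= special_small_blocks)

# With special_small_blocks=128K, a common choice:
assert goes_to_special(4096, True)                      # metadata -> special
assert goes_to_special(64 * 1024, False, 128 * 1024)    # small data -> special
assert not goes_to_special(1 << 20, False, 128 * 1024)  # 1 MiB record -> HDDs
assert not goes_to_special(64 * 1024, False)            # default: data -> HDDs
```

The real allocator also spills back to the normal vdevs when the special class fills up, which this sketch ignores.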
The advantage of this setup is that I am only using 8 lanes, and I can potentially use something like https://www.amazon.com/dp/B0BHNPKCL5 to split my x16 into dual x8 slots if necessary, allowing for future upgrades to SFP+ or adding L2ARC or SLOG.
I peeked down that rabbit hole a few months back. There's a guy who designs just the right adapters to go from PCIe x16 FP to 2x PCIe x8 LP (in addition to other crazy options, like adding M.2 slots, three LP slots, etc.). Expensive, but so ridiculous that I kinda want to try someday for the fun of it. Try searching my post history around late 2022 or so.
Could I build a similar system and use it to back up my NAS at my parents' house two states away? My parents are supportive of me running server equipment at their place, as I provide tech support, create cool things for them, and host various services they use. I will probably still use Backblaze to back up my most important data.
By the way, forgot to answer this one: The answer is yes! Just need to figure out the networking, but fundamentally a sound concept.
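The usual mechanism is ZFS replication: an incremental `zfs send` piped over SSH into `zfs recv` on the remote machine. A minimal sketch that only builds the pipeline string (all pool, dataset, and host names here are hypothetical):

```python
import shlex

def replicate_cmd(dataset: str, prev_snap: str, snap: str,
                  remote_host: str, remote_dataset: str) -> str:
    """Build an incremental `zfs send | ssh ... zfs recv` pipeline.
    Assumes prev_snap already exists on the remote pool."""
    src = shlex.quote(f"{dataset}@{snap}")
    send = f"zfs send -i {shlex.quote(prev_snap)} {src}"
    recv = (f"ssh {shlex.quote(remote_host)} "
            f"zfs recv -F {shlex.quote(remote_dataset)}")
    return f"{send} | {recv}"

print(replicate_cmd("tank/media", "tank/media@2023-04-10", "2023-04-17",
                    "parents-nas", "backup/media"))
# zfs send -i tank/media@2023-04-10 tank/media@2023-04-17 | ssh parents-nas zfs recv -F backup/media
```

In practice TrueNAS exposes this same mechanism through its built-in replication tasks, so you would configure it in the UI rather than script it yourself.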
 

j1n37

Cadet
Joined
Apr 10, 2023
Messages
7
I've taken your feedback, done more research, and modified my plans. Initially, I planned to use ITX boards, but I realized it would require two machines to achieve my goals. Instead, I'll use that board to build a Proxmox staging server for a friend. My girlfriend agreed that I can have a full tower as long as it's not too noisy. So, here's my revised plan:

Chassis: Supermicro CSE-743TQ-903B-SQ 4U 903W https://www.supermicro.com/en/products/chassis/4U/743/SC743TQ-903B-SQ
Motherboard: ASRock EPYCD8-2T https://www.amazon.com/dp/B07PGLF6ZB/
CPU: AMD EPYC Naples 7281 https://www.amazon.com/dp/B07665GJTP
Memory: 4x Kingston 64GB DDR4 2666 LRDIMM H5ANAG4NAMR, from QVL
HBA: 2x LSI 9300 Dell HBA330 PCIe 3x8 https://www.amazon.com/dp/B00DSURZYS
M.2 Bifurcation: 2x ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card https://www.amazon.com/dp/B084HMHGSP
Network: Mellanox MCX455A-ECAT ConnectX-4 VPI Network Adapter PCI Express 3.0 x16 100 Gigabit Ethernet https://www.amazon.com/dp/B00PDDHAFM (Not saturating, but acquired one and want to try it out)

HDD Pool: Primarily for backups, media, and VMs with varying requirements for speed and data integrity. The HDD pool will use async writes for backups and media storage, optimizing performance and efficiency. For VMs where data integrity is crucial, sync writes will be employed, and a SLOG device will be used to enhance performance in these scenarios.
  • RAIDZ2: 8x 14TB WD Red Plus
  • Special: RAID 10 - 6x 118GB Intel Optane P1600X https://www.amazon.com/dp/B09MSB59SK
    • Note: 6x because I'm thinking I'll need more space than a regular 4x striped mirror would provide
  • SLOG: RAID1 2x 118GB Optane P1600X
SSD Pool: Primarily for databases and VMs requiring speed and data integrity.
Boot: Supermicro SATADOM 64 GB Internal Solid State Drive https://www.amazon.com/dp/B00NGBYUW4 (Single drive, no mirroring... I'm a little worried about that, but it's probably just paranoia.)
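The sync/async split above is the same distinction applications make with fsync(): an async write returns once it is buffered, while a sync write must reach stable storage first, which is exactly the path a SLOG accelerates. A toy illustration in Python (ordinary file I/O, not a ZFS benchmark):

```python
import os
import tempfile
import time

def write_records(path: str, n: int, sync: bool) -> float:
    """Write n 4 KiB records, optionally forcing each to stable
    storage with fsync() -- the pattern ZFS sync writes (and thus a
    SLOG) have to handle. Returns elapsed seconds."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(b"x" * 4096)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # don't return until on stable storage
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    t_async = write_records(os.path.join(d, "a"), 200, sync=False)
    t_sync = write_records(os.path.join(d, "b"), 200, sync=True)
    print(f"async: {t_async:.4f}s  sync: {t_sync:.4f}s")  # sync is typically much slower
```

The gap between those two timings is the latency a low-latency SLOG device (like Optane) shrinks for sync-heavy workloads such as VMs and databases.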

The motherboard configuration is as follows:

PCIe Slots:
  • x16: Network
  • x16: SSD SLOG via ASUS quad M.2 bifurcation card
  • x16: HDD Special Metadata (4) via M.2 bifurcation
  • x16: Free
  • x8: HDD SLOG via M.2 bifurcation
  • x8: HBA 1
  • x8: HBA 2
  • x8: Free
Additional Connections:
  • 2 M.2: HDD special metadata (2)
  • 2 Mini SAS HD: Connect to the 8x hot-swap HDD backplane on the chassis
  • 2 OCuLink: Free
  • SATADOM: Boot
This setup meets my needs and leaves room for expansion: one x16 and one x8 PCIe slot free, four open RAM slots, two free OCuLink ports, and a 5.25" bay. I'm considering adding a 4x U.2 NVMe RAID10 to the 5.25" bay, without a SLOG, for ultra-fast async-write work like video editing. I could also add a graphics card, double the RAM, and even add an L2ARC later if necessary.

Edit: Formatting
 
Last edited:

NickF

Guru
Joined
Jun 12, 2014
Messages
763
I've taken your feedback, done more research, and modified my plans. Initially, I planned to use ITX boards, but I realized it would require two machines to achieve my goals. Instead, I'll use that board to build a Proxmox staging server for a friend. My girlfriend agreed that I can have a full tower as long as it's not too noisy. So, here's my revised plan:

Chassis: Supermicro CSE-743TQ-903B-SQ 4U 903W https://www.supermicro.com/en/products/chassis/4U/743/SC743TQ-903B-SQ
Motherboard: ASRock EPYCD8-2T https://www.amazon.com/dp/B07PGLF6ZB/
CPU: AMD EPYC Naples 7281 https://www.amazon.com/dp/B07665GJTP
Memory: 4x Kingston 64GB DDR4 2666 LRDIMM H5ANAG4NAMR, from QVL
HBA: 2x LSI 9300 Dell HBA330 PCIe 3x8 https://www.amazon.com/dp/B00DSURZYS
M.2 Bifurcation: 2x ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card https://www.amazon.com/dp/B084HMHGSP
Network: Mellanox MCX455A-ECAT ConnectX-4 VPI Network Adapter PCI Express 3.0 x16 100 Gigabit Ethernet https://www.amazon.com/dp/B00PDDHAFM (Not saturating, but acquired one and want to try it out)

HDD Pool: Primarily for backups, media, and VMs with varying requirements for speed and data integrity. The HDD pool will use async writes for backups and media storage, optimizing performance and efficiency. For VMs where data integrity is crucial, sync writes will be employed, and a SLOG device will be used to enhance performance in these scenarios.
  • RAIDZ2: 8x 14TB WD Red Plus
  • Special: RAID 10 - 6x 118GB Intel Optane P1600X https://www.amazon.com/dp/B09MSB59SK
    • Note: 6x because I'm thinking I'll need more space than a regular 4x striped mirror would provide
  • SLOG: RAID1 2x 118GB Optane P1600X
SSD Pool: Primarily for databases and VMs requiring speed and data integrity.
Boot: Supermicro SATADOM 64 GB Internal Solid State Drive https://www.amazon.com/dp/B00NGBYUW4 (Single drive, no mirroring... I'm a little worried about that, but it's probably just paranoia.)

The motherboard configuration is as follows:

PCIe Slots:
  • x16: Network
  • x16: SSD SLOG via ASUS quad M.2 bifurcation card
  • x16: HDD Special Metadata (4) via M.2 bifurcation
  • x16: Free
  • x8: HDD SLOG via M.2 bifurcation
  • x8: HBA 1
  • x8: HBA 2
  • x8: Free
Additional Connections:
  • 2 M.2: HDD special metadata (2)
  • 2 Mini SAS HD: Connect to the 8x hot-swap HDD backplane on the chassis
  • 2 OCuLink: Free
  • SATADOM: Boot
This setup meets my needs and leaves room for expansion: one x16 and one x8 PCIe slot free, four open RAM slots, two free OCuLink ports, and a 5.25" bay. I'm considering adding a 4x U.2 NVMe RAID10 to the 5.25" bay, without a SLOG, for ultra-fast async-write work like video editing. I could also add a graphics card, double the RAM, and even add an L2ARC later if necessary.
I have a 7282 you can have for free if you want to pay for shipping.
 

j1n37

Cadet
Joined
Apr 10, 2023
Messages
7
I have a 7282 you can have for free if you want to pay for shipping.
Wow, thank you so much for your kind offer! The 7282 is definitely an upgrade from my 7281, and I appreciate your willingness to part with it. If the offer is still available, I would be very interested. Could we discuss the logistics over PM? Thank you again for your generosity and willingness to help out a fellow community member.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@NickF I could use a few 4TB NVMe's while you're at it :wink:
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
keeping the system as small and quiet as possible.
If your girlfriend has good hearing, I don't think the system will be quiet enough. Those eight 14TB drives would be the deal breaker. I'm drooling over some of the components you have listed.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
@NickF I could use a few 4TB NVMe's while you're at it :wink:
Wish I had some to share xD

Wow, thank you so much for your kind offer! The 7282 is definitely an upgrade from my 7281, and I appreciate your willingness to part with it. If the offer is still available, I would be very interested. Could we discuss the logistics over PM? Thank you again for your generosity and willingness to help out a fellow community member.
I PM'd you.
 

j1n37

Cadet
Joined
Apr 10, 2023
Messages
7
@NickF, I apologize for the delay in updating you as promised. I ran into challenges such as needing to reflash the BIOS due to disabled GPU output, and lacking the BMC password to modify the settings. I also faced real-life disruptions, including emergency surgeries and job transitions. I will be away on vacation for the next week, but plan to share my findings, build logs, learnings, and pictures in September on my newly established blog.

Here's an update on the changes to the system configuration:
  • Different motherboard
  • 2x 118GB Optane P1600X for boot, Optane P4800X for ZIL
  • Storage: 2x HGST Ultrastar 3.84TB, determined after running a file-size analysis and selecting 128K as the small block size
  • Replaced 2x IcyDock ToughArmor MB998SK-B with 3x Flex-FIT Quattro MB344SP (more economical, but of lesser build quality). I added a 140 mm Noctua fan for exhaust
  • Fast storage: 8x 1.92TB Samsung PM9A3 U.2 NVMe drives
I also managed to get my hands on 16x SK hynix HMAA8GR7MJR4N-XN 64GB DDR4-3200 ECC RDIMMs, for 1 TB of RAM. This might sound way overkill, but ZFS will effectively utilize as much RAM as you can throw at it.
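The file-size analysis mentioned above can be approximated with a short script that buckets files by power-of-two size and reports what share falls at or below a candidate special_small_blocks threshold (a rough proxy, since ZFS routes individual records, not whole files):

```python
import os

def size_histogram(root: str) -> dict:
    """Count files under root in power-of-two size buckets."""
    buckets = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # vanished or unreadable file
            bucket = 1
            while bucket < max(size, 1):
                bucket *= 2
            buckets[bucket] = buckets.get(bucket, 0) + 1
    return buckets

def fraction_small(buckets: dict, threshold: int = 128 * 1024) -> float:
    """Share of files at or below a candidate special_small_blocks value."""
    total = sum(buckets.values())
    small = sum(n for b, n in buckets.items() if b <= threshold)
    return small / total if total else 0.0

# Example (path is hypothetical):
# h = size_histogram("/mnt/tank/media")
# print(f"{fraction_small(h):.0%} of files are at or below 128K")
```

Running something like this over the dataset before setting `special_small_blocks` gives a feel for how much data would be offloaded to the special vdev at a given threshold.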

Final Build Configuration:

Chassis: Supermicro CSE-743TQ-903B-SQ
Mobo: Gigabyte MZ32-AR0
CPU: Epyc Rome 7282 (Special thanks to NickF)
RAM: 16x SK hynix HMAA8GR7MJR4N-XN 64GB DDR4-3200 ECC RDIMM
NIC: Mellanox MCX455A-ECAT ConnectX-4
Boot: 2x Optane P1600X
Fast Storage: 8x 1.92TB Samsung PM9A3 U.2 PCIe4 Gen4
Fast storage ZIL: 2x Optane P4800X; mirror
Slow Storage: 8x 16TB WD Red Pro; RAIDZ2
Special vdev: 2x HGST Ultrastar SN630 3.84TB; Mirror
Slow storage ZIL: 2x Optane P4800X; mirror

The build is incredible. Everything I ever wanted in a NAS. However, it's currently sitting in my garage due to space constraints. It's not wife-approved right now because we just don't have the room for it. The noise level is actually okay, since it's drowned out by the noise from the freeway a block away from us.

Offsite-Backup NAS:

Chassis: Supermicro CSE-936 (configured as a pseudo-tower by removing the ears)
Mobo: Supermicro H11SSL
CPU: EPYC 7281
Boot: 2x 64GB SATA-DOM
RAM: 8x32GB RDIMM
HDD: 16x 10TB, as 2x RAIDZ2 vdevs
ZIL: Optane 900p x2
Special vdev: 4x HGST Ultrastar SN630 3.84TB (striped mirror)

This backup NAS was initially intended as an offsite backup, but is temporarily serving as my primary slow storage. It's out of state, in Idaho. Somehow, they have symmetric 600 Mbps upload/download speeds, so this _is_ working out for now.

Virtualized Fast Storage (TrueNAS in Proxmox):

Chassis: Rackchoice 2U with Thermalright TL-8015 15mm Slimline fans
Mobo: Gigabyte MZ31-AR0
CPU: Epyc Rome 7D12 (32 cores, 1.1 GHz, 85W TDP OEM part)
Boot: 2x Optane
Storage: 4x Micron 7300 PRO 1.92TB U.2 NVMe Gen3
RAM: 8x64GB Samsung DDR4-3200 ECC RDIMM (allocated 256GB to TrueNAS; limited to 2933 MHz due to 2DPC)

Passed directly to TrueNAS:
  • SSDs: 8x Samsung PM983 1.92TB NVMe PCIe M.2 22110 (RAID10)
  • ZIL: 2x Intel Optane P4801X 22110 M.2 100GB Mirrored
  • L2ARC: 2x Intel Optane P4801X 22110 M.2 375GB Striped
Initially, I used 4x 118GB Optane P1600X configured as a striped mirror for my ZIL, but it performed worse than a basic 2x mirror. Experimenting with the remaining two Optane drives, I discovered that using them as L2ARC provided frequent cache hits without significantly eating into the RAM-based ARC. Upon upgrading to the P4801X, I configured 2x 100GB for the ZIL and 2x 375GB for L2ARC, an arrangement that makes sense in this specific scenario, despite L2ARC often being unnecessary.

All of these builds have a Dynatron A26 (quiet, but quickly escalates to "is there a jet engine in my living room?") and Linkreal HBAs (great performance for the price).

I need to find a permanent spot for my TrueNAS build quickly. The current configurations are functional, but they leave me uneasy about the data. My backup NAS has become my primary slow storage, and I don't have a proper backup. Plus, it's a shame to have such a high-performance machine just sitting in my garage.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The build is incredible. Everything I ever wanted in a NAS.
That is great!
However, it's currently sitting in my garage due to space constraints. It's not wife-approved right now because we just don't have the room for it.
That is not so great. I hope you are able to find some room for it.
it's a shame to have such a high-performance machine just sitting in my garage.
Agreed
 
Joined
Jun 15, 2022
Messages
674
The build is incredible. Everything I ever wanted in a NAS. However, it's currently sitting in my garage due to space constraints. It's not wife-approved right now because we just don't have the room for it. The noise level is actually okay... it's a shame to have such a high-performance machine just sitting in my garage.
What are you using for a UPS? And is getting rid of the wife an option? (that'll solve the "space" issue)
 
Last edited:

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Regarding the space (I mean wife) issue... For what it's worth, my electric bill last month was 900 dollars :) LMFAO

Thanks for coming back to keep us in the loop. Looks like you have quite the setup now!
 