SOLVED First TrueNAS build - Plan validation and hardware advice

jcm1123

Dabbler
Joined
Aug 24, 2023
Messages
18
Hi all,

Looking to build a NAS for my network and want to validate my idea and get suggestions on hardware before pulling the trigger.

I've read the hardware guide and the ZFS/pool guide and also have TrueNAS Scale installed on a VM that I've been getting familiar with.

Current: I've got a Windows Server box acting as a file server and Docker host with a 4-disk RAID10.
Goals: Resilience > Performance > Usable space. Data will be database data, some VM disks, a basic file store for clients, and light streaming. The read-to-write ratio with my current setup is about 5:1. Up to 1 drive failure is acceptable.
Budget: Under $600 if possible but no more than $800. (I do have some of this hardware already)

TrueNAS Scale Box​

CPU: Intel <desktop|Xeon|whatever>
MB: ?
Ram: 2x32gb ECC ddr4
Case: something rack mountable (don't need hot-swap trays)
PSU: Single
OS: 2x 1TB M.2 nvme for OS (mirror = 1TB)
Data: 6x 4TB (3x mirrors, striped = 18TB)
Nic: Dual 10GB SFP

I will also back up the Data pool to my VM server, which will have 2x 18TB (mirror = 18TB)

I think all of this is pretty straightforward. I would love some advice/help on
  • filling in the blanks for the CPU/MB/Case
  • best type of drive for my budget and requirements. Sata vs SAS?
  • is there a more optimal configuration for the drives/vdevs above?
  • since I'm read heavy, I think a read cache would help. Can I use a portion of the nvme os mirror above?
Thanks!
 

Fleshmauler

Explorer
Joined
Jan 26, 2022
Messages
79
Use the 2 1TB NVMes for VMs and/or apps, or as your cache. Get a small SATA SSD (or two) for the boot pool; 50GB would be plenty, but whatever cheap, semi-reliable ~256GB drive you can find works.

Move the System Dataset Pool to your spinning rust when you're done (System Settings > Advanced > Storage > System Dataset Pool) to keep the wear on the boot drive(s) to almost zero.

Partitioning your boot drive would defeat your resilience-first requirement.

I could be way wrong, but wouldn't a 3x mirror with a stripe be 8TB, not 18? Why not use RAIDZ2? I'm not sure the read performance would be that much different.
 

jcm1123

Dabbler
Joined
Aug 24, 2023
Messages
18
Use the 2 1TB NVMes for VMs and/or apps, or as your cache. Get a small SATA SSD (or two) for the boot pool; 50GB would be plenty, but whatever cheap, semi-reliable ~256GB drive you can find works.

Move the System Dataset Pool to your spinning rust when you're done (System Settings > Advanced > Storage > System Dataset Pool) to keep the wear on the boot drive(s) to almost zero.

Partitioning your boot drive would defeat your resilience-first requirement.

I could be way wrong, but wouldn't a 3x mirror with a stripe be 8TB, not 18? Why not use RAIDZ2? I'm not sure the read performance would be that much different.

Thanks for the advice. I was only going with the 2x NVMes for the boot pool since I already have them. I'd rather use them for caching than have them go to waste as an oversized boot pool. I guess I should stripe those for a read cache. From what I've read, the better approach for a write cache is to just add more RAM.

Good catch, it should be 12TB, not 18TB. If there's not much performance difference (I can test this before putting it into service), RAIDZ2 would also work.
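For anyone following along, here's the rough usable-space math I'm working from (a quick sketch using raw drive sizes only, ignoring ZFS overhead and the TB/TiB difference):

```python
# Rough usable space for 6x 4TB drives under the layouts discussed above.
# Raw numbers only; real-world usable space will be lower.
DRIVES = 6
SIZE_TB = 4

mirror_stripe = (DRIVES // 2) * SIZE_TB   # 3x 2-way mirror vdevs -> 12 TB
raidz1 = (DRIVES - 1) * SIZE_TB           # single 6-wide RAIDZ1  -> 20 TB
raidz2 = (DRIVES - 2) * SIZE_TB           # single 6-wide RAIDZ2  -> 16 TB

print(f"3x mirrors: {mirror_stripe} TB, RAIDZ1: {raidz1} TB, RAIDZ2: {raidz2} TB")
```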
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Goals: Resilience > Performance > Usable space.
If that means that resilience is more important than anything else, you have gone for the wrong vdev layout. RAIDZ2 is better than 2-way mirrors in that respect (albeit considerably worse for random I/O, see below).
Data will be database data, some VM disks, a basic file store for clients, and light streaming. The read-to-write ratio with my current setup is about 5:1.
DB and VMs imply random I/O (as opposed to sequential).
Up to 1 drive failure is acceptable.
Can you elaborate on this? To me it is not clear what exact requirement is behind this.

Nic: Dual 10GB SFP
Just be aware that you will not even remotely be able to saturate this with 6 HDDs.

  • filling in the blanks for the CPU/MB/Case
Maybe I am missing something, but to me the requirements are not clear enough here.
  • best type of drive for my budget and requirements. Sata vs SAS?
SATA, unless you have SAS connections from the motherboard or case backplane anyway and can get the SAS drives cheaper than SATA ones. SAS drives provide no added value for your use case; in fact they are worse in that SMART support is poorer or even non-existent.
  • is there a more optimal configuration for the drives/vdevs above?
You need to provide more details on your use-case. Esp. random I/O tends to be overlooked, although it has a tremendous impact on performance.
  • since I'm read heavy, I think a read cache would help. Can I use a portion of the nvme os mirror above?
No. First, partitioning of the boot device is not supported and also very likely causes issues down the road. Some people have done it, but it is a pretty advanced topic with, from my perspective, a bad value/risk ratio. The UI does not expect this and may therefore screw things up. After all, I want my NAS to be as reliable as possible, which is why I never considered this (although I have been running ZFS for about 15 years).

Second, ZFS does not support a read cache as such. You could use an SSD as L2ARC, but that requires a suitable workload. If you have a spare SSD, you can always try it and, if it does not help, remove the L2ARC later. But I would consider this premature optimization before you have a good understanding of your ARC hit rate and related data about your system's performance under real-world load.
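If you want to sanity-check the ARC hit rate later, once the system has been running its real workload for a while, a quick sketch like the one below works on SCALE. It simply reads the kstats that OpenZFS on Linux exposes; the arc_summary tool that ships with OpenZFS gives you the same information and much more.

```python
# Quick ARC hit-ratio check on Linux/SCALE via the OpenZFS kstat file.
# Only meaningful after the box has been serving its real workload for a while.
def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:     # skip the two header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

s = read_arcstats()
hits, misses = s["hits"], s["misses"]
print(f"ARC hit ratio: {100 * hits / (hits + misses):.1f}% ({hits} hits, {misses} misses)")
```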
 

jcm1123

Dabbler
Joined
Aug 24, 2023
Messages
18
Hi ChrisRJ, thanks for the reply and great info.

If that means that resilience is more important than anything else, you have gone for the wrong vdev layout. RAIDZ2 is better than 2-way mirrors in that respect (albeit considerably worse for random I/O, see below).

Up to 1 drive failure is acceptable.
Can you elaborate on this? To me it is not clear what exact requirement is behind this.
In my scenario, the NAS will be in the same building (my house) as me. So if one drive fails, and I get notified about it, I can shut the NAS down and replace it with a spare. I already follow the 3-2-1 backup method, so I can accept the risk of a 2nd drive failing while in the process of restoring. This is what I meant by one drive failure being acceptable as my tolerance/risk level.

I figured my two options here would be RAIDZ1 or some combo of a mirrored stripe. After reading, it seems like the latter would be more performant. I do plan to do some testing on this to verify once I get all the hardware.

Just be aware that you will not even remotely be able to saturate this with 6 HDDs.
I already have extra 10Gb NICs and the network to support it, so it's essentially "free" at this point.

  • filling in the blanks for the CPU/MB/Case
Maybe I am missing something, but to me the requirements are not clear enough here.
Sorry, I was vague here. I've read a ton of conflicting info on whether or not I need ECC. If I do need it, that would greatly increase the cost of this build and put me in the realm of non-consumer-grade equipment that I'm not familiar with. If anyone can recommend a 64GB/CPU/MB combo that all supports ECC for under $500, I'm happy to go that route. If consumer grade will work without issue, then I already have some parts available to reduce the cost.

For the case, I'm just looking for something rack mountable that will fit the drives nicely. I'm only familiar with Rosewill, but I'm sure there are better choices when it comes to drive mounting and airflow.

You need to provide more details on your use-case. Esp. random I/O tends to be overlooked, although it has a tremendous impact on performance.
Sure. The first use case is a basic file share. It'll be used as a place the family can dump photos, docs, etc. I also map a drive to it from my Windows development machine. So frequent reads/writes but not heavy, mostly small files, and more reads than writes.

The next part is new, and I'm not sure I have any good metrics around it: storing VM disks/data. I've always used Docker Desktop for my dev work (Postgres, Kafka, Mongo, etc.). Same on my current file server for basic network apps like PiHole.

With the new NAS box, my current file server will be converted to Proxmox to move all the containers over. I'd prefer to have, for example, the data (or the entire VM instance) from Postgres stored on the NAS for the resiliency.


Second, ZFS does not support a read cache as such. You could use an SSD as L2ARC, but that requires a suitable workload. If you have a spare SSD, you can always try it and, if it does not help, remove the L2ARC later. But I would consider this premature optimization before you have a good understanding of your ARC hit rate and related data about your system's performance under real-world load.
L2ARC is what I meant here by a read cache. Several places I've been reading about TrueNAS refer to it as a "read cache", so I have that stuck in my head now. Agreed on the premature optimization and on testing before committing.
 

jcm1123

Dabbler
Joined
Aug 24, 2023
Messages
18
Can't edit my posts, but one other advantage I thought of for stripe/mirror over RAIDZ is expanding later. I would only need 2 drives to expand the stripe/mirror, whereas I'd need 6 more drives to expand the RAIDZ config, if I understood how that works correctly.
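Rough numbers for what I mean (again just a sketch with raw TB, assuming the same 4TB drives and ignoring overhead):

```python
# Expansion increments: mirrors grow one 2-disk vdev at a time,
# RAIDZ grows one whole same-width vdev at a time.
SIZE_TB = 4

mirrors_now = 3 * SIZE_TB                         # 3x 2-way mirrors           -> 12 TB
mirrors_plus_two = mirrors_now + SIZE_TB          # add one more 2-disk mirror -> 16 TB

raidz2_now = (6 - 2) * SIZE_TB                    # 6-wide RAIDZ2              -> 16 TB
raidz2_plus_six = raidz2_now + (6 - 2) * SIZE_TB  # add a second 6-wide RAIDZ2 -> 32 TB

print(f"mirrors: {mirrors_now} -> {mirrors_plus_two} TB (+2 drives)")
print(f"RAIDZ2:  {raidz2_now} -> {raidz2_plus_six} TB (+6 drives)")
```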
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
RAM is better than L2ARC; it's not a read cache like you might think. You likely will not need it based on what I read here. There are numerous motherboards supporting ECC RAM for way under $500. I personally went with used Supermicro; I believe there is a sticky somewhere with recommended motherboards. If you are willing to go used, that's your cheapest option. You do want a small boot pool device (or pair) as suggested earlier.

With 6x 4TB drives, using 3 mirror vdevs, you'll have ~12TB of space. With RAIDZ1, 2 vdevs of 3, you'll have ~16TB. Either way, a hot spare (7th drive) would be useful since it automates replacement of a failure (think vacation, whatever). RAIDZ1 risk for 3 drives of 4TB is pretty minimal. It's space vs. performance, and I doubt you'll have performance issues for your described workload. Yes, the mirror will outperform, but that's meaningless if you can't even stress it, though I'm not clear on whether clients will actually be using it or just home folks.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Sorry, I was vague here. I've read a ton of conflicting info on whether or not I need ECC. If I do need it, that would greatly increase the cost of this build and put me in the realm of non-consumer-grade equipment that I'm not familiar with. If anyone can recommend a 64GB/CPU/MB combo that all supports ECC for under $500, I'm happy to go that route. If consumer grade will work without issue, then I already have some parts available to reduce the cost.

I highly recommend looking into used server gear from Supermicro. There is a ton of stuff out there (esp. in the US) and you can get a real bargain. Not only in the sense of cheap, but also as in good value for money. Some people are afraid of going for used components because they assume an earlier death. For motherboards, CPUs, and RAM that is really not an issue. For the power supply and HDDs, things look different and I would indeed buy new.

As an example, I bought my X9 board about 3 years ago, when the technology was already about 9 years old. Today I would probably look for an X10 or X11 board, but the general point still applies. For me the main driver was RAM price. Just a few weeks ago I was able to get another 192 GB, paying 22 Euros per 32 GB module. In my case it is DDR3 ECC RDIMMs, and DDR4 will be more expensive. But used RDIMMs (for servers) will be a lot cheaper than new UDIMMs.

Overall it should be no problem to stay within your budget of USD 500.

The next part is new, and I'm not sure I have any good metrics around it: storing VM disks/data.
That is certainly a strong indicator that you should go for mirrors, when using HDDs for this. Depending on the amount of space needed, you could also have a RAIDZ1/2 pool with HDDs for normal data and a separate SSD mirror for VMs.
 

jcm1123

Dabbler
Joined
Aug 24, 2023
Messages
18
As an example, I bought my X9 board about 3 years ago, when the technology was already about 9 years old. Today I would probably look for an X10 or X11 board, but the general point still applies. For me the main driver was RAM price. Just a few weeks ago I was able to get another 192 GB, paying 22 Euros per 32 GB module. In my case it is DDR3 ECC RDIMMs, and DDR4 will be more expensive. But used RDIMMs (for servers) will be a lot cheaper than new UDIMMs.

Overall it should be no problem to stay within your budget of USD 500.

I've been researching for hours now and am still a bit lost, mostly because I'm not familiar with the Xeon lineup.

I'd like a
  • Single-CPU board
  • 8x SATA connections: 2x for my boot mirror and 6x for my data array. This limits the choices to C621/C622 chipsets... I think.
  • Supports at least 128GB of RAM
  • At least 1x M.2 slot
These seem to put me at either the X11SP? or the X11SC?, neither of which I can find for under $400 on eBay in the few listings they have.
Is there a recommended vendor that sells used/refurb parts online?

I can make another thread in the hardware section if it's more appropriate.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I would recommend browsing the X10 and X11 boards to get a feel for what is out there.

As to SATA connections, the conclusion about the chipset is wrong in that various boards have a SAS controller that can also be used to connect SATA drives.

For NVMe you can also add a cheap adapter card if the board has no slot. You cannot boot from it in this case, but that is no issue IMHO. Just use a small SATA SSD as the boot drive.

For reputable sellers, this comes up pretty regularly. I am not sure, though, what a good search keyword would be in the forum. But I guess Google will also have something helpful here.

The best deals will usually be when you choose a case with the right board, but do not insist on the amount of RAM or type of CPU. At least that is my gut feeling.

Good luck!
 

jcm1123

Dabbler
Joined
Aug 24, 2023
Messages
18
Following up on this thread for anyone else in the same situation.

I was able to score an X10SRL-F board with an E5-1650 v4 and 64GB of DDR4 ECC for $200 from eBay. It did take many hours to find a solid deal that was within my budget. It appears that most SuperMicro boards for sale (at least on eBay at the time) are dual-socket, which makes sense since they're server boards, but I only needed a single socket, so there are fewer choices out there. The only X11 board I could find within my budget was the X11SSH, and it was around $200 just for the board.

I also learned a lot about SuperMicro, which makes some awesome MBs. Thanks @ChrisRJ for all the help!
 