Hi from south of Sydney, Australia


Apr 19, 2021
Hi All,
I am heading down the TrueNAS Scale rabbit hole :) ......
Currently run a QNAP TR563 setup with an expansion enclosure but keep on maxing it out.
I have been experimenting with Docker running, Home Assistant, MQTT, NODE-RED & few other apps,
also have a large PLEX library, photos & home videos.
Time to move to something bigger & better.
The setup is going to be:
Intel S2600CP Dual LGA2011 Motherboard with 2x E5-2620V2 32GB RAM
2 x 1TB SSD install drives
8 x 8TB Seagate RED NAS HDD
2 x IT Mode LSI 8 Port SATA cards
16 x Bay Server case
My question is: are 2 x 1TB SSDs overkill for the installation drives? I have searched the community & haven't found recommendations for this.
I want to run 2 x drives so there is redundancy.
Can you expand a ZFS pool after creation? I have 4 drives in the current NAS holding data that needs to be moved to the TrueNAS setup before those drives can be moved over.
Thanks for your help


Vampire Pig
May 19, 2017
Greetings from the opposite side of the globe!

For installation drives, I am using two 64GB SATADOMs. So, yeah, those installation drives are overkill. Figure on maybe 4-5 new images a year if you consistently upgrade and even the OEM 16GB SATADOM I got with my Mini XL lasted for about 2 years before I had to delete older system images. Thankfully, the system notified me.

You could try to dual-purpose these boot drives (i.e. boot and L2ARC, for example) but this is not supported by iXsystems, the GUI, etc. and you'd have to do it from the CLI. If you go down this path, please be sure to use UUIDs, not the short partition names that are convenient but which may present the wrong drive partition to the system on boot if the stars align just right.
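If you do go the unsupported CLI route, the gist looks something like this. A hypothetical sketch only: the pool name "tank" and the partition UUID are placeholders, and none of this is supported by the GUI.

```shell
# List partitions by their stable by-partuuid names first -- these don't
# shuffle around the way /dev/sdX names can between boots:
ls -l /dev/disk/by-partuuid/

# Then attach the spare boot-drive partition as L2ARC using that stable name,
# so the pool can't grab the wrong partition after a reboot reorders devices.
# (<partition-uuid> is a placeholder for the UUID you found above.)
zpool add tank cache /dev/disk/by-partuuid/<partition-uuid>
```

Anything added this way from the CLI won't show up cleanly in the GUI, which is part of why iXsystems doesn't support it.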

AFAIK, RAID-Z expansion is still in development, so folk here still add another VDEV as a pool approaches 80% full (a benefit of adding a VDEV is faster IOPS). Alternatively, one can replace smaller drives one by one with larger ones, let the system resilver each time, rinse and repeat until the whole pool has only larger drives in it, at which point it will expand to the higher capacity.
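For reference, the replace-and-resilver route looks roughly like this at the CLI (the GUI can do the same via the disk replace dialog). "tank" and the device paths are placeholders:

```shell
# Swap one disk at a time; ZFS resilvers onto the new disk from parity.
zpool replace tank /dev/disk/by-id/old-small-disk /dev/disk/by-id/new-big-disk

# Wait for "resilver completed" before pulling the next small disk:
zpool status tank

# Once every disk in the VDEV is the larger size, let the pool grow:
zpool set autoexpand=on tank
zpool online -e tank /dev/disk/by-id/new-big-disk
```

The key discipline is patience: never start the next swap until the current resilver finishes, since the VDEV is running with reduced redundancy during each resilver.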

With 16 bays, I'd consider something like a Z2 6-drive VDEV to start. That leaves you with the option of adding a second 6-drive Z2 VDEV in the future and leaves 4 drive bays empty for either boot drives, fusion pool, or VM mirror SSDs. You can use VDEVs with lower disk counts but that will make the price of parity higher.

I chose an 8-drive Z3 VDEV because my net available capacity is about 50% after accounting for parity losses (3 drives' worth) and the soft capacity limit of 80% fill (after which ZFS write performance craters), which for the remaining 5 drives amounts to one drive's worth. So I start with 8 drives and end up with 4 drives' worth of capacity. You could obviously do the same; it all comes down to how much you value your data, backups, and so on. Z2s are perfectly fine for folk who can monitor and react to failures relatively quickly.
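The arithmetic above can be sketched for both layouts. This assumes 8TB drives and ignores ZFS metadata overhead and TB/TiB differences, so treat the numbers as ballpark:

```shell
# Rough usable capacity: (drives - parity) * drive size * 80% fill ceiling.
usable() {
  drives=$1; parity=$2; size_tb=$3
  awk "BEGIN { printf \"%.0f\", ($drives - $parity) * $size_tb * 0.8 }"
}

echo "8-wide Z3: $(usable 8 3 8) TB usable"   # 5 data drives, 80% of 40TB
echo "6-wide Z2: $(usable 6 2 8) TB usable"   # 4 data drives, 80% of 32TB
```

The 8-wide Z3 lands at 32TB of the 64TB raw, i.e. the ~50% figure above; the 6-wide Z2 keeps a slightly higher fraction of raw capacity at the cost of one less parity drive.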

I'd have a think re: a dual-CPU motherboard vs. some of the solutions mentioned in the hardware recommendation pages. Unless you do a lot of CPU-heavy work (transcoding and the like), the second CPU may be superfluous. My Z3 pool runs on a lowly D-1537 and that thing is maybe 6-10% busy most of the time. The only time I saw a core getting maxed out was when I was giving SMB a bad hair day with a bad NTP time reference.