Advice/input on components I picked for first TrueNAS build

Triggeh

Cadet
Joined
Jun 13, 2022
Messages
3
Hi, I'm looking to move to TrueNAS Scale with an ATX case and have picked out a few parts I'd like some input on, or alternatives to.
The machine will mainly be used as a torrent box and a network share for media with Jellyfin/Plex, and will run Docker for one basic app and possibly Pi-hole, pfSense and OpenVPN.

Case: Antec P101:

I chose this because of its eight easy-to-access, tool-less 3.5" bays, but it is expensive at $149 AUD. If there are other recommendations, I'm all ears.

CPU: Intel i3-12100E
The reason I'm looking at 12th gen is that Alder Lake brings back ECC RAM support on some models (the 12100E but not the 12100), plus I may need Quick Sync.
I'm open to other suggestions, but I do want something decent that isn't incredibly power-hungry or so old that it lacks modern instruction sets.

Motherboard: https://smicro.eu/supermicro-mbd-x13sae-f-b-1
This motherboard is on the W680 chipset, the only Alder Lake platform with ECC support. It has one 2.5Gbit port and one 1Gbit port, as well as IPMI, which I don't know how often I'll use but seems very useful compared to having to pull out a monitor and keyboard just to change some setting in the BIOS!
It has all the features I could dream of, with 8 SATA ports and 3 M.2 NVMe slots, but I can't find it for sale anywhere yet!
ASRock has two boards with three 2.5Gbit Ethernet ports, but the lack of IPMI put me off: https://www.asrockind.com/IMB-X1314 & https://www.asrockind.com/IMB-X1712

Other questions:
As TrueNAS Scale doesn't have ZFS support, how important is ECC memory? If it's not needed, I can re-use my current 32GB DDR4 kit once I upgrade to DDR5 in a few months.

I have 6 drives ready to put in the system. I'm looking at running RAID 1 on 2x 8TB, 2x 16TB and 2x 18TB drives. Is this okay, or should I be looking at doing things differently? I'm happy with the drives tolerating a single failure, but I don't know much about how hard it is to add more redundancy later on with RAID 1, etc.
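For reference, the layout described above (three mirrored pairs in one pool) could be sketched on the command line like this. The pool name and device paths are placeholders, not from the thread, and on TrueNAS SCALE you would normally build the pool through the web UI rather than the CLI:

```shell
# Sketch only: one pool made of three mirror vdevs.
# "tank" and the /dev/sdX names are assumptions for illustration;
# substitute the actual 8TB, 16TB and 18TB disks.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf

# Confirm the vdev layout afterwards:
zpool status tank
```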
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
As TrueNAS Scale doesn't have ZFS support
Completely untrue... it only supports ZFS.

Decide on your own about ECC, but the question really is "how much do you care about your data getting a little (or a lot) corrupted?"

I'm looking at running RAID 1 on 2x 8TB, 2x 16TB and 2x 18TB drives. Is this okay, or should I be looking at doing things differently?
ZFS terminology for that is mirrors.

If what you want is maximum IOPS out of your pool, you wouldn't want to go with unequal VDEV sizes, since that means writes can't be spread evenly across the VDEVs (eventually only the two largest VDEVs, and finally only the largest VDEV, will receive writes).

If you simply want a large pool and don't need to think about maximum IOPS (block storage and sync writes), then uneven sizes are fine, and you can easily add VDEVs later as additional mirrored pairs of disks.
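Growing the pool with another mirrored pair later is a one-line operation. A sketch, with the pool name and device paths assumed rather than taken from the thread:

```shell
# Sketch: extend an existing pool with a new mirror vdev (two more disks).
# "tank", /dev/sdg and /dev/sdh are placeholder names for illustration.
zpool add tank mirror /dev/sdg /dev/sdh

# The new vdev appears alongside the existing ones:
zpool status tank
```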
 

Triggeh

Cadet
Joined
Jun 13, 2022
Messages
3
Thanks. Some more questions: 1GB of RAM is recommended per terabyte of storage. Is this still the case for archived data that won't see much reading or writing? Most of my data is long-term backups and won't need to be accessed very often.

My plan is to put each pair of drives in a ZFS mirror, so the 2x 18TB drives mirrored, for example. Can I then easily convert that to RAIDZ with three drives later?
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
(eventually the 2 largest VDEVs and finally only the largest VDEV will get writes).
Even if we're in 'simplified land' atm, I'd think it is worth mentioning that it is not simply round-robin distribution between vdevs.

These days (since the Storage Pool Allocator behaviour was updated in 0.7.0), I believe the vdevs that respond first receive the writes, which basically works as a passive balancing tool. When drives are rather full, they also tend to be slow and/or fragmented, so adding a fresh, empty VDEV to such a pool will not distribute writes evenly across the new and old vdevs.
What will happen is that the faster vdev will receive more writes.
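You can observe this imbalance on a live pool: `zpool list -v` breaks capacity, fragmentation and free space down per vdev, and `zpool iostat -v` shows how I/O is actually being distributed. The pool name "tank" is an assumption:

```shell
# Per-vdev capacity, free space and fragmentation for a pool named "tank":
zpool list -v tank

# Per-vdev read/write operations and bandwidth, refreshed every 5 seconds:
zpool iostat -v tank 5
```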
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
These days (since the Storage Pool Allocator behaviour was updated in 0.7.0), I believe the vdevs that respond first receive the writes, which basically works as a passive balancing tool. When drives are rather full, they also tend to be slow and/or fragmented.
Which just emphasizes my point that for best IOPS, you need the hardware in each VDEV to be as identical as possible, to ensure they all get an equal share of the writes sent to them.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Which just emphasizes my point that for best IOPS, you need the hardware in each VDEV to be "as exactly the same as possible" to ensure they all get the equal number of writes sent to them.
Yes, I agree with all of your points.
This was just adding some icing on the cake, not trying to correct you (or similar).
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
Is Alder Lake supported yet? I believe in general no, but it is for some i3s as they don't have P and E cores, or something like that. Someone might like to confirm that I am not talking complete rubbish.
 