TrueNAS newbie looking for advice or reassurance

grl570810

Cadet
Joined
Jan 3, 2021
Messages
4
Hello all, I am new to this forum and to TrueNAS.

Background: I have been in IT since the 1970s (before it was even called "IT" LOL) and have worked on everything from mainframes to PCs. My *nix experience, though, is nearly twenty years old, and I have been pretty much Windows-only for the last 15 years. I have just picked up a TrueNAS machine that I wish to use as such, rather than rebuilding it as a Windows server, and I am trying to get my head around how the technology & terminology of this system maps onto the concepts I know.

Further background: My home / home office server is an old Windows HP MicroServer and, great system though it is, it's beginning to creak at the seams. It runs file sharing, WSUS patching, various Windows SQL databases, and Plex Media Server for our music and videos. So when the TrueNAS machine cropped up I jumped on it, with the idea of moving the file sharing and Plex functions to that, leaving the Windows machine to do the Windows stuff. The specs of the new machine are:
Gigabyte H270N WiFi Mini ITX Motherboard
Intel Core i5-7500 Quad Core 3400MHz
Be Quiet! Pure Rock Slim CPU Cooler
8GB Memory
Corsair RM550X 550W 80 Plus Gold PSU
1 x 120GB Kingston SATA Boot SSD
1 x 500GB M.2 SSD
5 x 4TB SATA Drives
2 x 1GbE Ethernet
and it came with TrueNAS-12.0-RELEASE reset to default configuration.
In terms of load I have a maximum of six SMB clients accessing files (and only two of those use it extensively); music tends to be streaming most of the day, and in the evening maybe two movies or TV shows will stream.

This is what I plan:
Create a Jail to run PMS - this I'll put on a pool containing the single SSD. My understanding is that a Jail is the equivalent of a virtual machine in Windows or VMware - is that right? I don't need any resilience for this disk: if it goes down all I lose is the PMS service, and I can easily replace the drive and recreate it, correct?

The five 4TB drives I'll use to build a ZFS 'array' (is that a vdev?) to hold all the data shares and the media that Plex streams. This is where I am unsure about the concepts - ZFS doesn't map one-to-one onto RAID arrays. If I were doing this in RAID I'd build an 8TB RAID 10 array with a hot spare. My best guess at what to do in ZFS is to drop them all into a single ZFS3 array, as then I am resilient to losing two disks (yes or no?). High availability is not an issue; I will happily shut down to replace a failed drive - the hardware is not hot-swappable. Does that make sense, or is there a better way of doing it? I am also assuming in this concept that the PMS service running in its Jail can access the media files on this ZFS array - is that correct?

If that's a load of rubbish I'm happy to be corrected, I'll accept any advice on a better way to do things, or if it's a good solution then I'll appreciate reassurance!

Thanks in Advance,
Graham
 

Flashbin

Dabbler
Joined
Jan 29, 2020
Messages
17
Hi Graham,

Your plan overall sounds good. I would however get more RAM; 8GB is the minimum recommended for TrueNAS, and you have quite a lot of disks, plus the Plex Jail is also going to use a little bit.

My understanding is that a Jail is the equivalent of a virtual machine in Windows or VMWare is that right?
Yes and no. A FreeBSD Jail shares the kernel and hardware with the host OS, so it has less overhead than a traditional VM; it is more like a paravirtualized VM. On TrueNAS you can also create VMs (these would be the equivalent of a VM in Windows or ESXi) with bhyve as the hypervisor.

My best guess at what to do in ZFS is drop them all into a single ZFS3 array as then I am resilient to losing two disks (yes or no?).
With RAIDZ3 you are resilient to losing three disks; with RAIDZ2, two disks; and so on.
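To spell out the arithmetic, here is a purely illustrative Python sketch (the function name is my own, and real usable space will be slightly lower because of ZFS metadata and padding overhead):

```python
# RAIDZ-n dedicates n drives' worth of space to parity and survives
# the loss of any n drives in the vdev.
def raidz_summary(num_drives: int, parity: int, drive_tb: float) -> dict:
    """Rough usable capacity and fault tolerance for a single RAIDZ-n vdev.
    Ignores ZFS metadata/padding overhead, so real usable space is a bit less."""
    assert parity in (1, 2, 3) and num_drives > parity
    return {
        "tolerates_failures": parity,
        "usable_tb": (num_drives - parity) * drive_tb,
    }

# Five 4TB drives, as in this build:
print(raidz_summary(5, 2, 4))  # RAIDZ2: survives any 2 failures, ~12TB usable
print(raidz_summary(5, 3, 4))  # RAIDZ3: survives any 3 failures, ~8TB usable
```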

I recommend reading this primer; reading the docs is also very helpful.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@grl570810 Welcome to the forum. I'd agree you are light on memory; 16GB would be better, but testing may show you can get away with 8GB. As it's a mini-ITX board, great if you have a single 8GB module now, not so good if you need to swap 2x4GB for 2x8GB. While not technically essential, ECC memory is always best; perhaps that might be a longer-term aim for you with a different motherboard.

Not a Plex user myself, but people always talk about CPU Passmark scores in terms of the number of concurrent streams supported, especially if 4K and/or transcoding is needed.

Your "8TB RAID 10 array with a hot spare" equates to a ZFS pool made up of a stripe of mirrors with a spare, i.e. two vdevs in the pool where each is a mirror. Whether this configuration is right for you, or whether 4 disks in a single-vdev RAIDZ2 pool will work for you, might become clearer after reading some of the reference material in the resources section of the forum and/or the various postings on the iXsystems blog. Another useful ZFS primer can be found here: https://arstechnica.com/information...01-understanding-zfs-storage-and-performance/
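A rough back-of-the-envelope comparison of those layouts for five 4TB drives (illustrative Python; the function names are mine, and ZFS overhead is ignored):

```python
DRIVE_TB = 4

def mirrors_plus_spare_tb(num_drives: int, drive_tb: float) -> float:
    """Stripe of 2-way mirror vdevs with one hot spare: one drive is held back
    as the spare, and each mirrored pair contributes one drive's capacity."""
    pairs = (num_drives - 1) // 2
    return pairs * drive_tb

def raidz2_tb(num_drives: int, drive_tb: float) -> float:
    """Single RAIDZ2 vdev: two drives' worth of parity, survives any two failures."""
    return (num_drives - 2) * drive_tb

print(mirrors_plus_spare_tb(5, DRIVE_TB))  # 2 mirrors + spare: ~8TB usable
print(raidz2_tb(5, DRIVE_TB))              # 5-wide RAIDZ2: ~12TB usable
```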

One thing to keep in mind is that you need to get your pool design right the first time, as the ability to change ZFS pool layouts is limited. The usual trade-off between performance, capacity and redundancy still applies, as does the old adage that RAID is not a backup.
 

iposner

Explorer
Joined
Jul 16, 2011
Messages
55
If all you've got is five matching disk drives and you want a hot spare, then how many drives do you want to lose to parity? I would consider RAIDZ1 plus a hot spare, which will leave you with three drives of usable space. Either that, or go with two sets of mirrored drives and a hot spare, which will give you a much faster rebuild time.

Personally, I would build all 5 drives as a RAID-Z1 and then set up a cloud sync job to back up the lot to a cloud blob store.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
I would consider RAIDZ1 plus a hot spare

This config does not make any sense... Do a RAIDZ2 in this case. A RAIDZ1 with a hot spare gives you 3 drives of usable space and will need to be resilvered whenever a disk fails. A RAIDZ2 would offer the same usable space from the same drives but would be "pre-silvered" in case of a failure.

RAIDZ1 is not considered good enough protection for drives bigger than 1TB. And because 1TB drives are not of much use today, RAIDZ1 is not of much use anymore either.

For the need described here, I would go with a single RAIDZ2 vdev of 5 drives. It will offer good redundancy and good usable space. It will be only a single vdev instead of two as with mirrors, but that should be enough here. With only the strict minimum of RAM, there is no need to shoot for the highest performance.
 
Joined
Jul 2, 2019
Messages
648
Hi @grl570810 - Welcome aboard. You're doing the right thing by asking questions first rather than trying to fix things later. :smile: Be sure to check out the Resources!

Here are some thoughts:
  • Motherboard:
    • The WiFi will not work to the best of my knowledge
    • Make sure you are not using any onboard RAID. ZFS needs direct access to the drives. If you check the forums, LSI cards are well supported. If you buy an LSI-based HBA, make sure that it isn't "fake" - there seem to be lots of them on eBay from China that are not genuine
  • ECC RAM:
    • You are doing the right thing - ZFS "RAID" is not a backup
    • That said, how important your data is will likely drive how much protection you put "on" it
    • Generally, RAM is also used for caching, so the more the better. I have 32GB, most of which is used for cache
    • MANY discussions in the forums on ECC :smile:
  • Drive layout: @Heracles beat me to it
One other thought on the motherboard: you might want to check the used market for a SuperMicro board that will allow ECC RAM and gives you IPMI (e.g., virtual KVM).
 

iposner

Explorer
Joined
Jul 16, 2011
Messages
55
The only question I have with RAIDZ2 and 5 drives is: what is the overhead of the additional parity on write performance?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
what is the overhead of the additional parity on write performance?

Not much, because by definition it is written to a disk that would otherwise have been sitting idle as a hot spare... so that disk is 100% free and dedicated to writing this info. As for CPU, there is no contest between CPU speed and HD speed...
 

iposner

Explorer
Joined
Jul 16, 2011
Messages
55
Interesting that you say this, as the performance of a conventional RAID6 array is poorer than that of a RAID5 array, which itself has far poorer write performance than read performance.

As the parity is striped across all drives in the set, any one logical write results in two physical writes on RAID5/RAIDZ1 and three on RAID6/RAIDZ2. With a small number of physical drives, this has to result in a significant performance hit. (Disclosure: I'm a high-performance database expert, and I would never suggest dual parity for a high-performance system.)
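The classic small-write arithmetic behind that concern can be sketched as follows (illustrative Python; this models the conventional read-modify-write parity-RAID cycle, which is not how RAIDZ actually writes, since ZFS writes full variable-width stripes):

```python
def small_write_ios(parity_drives: int) -> int:
    """Physical I/Os for one small logical write under the classic
    read-modify-write model: read the old data block and each old parity
    block, then write the new data block and each new parity block."""
    reads = 1 + parity_drives
    writes = 1 + parity_drives
    return reads + writes

print(small_write_ios(1))  # single parity (RAID5-style): 4 I/Os per small write
print(small_write_ios(2))  # dual parity (RAID6-style): 6 I/Os per small write
```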
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
conventional RAID6 array is poorer than a RAID5 array

ZFS is not RAID...

As the parity is striped across all drives in the set, any one logical write results in two physical writes on RAID5/RAIDZ1 and three on RAID6/RAIDZ2.

This is not true for ZFS. ZFS does not need to touch all drives for every single write the way conventional RAID does.

Disclosure - I'm a high performance database expert and I would never suggest dual parity for a high performance system

Never would I recommend RAIDZ for high-performance random access like a database. Such a use case requires mirrors.
 

grl570810

Cadet
Joined
Jan 3, 2021
Messages
4
Thanks guys for all your responses; they've been useful and I have been off doing more reading as a result. I believe I have it clear in my head now.
I'll just add a few comments and further responses to various points:

  • RAM: I'll try with the existing 8GB and see how it goes. The motherboard has two slots and a single 8GB module installed, so I can go to 16GB without any trouble if performance is inadequate. However intense the ECC debate may be, it's irrelevant in this case as the motherboard only supports non-ECC. :)
  • The WiFi will not work to the best of my knowledge: correct, but I don't care! The box is going to sit directly on the back of the Gigabit switch that is the 'hub' of the network in my 'computer room' (a.k.a. the basement garage).
  • Make sure you are not using any onboard RAID: there is none; in fact, the whole reason I found out about ZFS in the first place was that I looked for RAID in the BIOS and couldn't find it....
  • Disk layout: thanks to the replies I realise now I was on the wrong track initially; I hadn't clicked that the n in RAID-Zn refers to the number of parity disks - I thought they were different types of RAID, as in RAID5 or RAID6. My bad :( . So I am now going for the whole 5 disks as RAIDZ2, with heaps of monitoring / alerting to warn of any impending disk failures. As I said, I don't really need hot swap as high availability is not a necessity for me. @Heracles I very much agree with your view of backups: I operate with multiple redundant copies of data onsite that sync overnight to a temporarily powered-on external hard drive, plus a rotating group of three external hard drives where I bring one onsite roughly weekly, sync, then take it away again. They live at my daughter's, so not 400km away, more like 10, but after any disaster that can take out both the onsite and offsite copies I am unlikely to be around to worry!
Once again thanks all; my next challenge is getting the Jail for PMS up and running, but I'll start a separate thread if I have any troubles with that.
 