TrueNAS noob is joining the bandwagon - basic questions

alext

Cadet
Joined
Dec 18, 2023
Messages
5
Hi dudes and dudettes,

I have experience with Synology devices and RAID systems, and I am converting my multiple-Synology home setup to a single TrueNAS server.

I got an HP Z440 (20-core Xeon E5 with 128 GB ECC, 512 GB SATA SSD, 10 Gbit Mellanox SFP+) for reasonable money and I am looking for pool/vdev best practices.

There are 6 data drives, 3x 4 TB and 3x 6 TB. In my current setup these live on 2 separate Synology devices, but I will bring them all into this HP server.
The 3x 4 TB drives should be a RAIDZ1 (currently RAID5) for 8 TB total general file storage with multiple SMB shares.
The 3x 6 TB drives should also be a RAIDZ1 (currently also RAID5) for 12 TB total video file / editing storage with 1 SMB share and an iSCSI connection.

As far as I can see, I need to combine 3 drives into one vdev, so 2 vdevs total, but what about the storage pool?
These 2 vdevs should have no interaction, so do I need 2 separate pools? It's a bit confusing atm; I can't seem to find the right answer.

Thanks a bunch!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, in order to have separate uses, you would need not only 2 vdevs, but 2 separate pools with 1 vdev each.

But a more flexible method would be to combine those disks into 1 pool and create different ZFS datasets for the different uses.

I suggest you read up on ZFS, pool layouts, and other things. We, the forum members, have written some Resources on various subjects. Now, most don't apply to first-time users, but there are lots that do. See the Resources link at the top of any forum page.

You can also test out the TrueNAS GUI in VirtualBox or another VM solution. Create some small, fake disks and play with configurations.


In general, RAID-Z1 is not recommended for disks over 2 TB because of the long resilver times during a disk replacement. During that long window, another error may pop up on one of the remaining disks, causing data loss. But if you have good backups, it is a valid choice, as long as you know the risks.
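
To put a rough number on "long": here is a back-of-the-envelope sketch, assuming a best-case sequential rebuild at 150 MB/s (real resilvers on busy or fragmented pools run slower):

```python
# Lower bound on resilver time: the whole replacement disk must be
# rewritten, limited by sustained disk throughput (150 MB/s assumed).
def resilver_hours(capacity_tb, mb_per_s=150.0):
    return capacity_tb * 1e12 / (mb_per_s * 1e6) / 3600

for size_tb in (2, 4, 6):
    print(f"{size_tb} TB drive: ~{resilver_hours(size_tb):.0f} h minimum")
# 2 TB: ~4 h, 4 TB: ~7 h, 6 TB: ~11 h -- and those are best cases.
```

That is roughly half a day of heavy load on the surviving disks for a 6 TB member, which is exactly the window in which a second error hurts.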

ZFS does something somewhat unique. If a file has actually lost data, ZFS will tell you that file's name and path (zpool status -v lists the affected files). Then you can remove the bad file and restore it from backups. I've done this more than a dozen times on my media server, which uses a ZFS pool without redundancy for the media. (But its OS is mirrored!)

Further, ZFS considers metadata, like directory entries, important enough to keep 2 copies of, even on a single-disk pool or a RAID-Z1 vdev. This means block failures that result in data loss generally affect file data, not metadata.
 

alext

Cadet
Joined
Dec 18, 2023
Messages
5
Thank you for your extensive answer.
Although I understand the concept, I don't know why RAIDZ1 would not be recommended.
I have been running RAID5 with 1 parity drive on 3 disks on my NAS devices for years and years and never had an issue.

Ok, I used VirtualBox and created some virtual disks to test. As far as I can see, I get the following storage space results:

2 pools with 3 disks each in RAIDZ1 = 8 TB + 12 TB = 20 TB total (losing 4 TB in pool 1 and 6 TB in pool 2 to parity)
1 pool with 6 disks in RAIDZ1 = 20 TB total (losing 4 TB to parity and 6 TB to the unequal disk sizes)
1 pool with 6 disks in RAIDZ2 = 16 TB total (losing 8 TB to parity and 6 TB to the unequal disk sizes)
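
To double-check my math, I put the raidz arithmetic into a quick Python sketch (nominal sizes only; ZFS reserves some space for metadata, so real figures come out slightly lower):

```python
# Usable space of a raidz vdev: every member counts as the smallest
# disk, and `parity` disks' worth of space goes to parity.
def raidz_usable_tb(sizes_tb, parity):
    return (len(sizes_tb) - parity) * min(sizes_tb)

print(raidz_usable_tb([4, 4, 4], 1))           # 8  TB -> pool 1
print(raidz_usable_tb([6, 6, 6], 1))           # 12 TB -> pool 2 (20 TB total)
print(raidz_usable_tb([4, 4, 4, 6, 6, 6], 1))  # 20 TB -> single RAIDZ1
print(raidz_usable_tb([4, 4, 4, 6, 6, 6], 2))  # 16 TB -> single RAIDZ2
```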

It's a shame TrueNAS cannot use the "leftover space" on the 6 TB drives when I stripe all 6 and only use 4 TB of each disk. My Synology and Thecus NAS can do this no problem.

Also, I didn't realize that TrueNAS needs to install apps on a pool drive and can't use the boot drive, so I will add a separate SSD/M.2 for the app data. Learned something.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I have been running RAID5 with 1 parity drive on 3 disks on my NAS devices for years and years and never had an issue.
It's your data and you decide how much risk you want to take with it, but those of us who like our data don't use RAIDZ1 in order to avoid:
1. Being without redundancy (and error correction) with a failed disk situation
2. Pool loss if a second disk fails during resilver (high load on remaining pool member disks during resilver can be the catalyst for a failure... NB: not the same level of risk in Mirrors as resilvering is a simple copy, not involving head thrashing).

Go ahead and make your decisions, but be prepared to live with the consequences (have backups of the stuff you don't want to lose... RAIDZ isn't a backup).

It's a shame TrueNAS cannot use the "leftover space" on the 6 TB drives when I stripe all 6 and only use 4 TB of each disk. My Synology and Thecus NAS can do this no problem.
The methods employed to do that aren't exactly bulletproof, so are not included in ZFS.

Using "spare" or old hardware isn't a design factor for ZFS, so developers assume you will have appropriate hardware to bring to ZFS when using it... meaning all disks of equal size for a VDEV (or accepting temporary or permanent loss of additional space on larger disks in a VDEV).
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
The 3x 4 TB drives should be a RAIDZ1 (currently RAID5) for 8 TB total general file storage with multiple SMB shares.
The 3x 6 TB drives should also be a RAIDZ1 (currently also RAID5) for 12 TB total video file / editing storage with 1 SMB share and an iSCSI connection.
On top of the other excellent advice: the iSCSI share (block storage) would do much better on a mirror ("RAID 1" in the terminology you're transitioning from). And this mirror should best be kept at 50% use or under.

SMB shares and file storage can go on raidz# for space efficiency. Keep that pool under 80% use for performance.

Mind that you cannot move the filled drives from your previous Synology to TrueNAS. You have to back up, make your pool(s), wiping the drives in the process, and then restore the data.

A possible layout with that could be:
  • iSCSI pool: 2 * 6 TB in mirror = 6 TB of capacity, ~3 TB usable at the 50% rule
  • general storage pool:
    3 * 4 TB + 1 * 6 TB in a single raidz1 (not recommended, but it's your data) = 12 TB of capacity, ~10 TB usable at the 80% rule
    or 3 * 4 TB + 1 * 6 TB in a single raidz2 (safer) = 8 TB of capacity, ~6 TB usable
You can add more storage by adding more vdevs to the pool and/or by replacing the drives in a vdev with larger ones. ZFS has no issue with vdevs of different sizes (4*4 TB Z2 + 4*10 TB Z2 = 28 TB of capacity, ~22 TB usable) but it is not sympathetic to throwing a collection of mixed-size disks into a single vdev, and it does not allow changing the vdev type (raidz1 -> raidz2) after creation (which is why raidz2 from the start would be recommended with a view to later expansion).
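
To make the capacity vs. usable distinction concrete, here is a quick sketch of that mixed-vdev example (nominal sizes; note that truncation to the smallest disk happens per vdev, not per pool, and the 80% figure is a guideline, not a hard limit):

```python
# Pool capacity is the sum of its vdevs; each raidz vdev is limited by
# its own smallest member, so disks must match *within* a vdev while
# different vdevs may differ freely.
def raidz_vdev_tb(sizes_tb, parity):
    return (len(sizes_tb) - parity) * min(sizes_tb)

capacity = raidz_vdev_tb([4] * 4, 2) + raidz_vdev_tb([10] * 4, 2)
print(capacity)        # 28 (TB of capacity)
print(capacity * 0.8)  # 22.4 (~22 TB usable at the 80% guideline)
```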
 

rivimey

Dabbler
Joined
Dec 12, 2023
Messages
20
The point about Z2 or RAID6 that I learnt a while back has to do with error rates.

The exact uncorrectable error rates vary between disks and models. It used to be that the expected amount of data read between uncorrectable errors was very much larger than the capacity of 1 drive; that is no longer true. On some consumer disks, the error rate is such that you can expect 1 uncorrectable error simply by reading every sector of the drive a couple of times!

For example, take a RAID5 array and say 1 drive fails (let's say from old age). You're now down to a working array with no safety margin, and to resilver the new spare disk you need to read most blocks of every remaining drive without encountering any errors. This is sufficiently unlikely to work that many people trying it have seen data loss: not from the initial failure but from the subsequent resilver.

Hence the use of Z2 or RAID6. Look for specs where the expected amount of data read between uncorrectable errors is well above the drive capacity (say 20x or more), and remember the datasheet specs are just an expectation, not a firm guarantee.
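
As a worked example, assuming the datasheet rate applies uniformly (a simplification, but it shows the scale of the problem):

```python
# Probability of at least one unrecoverable read error (URE) while
# reading `tb_read` terabytes, for a spec of 1 URE per `spec_bits` read.
def p_ure(tb_read, spec_bits):
    bits_read = tb_read * 8e12
    return 1 - (1 - 1 / spec_bits) ** bits_read

# Resilvering a 3x 6 TB RAIDZ1 means reading both surviving disks: 12 TB.
print(f"{p_ure(12, 1e14):.0%}")  # ~62% with a consumer 1-per-10^14 spec
print(f"{p_ure(12, 1e15):.0%}")  # ~9%  with an enterprise 1-per-10^15 spec
```

With Z2, a single URE during the rebuild is still correctable, which is the whole point.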

Whether you have personally seen this cause of failure or not is basically down to luck.
 

alext

Cadet
Joined
Dec 18, 2023
Messages
5
Thanks for all the comments.

Am I correct that TrueNAS can increase a vdev's size?

If I start off creating a 3x 4 TB + 3x 6 TB RAIDZ2 (treated as 6x 4 TB, so 16 TB usable), can I then replace the 4 TB drives one by one later, to get a total of 6x 6 TB?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Am I correct that TrueNAS can increase a vdev's size?

If I start off creating a 3x 4 TB + 3x 6 TB RAIDZ2 (treated as 6x 4 TB, so 16 TB usable), can I then replace the 4 TB drives one by one later, to get a total of 6x 6 TB?
Correct.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Correct. You can add further vdevs and/or replace existing drives with larger ones. You cannot widen raidz# vdevs (6-wide -> 7-wide) yet (the feature is planned, but do not count on it any time soon). You cannot, and will not be able to, change the raidz level (raidz2 -> raidz3).
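
Applied to your plan, the arithmetic works out like this (nominal sizes, before ZFS overhead):

```python
# Same raidz arithmetic as earlier in the thread (nominal sizes).
def raidz_usable_tb(sizes_tb, parity):
    return (len(sizes_tb) - parity) * min(sizes_tb)

print(raidz_usable_tb([4, 4, 4, 6, 6, 6], 2))  # 16 TB today
print(raidz_usable_tb([6, 6, 6, 6, 6, 6], 2))  # 24 TB after the upgrade
# The extra space only appears once the *last* 4 TB drive has been
# replaced (with autoexpand enabled), not incrementally per drive.
```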
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
The point about Z2 or RAID6 that I learnt a while back has to do with error rates.
The trouble is that real error rates vs. advertised error rates often seem to have more to do with market segmentation than with reality.

As seen with the WD SMR "NAS" drive debacle, it is marketing / finance in charge of engineering, with predictable results. All these drives are made in the same facilities from the same basic stock. Firmware deltas can, however, lead to real differences in how long a drive lives in a NAS and how suitable it is for NAS use.
 

alext

Cadet
Joined
Dec 18, 2023
Messages
5
Ok, I listened to the comments and changed the setup.

1. Crucial MX500 250GB as boot drive
2. 2x Crucial MX500 500GB as app pool, in mirror mode
3. The 2x 4 TB Seagate Barracuda SMR drives are out
4. I am keeping the 2x 6 TB WD Purple CMR drives (only 2 years old with barely any writes) and adding 3 new ones of the same model (WD63PURZ), for a total of 5x 6 TB, and I am going to use RAIDZ2. About 18 TB of total usable space will be fine for a while.
5. I also got an LSI 9300-8i 12 Gb/s HBA for the spinning drives; the SSDs will be on the motherboard on 2 separate controllers

Let's see if this will give me the result I am looking for. :)
 

rivimey

Dabbler
Joined
Dec 12, 2023
Messages
20
I would suggest not buying more Purple drives. The point of NAS drives is that they are more tolerant of running in groups in a common chassis (vibration etc.) and they report errors faster, letting the NAS's error recovery kick in earlier. Surveillance drives will be fine for 24/7 operation but will try harder to silently recover from errors, masking them from ZFS and the kernel.

Whatever drives you use, mix the new and old up if possible (avoid putting drives of the same age and batch in the same redundancy group), so that if they fail for age-related causes they don't all take the array down together.

If/when you extend the set, I would recommend adding a new vdev alongside the existing one, e.g. a mirror or another raidz2 of some shape; replacing all the drives with larger ones one at a time will stress the array.

I decided a while back to go for "striped" mirror pairs, which loses some capacity but means I can extend the set with minimal additional hardware. So I have vdev1 = 2x 7.275 TB, vdev2 = 2x 7.275 TB, vdev3 = 2x 7.28 TB, on 6 nominally-8 TB drives. Whether that is sensible, I guess time will tell.
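
(In case those 7.275 figures look odd: that is almost certainly just a nominal 8 TB drive expressed in binary units, minus a little partition overhead, e.g.:)

```python
# Drive vendors quote decimal terabytes; most tools report binary tebibytes.
print(8 * 1e12 / 2**40)  # 7.2759... -> the "7.275 TB" reported per disk
```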
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
Meh. I have HGST He10 drives of varying ages in my pool, and to me this is a feature, not a bug (I bought them used from goharddrive.com). With drives of varying ages, failures are more likely to be random and not clustered in a short time span, because any age-related issues with a given batch are diluted across drives that were sold to me at varying ages.

Whether Purple drives are a good choice is a different matter. I was under the impression that TrueNAS played nicely with consumer-grade drives in general by accepting the waits as drives go through their TLER-, ERC-, CCTL-related recovery states. Some drives even allow those responses to be modified.

Either way, pool performance may suffer a bit on occasion (since errors are usually pretty infrequent in performant drives), but the issue is not nearly as dramatic as SMR drives holding the entire pool hostage while they flush their CMR cache to SMR sectors.
 

alext

Cadet
Joined
Dec 18, 2023
Messages
5
I have read lots of articles, including on this forum, saying that the WD Purple drives are pretty much trouble-free and work just fine with TrueNAS. At some point I just want to stop researching forever. They are all mounted in anti-vibration brackets and have 2x 140 mm fans blowing directly over all 5 of them, so I'm not too worried. I'll just go with it.
 