I'm about to start my TrueNAS adventure, and apart from some hardware- and cache-related questions, my biggest headache right now is the ZFS pool and vdev layout. I have noticed in the signatures of several experienced (by tenure and post count) members of this forum that they use pools built from multiple mirrored-pair vdevs. So I'm curious: is that really the best layout for my use case?
Basically I have a beefy server with 16 cores and 256 GB of registered ECC DDR4, connected over a 10 GbE network, and I want to store:
1) Personal + work/business document files (frequently used on a daily basis; critical, I do not want to lose this)
2) Personal photos/videos backed up from the devices of all my family members (weekly dumps, infrequent reads; critical, I don't want to lose this)
3) Plex media server with 1080p movies and TV shows (sequential reads of big files; non-critical, I will be fine if I lose this data)
4) SMB and NFS shares for VMs I create in my hypervisor (random reads/writes; non-critical, I will be fine if I lose this data)
I now have 8 new drives, 16 TB each, and I'm considering 3 layouts (rough capacity math in the sketch after the list):
A) 1 x RAIDZ3 vdev with all 8 drives (usable capacity of 5 drives, 3 for parity)
B) 2 x RAIDZ2 vdevs with 4 drives each (usable capacity of 4 drives, 4 for parity)
C) 4 x mirrored pairs with 2 drives in each pair (usable capacity of 4 drives, 4 for redundancy)
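For my own sanity, here is the back-of-envelope capacity math I'm working from (raw TB, before ZFS overhead, padding, and the usual advice not to fill past ~80%, which I'm deliberately ignoring here):

```python
# Rough usable-capacity comparison of the three layouts I'm considering.
# Assumptions: 16 TB drives, no accounting for ZFS overhead or recommended free space.
DRIVE_TB = 16
layouts = {
    "A) 1 x 8-wide RAIDZ3": {"vdevs": 1, "width": 8, "parity": 3},
    "B) 2 x 4-wide RAIDZ2": {"vdevs": 2, "width": 4, "parity": 2},
    "C) 4 x 2-way mirrors": {"vdevs": 4, "width": 2, "parity": 1},  # "parity" = redundant copy here
}
for name, l in layouts.items():
    data_drives = l["vdevs"] * (l["width"] - l["parity"])
    redundancy  = l["vdevs"] * l["parity"]
    print(f"{name}: {data_drives * DRIVE_TB} TB usable, "
          f"{redundancy} drives' worth of redundancy")
```

So A gives me the most space (80 TB vs 64 TB for B and C), but only a single vdev.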
Along with that I will add a special vdev for metadata as a 2-drive mirror. I found some Intel Optane drives rated for 10 DWPD over 5 years for this purpose. I will not add any L2ARC or SLOG, since I feel I have enough RAM.
The goal is to get the best read/write performance over 10 GbE now and 25 GbE in the future. I have found some online RAIDZ calculator tools, but they all say that no matter which RAIDZ layout I choose there will be no write speed gain, only possible read speed gains. Is that true?
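Here is the naive performance model I've been using to reason about it. The per-drive numbers and scaling rules are my own assumptions (streaming speed scales with the drives actually carrying data, random IOPS scale roughly per vdev), so please tell me if this mental model is wrong:

```python
# Idealized, best-case throughput model -- NOT benchmarks, just my assumptions:
#   ~250 MB/s sequential and ~200 random IOPS per 16 TB HDD,
#   sequential speed ~ number of drives streaming data,
#   random IOPS ~ number of vdevs (each vdev behaves roughly like one drive).
SEQ_MBPS, DRIVE_IOPS = 250, 200

def raidz_pool(vdevs, width, parity):
    data = width - parity
    return {"seq write MB/s": vdevs * data * SEQ_MBPS,
            "seq read MB/s":  vdevs * data * SEQ_MBPS,
            "random IOPS":    vdevs * DRIVE_IOPS}

def mirror_pool(pairs):
    return {"seq write MB/s": pairs * SEQ_MBPS,      # each pair writes one copy at single-drive speed
            "seq read MB/s":  pairs * 2 * SEQ_MBPS,  # reads can be served from both sides of a pair
            "random IOPS":    pairs * DRIVE_IOPS}

print("A:", raidz_pool(1, 8, 3))
print("B:", raidz_pool(2, 4, 2))
print("C:", mirror_pool(4))
```

By this (possibly wrong) model, all three layouts get close to saturating 10 GbE (~1250 MB/s) for big sequential files, and the real difference shows up in random IOPS for the VM shares, which favors more vdevs.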
I also do not quite understand: if, say, 1 drive fails in one of the vdevs in each of the above A, B, or C layouts, which layout has the higher chance of a second drive failure during the rebuild? To me, mirrored pairs seem the scariest, since only the 1 surviving drive of the failed pair does all the heavy lifting to rebuild the data onto the newly added drive, whereas in the wider 8-drive or 4-drive vdevs the rebuild load is distributed among several drives.
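To try to put numbers on that fear, I did the naive math below. Big caveats: I'm assuming failures are independent, guessing an annual failure rate, and guessing resilver times (mirror resilvers are usually shorter than RAIDZ ones since they're a straight copy), so this is only to sanity-check my intuition:

```python
# Naive chance that at least one more drive in the degraded vdev dies during resilver.
# Assumptions: ~3% annual failure rate per drive, independent failures,
# guessed resilver durations; real resilver time depends on pool fullness and load.
AFR = 0.03

def p_extra_failure(drives_at_risk, resilver_hours):
    p_one = AFR * resilver_hours / (365 * 24)   # chance one given drive dies in that window
    return 1 - (1 - p_one) ** drives_at_risk    # chance any of the remaining drives dies

print(f"C) mirror pair, 1 drive at risk, ~20 h: {p_extra_failure(1, 20):.4%}")
print(f"B) 4-wide Z2,   3 drives at risk, ~40 h: {p_extra_failure(3, 40):.4%}")
print(f"A) 8-wide Z3,   7 drives at risk, ~60 h: {p_extra_failure(7, 60):.4%}")
```

Of course, a second failure only means data loss for the mirror pair, while the degraded RAIDZ2 vdev could still survive one more failure and RAIDZ3 two more, so I'm not sure how to weigh the raw probabilities against the remaining redundancy.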
The mirrored-pairs layout also sounds tempting because it is the easiest to scale: just add 2 drives every time the pool reaches a certain fill level. For example, once it gets to about 50% of capacity I would add another pair. I do not want to wait until it reaches 70-90% full, since by then the data will not be evenly distributed among the mirror vdevs and new writes will favor the new, empty drives, hurting read/write performance.
Please correct my assumptions, since I feel like I could be wrong.