There, fixed that for you...
The point of all that: never trust any hardware.
Note that the following use the same number of parity disks, six. But the RAID-Z2 layout will get slightly better read and write performance, as it's 3 vDevs, each with one less parity disk:
- 3 vDevs of 8 drives - RAID-Z2
- 2 vDevs of 12 drives - RAID-Z3
However, since writes would be less important for a mostly-read media server, you might keep that in mind. I also like free slots and/or warm spares. So, something like this:
- 1 vDev of 11 drives - RAID-Z3
- 1 vDev of 12 drives - RAID-Z3
- Warm spare, or free slot and cold spare
You may even be able to get away with RAID-Z2. It's pushing the limit for RAID-Z2 at 11/12 disks, but with a warm spare ready to go, that helps reduce the risk.
Please note that (slightly) mis-matched vDevs are allowed and won't impact performance much, especially for the home or non-business use case.
Note: what I define as a warm spare is a disk installed in the server, ready to replace a failed disk, but requiring operator intervention. Hot spares don't require operator intervention. And while hot spares are supported by ZFS, that feature is generally not used, partly because you can't tell ZFS which vDev is more important to use a hot spare disk for when you have failed disks in multiple vDevs.
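To make the spare terminology concrete, here's a rough sketch of both workflows; the pool name (tank) and device names (da5, da23) are hypothetical, but the commands themselves are standard ZFS:

```
# Warm spare: the disk is already in a bay, but you issue the replace yourself.
zpool status tank            # identify the failed disk, e.g. da5
zpool replace tank da5 da23  # resilver onto the installed spare da23

# Hot spare: ZFS kicks in a designated spare automatically on failure.
zpool add tank spare da23    # mark da23 as a hot spare for the pool
```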
I'd still recommend 3x 8-way RaidZ2
But I guess you have a few options:
- 12x mirrors
- 4x 6-way RaidZ2
- 3x 8-way RaidZ2
- 3x 8-way RaidZ3
- 2x 12-way RaidZ3
For 12, 8, 6, 9, and 6 disks of parity, respectively.
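If you want to sanity-check those counts yourself, a quick shell helper does the arithmetic (layout figures only; real usable space will be somewhat lower after ZFS overhead):

```
# Data/parity disk counts for N vdevs, each W disks wide with P parity disks.
layout() { echo "$1x ${2}-wide (P=$3): $(($1 * ($2 - $3))) data, $(($1 * $3)) parity"; }
layout 12 2 1   # 12x mirrors
layout 4 6 2    # 4x 6-way RAID-Z2
layout 3 8 2    # 3x 8-way RAID-Z2
layout 3 8 3    # 3x 8-way RAID-Z3
layout 2 12 3   # 2x 12-way RAID-Z3
```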
I've seen quite a few stories where people who went with massive wide RaidZ3 have ended up converting to narrower RaidZ2 because of performance problems. I specifically refer to @cyberjock and @Chris Moore (I think).
I vaguely remember reading somewhere that cyberjock had problems with either a 12 or 14 disk Raid-Z3.
How do you find the performance of the 3 x 8 disk Raid-Z2?
I have no complaints. I actually just received an X550-T2 10gbe card to add to my system, so I'll be playing with 10gbe benchmarking in the near future ;)
Oh, I'm only on 2 vdevs at the moment. Built it with one, extended to two in June. Plan to extend to three in a year or two based on data growth.
So, I have 8 bays free.
I did test 6x 8TB drives in the spare bays recently as a separate pool for thermal testing. End result: I need to improve cooling, so I have a little bit of maintenance planned. I will add a P3700 for SLOG, the 10GbE card, the Noctua industrial fans, and a couple of SSDs for boot, etc., and turn it into an ESXi/FreeNAS box. The pool itself will not change.
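For what it's worth, attaching the P3700 as a SLOG is a one-liner once it shows up as an NVMe device; nvd0 below is just a guess at the device name, so check yours first:

```
# Attach a separate log (SLOG) device to the pool; this only helps sync writes.
zpool add tank log nvd0
zpool status tank        # the device should now appear under "logs"
```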
Been testing the upgrade in my mini Node 304/XeonD rig which is like a baby version of it :)
That sounds awesome :)
10gbe is really something! Will you be purchasing a 10gbe switch? I cannot recommend the Netgear ProSafe XS series enough.
Yes, that was me. I had set up a 12 drive RaidZ3 pool (single vdev) and found the performance to be too slow to even keep up with wire speed on a 1-gig network. I ended up backing all my data up, destroying the pool, and making two vdevs of 6 drives each in RaidZ2. Performance is much better. More vdevs generally equates to more IOPS, which generally equates to better perceived performance, especially with random IO or small files, but it still helps when dealing with larger files like the media (photos and videos) that is my primary data at home.
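A rough sketch of that rebuild from the command line, with hypothetical pool and disk names, and assuming the backup has been verified first:

```
# DANGER: destroys the pool and everything in it -- only after a verified backup.
zpool destroy tank

# Recreate as two 6-wide RAID-Z2 vdevs instead of one 12-wide RAID-Z3.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11
```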
Thanks for the insight. So it looks like I might be better off going with Raid-Z2 for my purpose.
Any recommendations for/against the following?
I'd rather decide on the most suitable RAID layout from the beginning, so hopefully I only have to copy files to the server once.
- 4 way 6 disk Raid-Z2
- 3 way 8 disk Raid-Z2
The 4 way 6 disk Raid-Z2 seems a bit too much parity. I'd go with 3 separate vDevs, but still one pool:
You could go with 2 free slots as well, if you want both a warm / hot spare, and a free slot for putting in / removing a backup disk.
- 7 disk Raid-Z2
- 8 disk Raid-Z2
- 8 disk Raid-Z2
- Free slot for warm spare or backup disk.
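Mis-matched vDev widths like that are perfectly legal in ZFS. As a sketch with made-up device names, the pool would be built something like this:

```
# One pool, three RAID-Z2 vdevs of slightly different widths (7 + 8 + 8 disks),
# leaving the 24th bay free for a warm spare or a backup disk.
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 \
  raidz2 da7 da8 da9 da10 da11 da12 da13 da14 \
  raidz2 da15 da16 da17 da18 da19 da20 da21 da22
```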
8 disks in RAID-Z2 is what I did. I currently have 2 vdevs, and I plan on putting in one more vdev.
I would go with the 4 way 6 disk RAIDz2, which is what I had planned to work up to by adding additional vdevs as my storage need increased.
One comment, and that is to test and burn in those drives you have sitting on the shelf. It would be even worse to replace a bad drive with a bad drive.
I think having more parity is a good thing to reduce the possibility of a fault taking the pool out.
The statement from @Arwen about too much parity is due to the fact that each of the 4 vdevs gives you 2 drives' worth of parity data, so 8 of the drives would effectively be dedicated to redundancy, versus 6 with the other suggestion.
At the same time, Arwen is suggesting having an online spare. I have cold spares sitting on the shelf instead of having them in the server getting old and accumulating power-on hours. With your drives being 8TB each, it would be about a 14TB difference in available storage between the two options. I would keep a couple of spares on hand (sealed bag on the shelf) instead of having a spare burning power in the server. When a drive fails, you can remove it and put the new drive in where the old drive came out. I have tested it both ways, and a vdev does not resilver faster with the failed drive still in the pool. If you have a RAIDz2 pool, it is best to just take the failed drive out, put a new drive in, and resilver. The recovery is faster than trying to do it with the failed drive in place.
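In practice, that pull-and-replace flow looks roughly like this; device names are hypothetical, and zpool replace with a single device argument resilvers onto whatever new disk now occupies that slot:

```
zpool offline tank da5   # take the failed disk out of service
# ...physically swap the drive in that bay...
zpool replace tank da5   # resilver onto the new disk in the same slot
zpool status tank        # watch resilver progress
```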
Still, I don't fault the 8 drive option, and with all three vdevs populated, I think it would be acceptably fast even with your 10GbE network.
I kind of feel like I rambled a bit there. I hope it makes sense.
If performance is more important than capacity, go with 4 vdevs; if capacity is more important, 3.
Considering the server will only have media such as movies, music, and photos, I am leaning towards a 3 way 8 disk Raid-Z2.
My 2 HTPCs running OpenELEC will be connecting to the server via SMB shares, so I'm hoping the 3x8 will be plenty to play uninterrupted media to the HTPCs.
Another question I have from my research: should I change the record size to 1M?
Record size can make a difference. I have mine set to 1M.
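For reference, record size is a per-dataset property; a minimal sketch with a hypothetical dataset name (note it only affects data written after the change):

```
# 1M records suit large sequential media files; needs the large_blocks pool feature.
zfs set recordsize=1M tank/media
zfs get recordsize tank/media   # verify
```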
However, how do I create the vdevs inside the pool (3 vdevs x 8 disks)?
Three rows of eight disks.
Also, do I have the order of configuration correct? Pool creation -> dataset creation -> vdev creation
No. The pool is the stripe of all vdevs. You can add vdevs at any time (but you cannot remove them, ever).
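If you're curious what the GUI is doing underneath, the equivalent commands look roughly like this; device names are illustrative, and on FreeNAS you'd normally let the volume manager handle it:

```
# Create the pool with three 8-disk RAID-Z2 vdevs in one shot...
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23

# ...or start with one vdev and extend later; vdevs can be added but never removed.
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15
```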