ZPool Design


cfendya

Dabbler
Joined
Jul 8, 2013
Messages
10
Hello everyone, noob here, wanting to make sure things are designed appropriately before committing to any configuration.

I am looking for the suggested practice for laying out vdevs and zpools. I will have 8 x 2TB drives, and my initial thought was to split them into two vdevs of 4 drives each in a RAID-Z1 configuration. These two vdevs would then make up a single zpool providing 12TB of usable space.

My thinking was future growth: the ability to expand the zpool in smaller chunks (4 drives at a time) versus one huge 6-drive vdev in a RAID-Z2 configuration. I know the suggested practice, where possible, seems to be RAID-Z2, but I figured that with my scenario above, splitting things into smaller vdevs would give me the same protection as one large RAID-Z2 vdev, plus allow for easier expansion in the future.

Would there also be any benefit from an I/O perspective with this configuration? I'm thinking the comparison would be similar to RAID 5+0 versus RAID 6.
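If I understand the zpool syntax correctly, the layout I'm describing would be built something like this (ada0-ada7 are just placeholder device names; I'd actually do this through the FreeNAS GUI):

    # One pool made of two 4-disk RAID-Z1 vdevs; ZFS stripes writes across both vdevs.
    zpool create tank raidz1 ada0 ada1 ada2 ada3 raidz1 ada4 ada5 ada6 ada7

    # Each vdev contributes 3 x 2TB of usable space, so 2 x 6TB = 12TB total.
    zpool list tank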

Thanks in advance for any guidance and suggestions.
 

cfendya

Dabbler
Joined
Jul 8, 2013
Messages
10
To add to the above, I put together a disk layout scheme I've been thinking about. Again, I'm trying to figure out what others are doing and what makes the most sense from both a scalability and a performance perspective. This will mostly be used to serve up media (music, movies, and pictures).

[Attachment: disklayout.png — proposed disk layout options]
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
RAIDZ1 is somewhat considered "dead". The uncorrectable error rates for modern hard drives are high enough that, with a RAID array (and zpool), the likelihood of hitting an error while one disk is failed is quite high. As far as I'm concerned, any vdev or zpool that has only 1 disk of redundancy is borderline unreliable. Quite a few people on this forum have lost significant amounts of data because 1 disk failed and, during resilvering, another threw lots of errors. That issue can be mitigated with RAIDZ2. With that in mind, I wouldn't consider the 1st, 3rd, or 4th option, as those only have 1 disk of redundancy.

My recommendation is #2 (6x RAIDZ2). If you plan to use the server for home use, a RAIDZ2 of 8 drives should be fine too.

Remember that adding disks individually while maintaining parity isn't possible, so plan ahead, and if you aren't sure how much space you will need in the future, go big. Also keep in mind that you should try to keep a zpool less than 80% full at all times, and there will be space "lost" to the decimal-to-binary conversion of what a terabyte is, etc. If you build the array and then find yourself running out of space later, it's not cheap to "upgrade" to a much larger pool.
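For reference, option #2 boils down to something like this at the shell (hypothetical device names; the FreeNAS volume manager does the equivalent for you using gptids):

    # 6-disk RAIDZ2: 4 disks of data + 2 of parity; survives any 2 disk failures.
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5

    # Rough usable space: 4 x 2TB = 8TB raw, which the OS reports as about
    # 4 x 1.82TiB = ~7.3TiB. Staying under 80% full means planning on ~5.8TiB of data.
    zpool status tank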
 

cfendya

Dabbler
Joined
Jul 8, 2013
Messages
10
Thanks for the reply, cyberjock! Your suggestions and the ppt deck you put together have been instrumental in learning about FreeNAS as well as figuring out what's best for my situation.

My only knock, or should I say con, of FreeNAS is its lack of ability to extend or grow a vdev. Creating a new, similarly-sized vdev can be expensive, so I'm not sure any form of upgrading is "cheap".

Isn't the recommended practice to keep vdevs within the same zpool similar in size?

Perhaps you're suggesting swapping out the 2TB drives for larger ones within the same vdev? If so, would you have to replace them all in the same time frame, or is running a mix of drive capacities for long periods supported?

It seems as though you could potentially create hotspots within the zpool by adding drives of mixed capacity and running that way for extended periods. I may be overthinking it for my home environment, though :) Maybe this belongs in the OCD section? :)

Thank you again!!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
cfendya said:
> My only knock, or should I say con, of FreeNAS is its lack of ability to extend or grow a vdev. Creating a new, similarly-sized vdev can be expensive, so I'm not sure any form of upgrading is "cheap".
>
> Isn't the recommended practice to keep vdevs within the same zpool similar in size?
>
> Perhaps you're suggesting swapping out the 2TB drives for larger ones within the same vdev? If so, would you have to replace them all in the same time frame, or is running a mix of drive capacities for long periods supported?
>
> It seems as though you could potentially create hotspots within the zpool by adding drives of mixed capacity and running that way for extended periods. I may be overthinking it for my home environment, though :)

That's actually not a limitation of FreeNAS. The limitation is with ZFS.

It is preferred that the vdevs be kept to a similar number of disks. Disk size doesn't matter much.

I was referring to upgrading in any fashion. If you have to upgrade by replacing all of your disks with bigger disks, that's expensive. If you choose to create a new zpool, that can be expensive too. There is no cheap way to upgrade a zpool once it's made, so it's important to make it right and make it big enough to keep you happy for as long as possible.

I have no idea what you mean by "hotspots". Any read or write to the zpool will require every single disk in that vdev to be read and/or written to.
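To be clear about what kind of expansion ZFS does support: you grow a pool by adding a whole new vdev, which then gets striped with the existing ones. Roughly (hypothetical device names):

    # Add a second 6-disk RAIDZ2 vdev to the existing pool. Note this can't be
    # undone, and you can't add single disks to an existing RAIDZ vdev.
    zpool add tank raidz2 ada6 ada7 ada8 ada9 ada10 ada11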
 

cfendya

Dabbler
Joined
Jul 8, 2013
Messages
10
Makes sense about ZFS... Perhaps it's on the roadmap somewhere ;)

If disk size doesn't matter much within a vdev, can a vdev be upgraded by replacing one disk at a time over a spread-out period? Meaning I don't have to replace them all at the same time and can gradually replace them as space is required. I understand that performing the disk upgrade also means ensuring the parity disks are watched closely and upgraded as needed.

By hotspots I basically mean one disk being used more than the others because of the data that lives on it. This relates more to I/O than to space. My thought was: if I follow the upgrade path I just outlined and have, let's say, 6 x 2TB drives and in a year upgrade one of those to a 4TB, that 4TB will eventually hold more data than the others, so it would be serving more I/O than the others. Again, probably not the biggest issue in my situation; it's more a curiosity question on my part.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
cfendya said:
> Makes sense about ZFS... Perhaps it's on the roadmap somewhere ;)

It's not... LOL.

cfendya said:
> If disk size doesn't matter much within a vdev, can a vdev be upgraded by replacing one disk at a time over a spread-out period? Meaning I don't have to replace them all at the same time and can gradually replace them as space is required. I understand that performing the disk upgrade also means ensuring the parity disks are watched closely and upgraded as needed.

But you can't use the extra space until every single disk has been replaced. So why not just buy all of the disks at the same time instead of one a month over six months or whatever? You'll surely pay more for the first disk you buy than for the last.
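For the record, the disk-at-a-time path looks roughly like this at the shell (hypothetical device names; FreeNAS exposes the same thing as a "replace" operation in the GUI):

    # Let the pool grow automatically once the last disk is swapped.
    zpool set autoexpand=on tank

    # For each disk, one at a time: replace it, then wait for the resilver to
    # finish before touching the next one.
    zpool replace tank ada0 ada6
    zpool status tank   # shows resilver progress

    # Only after ALL disks in the vdev are 4TB does the extra space appear.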

cfendya said:
> By hotspots I basically mean one disk being used more than the others because of the data that lives on it. This relates more to I/O than to space. My thought was: if I follow the upgrade path I just outlined and have, let's say, 6 x 2TB drives and in a year upgrade one of those to a 4TB, that 4TB will eventually hold more data than the others, so it would be serving more I/O than the others.

Nope, it doesn't work like that. As I said above, the disks in a vdev have the data evenly distributed (including parity). Any given write to the volume will require disk access on all of the disks.

You can't upgrade one disk to 4TB and get more space. You're always limited to the smallest disk in a vdev. So until all of the disks are 4TB you won't see any increase in space, and the unused space on the 4TB can't be used for anything else. :(
 

cfendya

Dabbler
Joined
Jul 8, 2013
Messages
10
Noted on all points; I have a much better understanding of things now. Thanks again for all the great info.

You should suggest the ZFS expansion thing to someone up the chain. Enterprise offerings support this type of expansion, so it shouldn't be "that" difficult... Note: I'm no developer, so please take that statement with a grain of salt! :)

I suppose it makes sense that you can't take advantage of that new drive in a vdev, given the confines of how RAID works, but I don't like it, as it becomes extremely expensive to upgrade that single vdev or add a second vdev when the time comes.

That said, and based on my situation here at home, I think what makes the most sense is going with the last option: two separate small vdevs in a RAIDZ1 config. Again, I totally understand your logic, but not knowing my upgrade path today, or the current price points of 2TB vs 4TB drives, I feel that having smaller vdevs means less risk of multiple drives failing within the same vdev.

Like most things, there are good and bad sides to everything, and it's up to each of us to take on the calculated risk that's best for us as individuals ;)

Thank you again and hopefully this thread is helpful to another n00b!
 