Noob questions about Volumes/vdevs correlations and safety

Status
Not open for further replies.

floc

Cadet
Joined
Jul 15, 2017
Messages
7
Hi guys,
this is my first experience with Freenas and I'm thrilled by the features it offers.

I've read all the docs (section 8 in particular), the beginner's guide, and the ZFS primer, but I still have some questions before the "live test". I think I already have my answers, but I need to be 100% sure.

Scenario: I'm going to create a Volume (aka "Storage Pool") made up of 5 different RAIDZ1 vdevs, each of them created with 3 physical HDDs (a different size for each RAIDZ1, but the same size for the HDDs within each stripe).

1st:
Safety issues. I presume that this configuration protects the volume from the failure of at most 1 disk in each RAIDZ1 of the pool (a total of 5 disks in the worst case, as long as the failed disks are each in a different stripe). Is this correct?

2nd:
Even if this is correct, I'm not that confident about this approach, because if I lose 1 stripe (i.e. in the unfortunate event of losing 2 disks in a single RAIDZ1) I'm going to trash my ENTIRE storage pool. Is this assumption correct? (Or can I recover data from the healthy surviving stripes? According to the "noob guide", no, the volume will be totally destroyed.)

3rd (interesting only if 2nd is false, which is unlikely):
How does ZFS spread my data across the different stripes? Does it fill the first stripe added and then write to the second? Does it do any kind of balancing between all the stripes?

4th (valid only if 2nd is false, which is unlikely):
I'm going to create multiple datasets; is there a way to "force" the system to use a specific stripe for each dataset? If so, it could provide the flexibility advantages of "one storage pool" (i.e. variable dataset sizes) along with the option to choose the "most reliable" stripe for critical data (I have some old disks that are more prone to failure).

In conclusion, if the 1st and 2nd assumptions are true as I suspect, I'm going to create a different volume for each RAIDZ1. If a stripe fails, I'm not going to lose all my data. Correct? :D


Thank you
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
If you don't mind me asking... what size are your 15 drives?
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Scenario: I'm going to create a Volume (aka "Storage Pool") made up of 5 different RAIDZ1 vdevs, each of them created with 3 physical HDDs (a different size for each RAIDZ1, but the same size for the HDDs within each stripe).
The five different RAIDz1 "groups" are called Vdevs (virtual devices). The Vdevs are striped together to
form the pool or volume. Lose 2 drives in one RAIDz1 Vdev, pool data is gone!!!
Build a volume for fun, destroy it and build it a different way. Play with it, break it by pulling a data cable
and then learn to recover by replacing a drive. Read the manual cover to cover. Reread the noobs guide.
Commit to memory the Terminology and Abbreviations Primer by @jgreco
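For illustration, the difference between one striped pool and five independent pools can be sketched with zpool commands (device names like da0 and the pool names are placeholders, and FreeNAS would normally build these volumes through the GUI rather than the CLI):

```shell
# One pool striped across five RAIDZ1 vdevs -- two failed drives in any
# single vdev destroy the whole pool:
zpool create tank \
  raidz1 da0 da1 da2 \
  raidz1 da3 da4 da5 \
  raidz1 da6 da7 da8 \
  raidz1 da9 da10 da11 \
  raidz1 da12 da13 da14

# Five independent pools -- a double failure destroys only that one pool:
zpool create tank1 raidz1 da0 da1 da2
zpool create tank2 raidz1 da3 da4 da5
# ...and so on through tank5
```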
 

floc

Cadet
Joined
Jul 15, 2017
Messages
7
If you don't mind me asking... what size are your 15 drives?

Mixed: 1.5, 2 or 3 TB.

The five different RAIDz1 "groups" are called Vdevs (virtual devices). The Vdevs are striped together to
form the pool or volume. Lose 2 drives in one RAIDz1 Vdev, pool data is gone!!!
Build a volume for fun, destroy it and build it a different way. Play with it, break it by pulling a data cable
and then learn to recover by replacing a drive. Read the manual cover to cover. Reread the noobs guide.
Commit to memory the Terminology and Abbreviations Primer by @jgreco

Thank you, all clear. I understood right and assumptions 1 and 2 are correct.

I already did the replacement test, pulled cables, and did other bad things to my test box. I played with a vdev, but I only have 3 disks to play with; the real 15 are full of data on my Windows server. That's why I'm asking :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Assumptions 1 and 2 are correct, as you say. Rather than five separate RAIDZ1 vdevs, consider a smaller number of RAIDZ2 vdevs.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
the real 15 are full of data on my windows server. That's why I'm asking
Are the drives on the Windows machine in RAID arrays, or are they single disks?
 

floc

Cadet
Joined
Jul 15, 2017
Messages
7
Assumptions 1 and 2 are correct, as you say. Rather than five separate RAIDZ1 vdevs, consider a smaller number of RAIDZ2 vdevs.

Thanks. I considered RAIDZ2, but the loss of space could be a problem, and mixing and matching different-sized disks would be a bit problematic.

Are the drives on the Windows machine in RAID arrays, or are they single disks?

A pretty similar layout: 5 different storage pools with parity on Windows 8.1. Ugly.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
A pretty similar layout: 5 different storage pools with parity on Windows 8.1. Ugly.
You have obviously given this some thought. Are the three drives you've been playing with
the only drives you have available to start the rebuild with?
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
I considered RAIDZ2, but the loss of space could be a problem, and mixing and matching different-sized disks would be a bit problematic.

I wouldn't use space loss as the decision factor. Consider the safety of your data #1.

the real 15 are full of data on my windows server.

I know money is always a problem - it is here too - but consider getting new HDDs for your FreeNAS server and moving your data in phases.

The reason I asked the sizes of your disks was to suggest multiple RAIDZ2 vdevs, but if (1) your disks are full of data and (2) the final available space won't be enough for your needs - as you've stated above - I'd plan for a new build and then migrate. Better safe than sorry.
 

floc

Cadet
Joined
Jul 15, 2017
Messages
7
You have obviously given this some thought. Are the three drives you've been playing with
the only drives you have available to start the rebuild with?

At the moment, yes, and that's why I don't have many choices.

I wouldn't use space loss as the decision factor. Consider the safety of your data #1.

I know money is always a problem - it is here too - but consider getting new HDDs for your FreeNAS server and moving your data in phases.

The reason I asked the sizes of your disks was to suggest multiple RAIDZ2 vdevs, but if (1) your disks are full of data and (2) the final available space won't be enough for your needs - as you've stated above - I'd plan for a new build and then migrate. Better safe than sorry.

Indeed, your objections are solid, and I had already considered them. I'm still planning what to do, trying to reconcile budget with safety.
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
ZFS RAID size and reliability calculator

This is a link to a very nice calculator that may help you come to some decisions regarding your new
configuration plans. Let us know if we can be of further help :)
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
In conclusion, if the 1st and 2nd assumptions are true as I suspect, I'm going to create a different volume for each RAIDZ1. If a stripe fails, I'm not going to lose all my data. Correct?
I missed this when reading through the first time. If you create 5 separate pools, with each pool containing
a single vdev of 3 drives in a RAIDZ1 configuration, then you are correct! The failed pool would be your
only data loss, and the other pools would remain unaffected.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I considered RAIDZ2, but the loss of space could be a problem, and mixing and matching different-sized disks would be a bit problematic.
With five RAIDZ1 vdevs, you're losing five disks to redundancy. With two RAIDZ2 vdevs (which is what I'd suggest), you're losing four. ZFS will happily handle mismatched disk sizes in a vdev, but of course the vdev's capacity is going to be based on the smallest disk in the vdev.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
So, I would suggest letting us know how many of each size disk you have and perhaps we can suggest an optimal layout.

Because it's unlikely that 5 x 3-way RAIDZ1 is optimal.
 

floc

Cadet
Joined
Jul 15, 2017
Messages
7
StorageSpace1:
3 x 2.0TB 75% full

StorageSpace2:
3 x 2.0TB 70% full

StorageSpace3:
3 x 2.0TB 77% full

StorageSpace4:
2 x 2.0TB + 1 x 1.5TB 80% full

StorageSpace5:
3 x 1.5TB 75% full

Total:
11 x 2.0 TB
4 x 1.5 TB -> they will be replaced with 1 x 2.0 TB + 3 x 3.0 TB (my current play set)
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
I didn't see it mentioned, how do you plan to migrate the data from the existing disks to a new system using the same disks?

With FreeNAS, you will want to keep your pools <80% full to avoid performance issues with ZFS.
 

floc

Cadet
Joined
Jul 15, 2017
Messages
7
This is a good question. I think I will just plug in a LAN cable and wait :| Do you guys have a better suggestion?

I'm aware of the 80% issue with ZFS, thanks for mentioning it. With a bit of cleanup and some new disks I think I'll be safe.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
StorageSpace1:
3 x 2.0TB 75% full

StorageSpace2:
3 x 2.0TB 70% full

StorageSpace3:
3 x 2.0TB 77% full

StorageSpace4:
2 x 2.0TB + 1 x 1.5TB 80% full

StorageSpace5:
3 x 1.5TB 75% full

Total:
11 x 2.0 TB
4 x 1.5 TB -> they will be replaced with 1 x 2.0 TB + 3 x 3.0 TB (my current play set)

I would consider setting up a 9-way RAIDZ2 with your first nine 2TB drives, and then a 6-way RAIDZ2 with the mixed drives from your last two storage spaces.

That will get you 14TB + 6TB, for 20TB total.

Vs 4+4+4+3+3 = 18TB.

And you have *significantly* better redundancy.

It's a start for thinking about this, at least.
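As a rough sanity check on those numbers, usable space can be estimated with the rule of thumb that a RAIDZ vdev yields about (number of disks - parity) x the smallest disk in the vdev (a sketch only; real-world ZFS overhead and TB/TiB differences are ignored):

```python
# Rough usable-capacity comparison; sizes in TB, ZFS overhead ignored.
def raidz_capacity(disks_tb, parity):
    """A RAIDZ vdev's usable space is roughly (n - parity) * smallest disk."""
    return (len(disks_tb) - parity) * min(disks_tb)

# Suggested layout: 9-way RAIDZ2 of 2TB drives + 6-way RAIDZ2 of mixed drives.
proposed = (raidz_capacity([2.0] * 9, 2)
            + raidz_capacity([2.0] * 2 + [1.5] * 4, 2))

# Original plan: five 3-way RAIDZ1 vdevs.
original = (3 * raidz_capacity([2.0] * 3, 1)      # three all-2TB vdevs
            + raidz_capacity([2.0, 2.0, 1.5], 1)  # mixed vdev, limited by 1.5TB
            + raidz_capacity([1.5] * 3, 1))       # all-1.5TB vdev

print(proposed, original)  # 20.0 18.0
```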
 

floc

Cadet
Joined
Jul 15, 2017
Messages
7
Good point, but I'd prefer more flexibility to expand my total capacity at any time, and with 5 RAIDZ1 vdevs of 3 drives that's relatively simple.
And that's before even considering the migration: there's no way I can get 9 empty disks together to assemble the 9-way RAIDZ2, unfortunately.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Well, consider some 6-way z2s then

One 6 x 2TB RAIDZ2.

Purchase another 2TB drive, and make a second 6 x 2TB RAIDZ2.

That leaves you with 4 x 1.5TB.

A 6-way RAIDZ2 is a very good size for combining into larger pools.

And I can't stress enough how much more redundant a 6-way RAIDZ2 is compared to two 3-way RAIDZ1s, even though it has the same amount of parity.

If you still can't work out how to get the data on, it is *possible* to create a 6-way RAIDZ2 in a degraded state with only 4 drives... then you can copy 4 drives' worth of data to it and then add the remaining two drives. It's better if you can use 5 drives, though ;)

(this is an advanced technique though)

Similarly, if you are set on buying 3TB drives, it is also possible to partition those drives into 1.5TB partitions and mix them with your four 1.5TB drives to make a 6-way RAIDZ2. Of course, there are performance implications, and if the 3TB drive fails you immediately lose full redundancy, but if your goal is to go from 1.5TB drives to 3TB drives, then it's a good interim solution, IMO.

I would personally use the two 3TB drives in a mirror for the moment, and aim towards a pool consisting of three 6-way RAIDZ2 vdevs. But perhaps, if your storage is growing as fast as it seems to have in the past, it would make sense to start thinking about 4TB drives rather than 3TB drives; after all, once you take per-bay costs into account, the 4TB drives are cheaper.

An interesting puzzle... how to get your data into a 6-way RaidZ2 one chunk at a time.

...Something to think about... it is possible to make a 6-way sparse RAIDZ2 out of just two 2TB drives. The idea is that you partition the drives and make some sparse files as backing store. Create the RAIDZ2, then offline the sparse files. Copy data to the array... then re-establish redundancy and begin replacing partitions with actual drives. Once all the partitions have been replaced, the array grows to its final size.
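That trick could be sketched roughly like this (pool name, device names, and file paths are all invented for illustration; this is an advanced maneuver, so rehearse it on scratch hardware before trusting real data to it):

```shell
# Create sparse backing files the size of a real member disk; they take up
# almost no space until written to:
truncate -s 2T /backing/f1 /backing/f2 /backing/f3 /backing/f4

# Build a 6-way RAIDZ2 from two real disks plus the four files:
zpool create tank raidz2 da0 da1 /backing/f1 /backing/f2 /backing/f3 /backing/f4

# Offline two of the files -- RAIDZ2 tolerates exactly two missing members,
# so the pool runs degraded but intact, and nothing is written to them:
zpool offline tank /backing/f1
zpool offline tank /backing/f2

# Copy data in, then replace each file with a real disk as source disks are
# freed up; every replacement triggers a resilver:
zpool replace tank /backing/f1 da2
```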
 