Expanding RaidZ1 with single disk

Status
Not open for further replies.

speegs87

Cadet
Joined
Aug 8, 2018
Messages
5
Hello All,

I am just looking for some quick clarification on the FreeNAS guide that covers expanding a 3-disk RAIDZ1 pool by adding another VDEV (in this case, a single-disk VDEV).

I am fairly certain the system I am running was initially a 3-drive system that is now 4 drives, but zpool status doesn't appear to be showing me 2 VDEVs. Can someone confirm this? I don't know exactly what a 2-VDEV pool would look like. I just rebuilt the pool after a drive failure, and I want to make sure I don't lose a drive that I can't recover from.

Here is the screenshot of my zpool status

Jon
 

Attachments

  • freenaszpool.PNG (zpool status screenshot, 9.9 KB)

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
The printout is a tree hierarchy. The line “raidz1-0” is your vdev. If you had more than one, there would be another “raidz1-1”, “raidz1-2”, and so on.
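
As a rough sketch of what that would look like (the pool name "tank" and the da0-da5 device names are just placeholders, not taken from your screenshot), a pool built from two RAIDZ1 vdevs shows up in zpool status roughly like this:

    # zpool status tank
      pool: tank
     state: ONLINE
    config:

            NAME        STATE     READ WRITE CKSUM
            tank        ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                da0     ONLINE       0     0     0
                da1     ONLINE       0     0     0
                da2     ONLINE       0     0     0
              raidz1-1  ONLINE       0     0     0
                da3     ONLINE       0     0     0
                da4     ONLINE       0     0     0
                da5     ONLINE       0     0     0

    errors: No known data errors

The indentation tells you which disks belong to which vdev.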

You are correct that a pool has to be rebuilt to add another drive to an existing raidz vdev.

Be advised, raidz1 is considered obsolete by the developers of OpenZFS due to the unacceptably high risk of a second bit error during a resilver. So if you are using raidz1, you need to be aware that the odds of having to restore the pool from backup are high.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
With a 4 disk pool, you can't have more than 1 VDEV of the type RAIDZ1 (since RAIDZ1 requires at minimum 3 disks... the smallest possible 2-VDEV RAIDZ1 pool is 6 disks).

Generally, the forum recommends (as @garm already mentioned) that if you care about your data, you won't be using RAIDZ1, but rather RAIDZ2 or 3.
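
For anyone wanting to see that math in command form, here is a minimal sketch of a 6-disk, 2-VDEV RAIDZ1 pool (the pool name "tank" and the da0-da5 devices are placeholders, not a recommendation):

    # Two 3-disk RAIDZ1 vdevs created up front:
    zpool create tank raidz1 da0 da1 da2 raidz1 da3 da4 da5

    # Or start with one 3-disk RAIDZ1 vdev and stripe a second one in later:
    zpool create tank raidz1 da0 da1 da2
    zpool add tank raidz1 da3 da4 da5

    # Either way, each vdev still only tolerates one failed disk;
    # losing two disks from the same vdev loses the whole pool.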
 

speegs87

Cadet
Joined
Aug 8, 2018
Messages
5
With a 4 disk pool, you can't have more than 1 VDEV of the type RAIDZ1 (since RAIDZ1 requires at minimum 3 disks... the smallest possible 2-VDEV RAIDZ1 pool is 6 disks).

Generally, the forum recommends (as @garm already mentioned) that if you care about your data, you won't be using RAIDZ1, but rather RAIDZ2 or 3.

I thought it was initially created as a 3 disk RAIDZ1 and then a single disk was added to it as a stripe (RAID0), which obviously is not recommended, as if that striped disk fails the entire pool fails.

I would love to migrate to a RAIDZ2 or Z3, but migrating off and blowing up the RAID is a PITA.

I don't know why there is so much negativity around RAIDZ1, plenty of enterprise storage solutions still use RAID1, and as much as there is a risk of a bad block on another healthy device in the RG, ZFS can rebuild as much of the file system as it can, right? Some of the % I've seen thrown around for chances of RZ1 failure are pretty unrealistic from my experience.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I don't know why there is so much negativity around RAIDZ1, plenty of enterprise storage solutions still use RAID1, ...

In traditional RAID terminology, RAID1 is similar to ZFS's mirrors (a pool of striped mirrors being the equivalent of RAID10). It's not limited to a pair of disks; one can use additional disks in the mirror for more fault tolerance. This is often used for block storage.

RAID5 is similar to RAIDZ1.
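
A minimal sketch of how that terminology maps onto zpool create (pool and disk names are placeholders):

    # RAID1-like: one mirror vdev, not limited to two disks
    zpool create tank mirror da0 da1 da2        # 3-way mirror

    # RAID10-like: striped mirrors (two 2-way mirror vdevs)
    zpool create tank mirror da0 da1 mirror da2 da3

    # RAID5-like: one RAIDZ1 vdev
    zpool create tank raidz1 da0 da1 da2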
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I thought it was initially created as a 3 disk RAIDZ1 and then a single disk was added to it as a stripe (RAID0)
If it was initially created as a three-disk RAIDZ1, then it was subsequently destroyed and recreated as a four-disk RAIDZ1. You don't have the striped disk you think you have.
You are correct that a pool has to be rebuilt to add another drive to an existing raidz vdev.
...right now, anyway. Sounds like the work to add this is progressing pretty well, though.
 

speegs87

Cadet
Joined
Aug 8, 2018
Messages
5
In traditional RAID terminology, RAID1 is similar to ZFS's mirrors (a pool of striped mirrors being the equivalent of RAID10). It's not limited to a pair of disks; one can use additional disks in the mirror for more fault tolerance. This is often used for block storage.

RAID5 is similar to RAIDZ1.

Sorry, that was a typo. I meant plenty are using something RAIDZ1-like (as you mentioned, RAID5).

And as much as adding a single drive is coming, I might as well migrate to RAIDZ2 or Z3 with enough capacity to get me through for a while.

I read in another post that people are recommending expanding pools with mirror VDEVs, so adding 2 drives at a time to end up with a giant RAID10-like pool? Is that true? I like being able to add 2 drives at a time to scale, but correct me if I'm wrong: if you lose the wrong 2 drives (both drives in a single mirror), then you lose the entire pool?
 

garm

Wizard
Joined
Aug 19, 2017
Messages
1,556
I thought it was initially created as a 3 disk RAIDZ1 and then a single disk was added to it as a stripe (RAID0), which obviously is not recommended, as if that striped disk fails the entire pool fails.

I would love to migrate to a RAIDZ2 or Z3, but migrating off and blowing up the RAID is a PITA.

I don't know why there is so much negativity around RAIDZ1, plenty of enterprise storage solutions still use RAID1, and as much as there is a risk of a bad block on another healthy device in the RG, ZFS can rebuild as much of the file system as it can, right? Some of the % I've seen thrown around for chances of RZ1 failure are pretty unrealistic from my experience.
This is from OpenZFS.org
Two-disk failures are common enough[5] that raidz1 vdevs should not be used for data storage in production.

But I'm sure your experience trumps that of the OpenZFS developers...
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
ZFS can rebuild as much of the file system as it can right?
If "as much as it can" is sufficient for your taste, then go ahead with RAIDZ1 - for some of us, our threshold for "acceptable amount of data to lose" is "zero bytes."

With regards to the "enterprise systems use RAID5" - yes, they do. Enterprise systems also tend to have aggressive patrol read/scrub schedules, more rigid requirements around hardware and firmware validation, additional measures of error correction (e.g. 520-byte-sector drives with extra ECC) and end-to-end data protection (battery- or flash-backed cache). And even these "enterprise systems" are recommending against RAID5 with larger/slower drives - they'll generally suggest RAID6 for NL-SAS, with smaller 15K drives or SSDs being given the OK for RAID5 use.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I read in another post that people are recommending expanding pools with mirror VDEVs, so adding 2 drives at a time to end up with a giant RAID10-like pool? Is that true? I like being able to add 2 drives at a time to scale, but correct me if I'm wrong: if you lose the wrong 2 drives (both drives in a single mirror), then you lose the entire pool?

With traditional mirrors (2 disks per vdev), that's correct. That's why I said that a ZFS mirror isn't limited to 2 disks. Depending on your risk tolerance, one might look at 3- or 4-way mirrors. Sure, the overhead is high, but the cost of a FreeNAS server with commodity hard drives is cheap compared to a traditional SAN vendor.
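
A hedged sketch of both routes (pool and disk names are placeholders): growing the pool two disks at a time, and widening an existing mirror for extra fault tolerance:

    # Stripe in another 2-way mirror vdev (the "RAID10-like" growth path):
    zpool add tank mirror da4 da5

    # Or attach a third disk to an existing mirror that contains da0,
    # turning it into a 3-way mirror that survives two failures in that vdev:
    zpool attach tank da0 da6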
 

speegs87

Cadet
Joined
Aug 8, 2018
Messages
5
This is from OpenZFS.org


But I'm sure your experience trumps that of the OpenZFS developers...

This is not my recommendation; it was a comment from someone else on the forum. It seems odd to recommend expanding pools with 2-disk mirror vdevs into a giant RAID10-like pool when you're almost no better off than with RAIDZ1.

Thanks for the help everyone!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes, that's correct.

Which just means you have to restore from your backup. Which you have, since you should always have a backup and RAID is not backup.
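
Since "RAID is not backup" keeps coming up, here is a minimal sketch of a ZFS-native backup using snapshots and replication (the pool, snapshot, and host names are placeholders, not anything from this thread):

    # Take a recursive snapshot of the pool...
    zfs snapshot -r tank@weekly-1

    # ...and replicate it to a pool on another machine over SSH:
    zfs send -R tank@weekly-1 | ssh backuphost zfs receive -uF backuppool/tank

    # Later runs can send only the changes since the previous snapshot:
    zfs send -R -i tank@weekly-1 tank@weekly-2 | ssh backuphost zfs receive -uF backuppool/tank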
 