Quick and Dirty: Creating a degraded raidz (e.g. 3 of 4 drives) to allow migration


livehifi
Cadet
Joined: May 27, 2011
Messages: 2
=== Background:

I wanted to create a 4-disk RAIDZ out of 2TB drives, but one of those drives held the backup of my old setup. The goal was a 4-drive RAIDZ yielding ~6TB of space. To get there I had to create a degraded array with a fake fourth member, copy the data over (while crossing my fingers that no drives failed in the process), and then bring the freed-up backup drive into the array in place of the fake disk.

There are guides on how to do this with ZFS, but not specifically for FreeNAS. Most of the guides I tried caused kernel panics, and tips for getting around that were few and far between.

Thankfully, grzybowski in the #freenas IRC was helpful. BIG THANKS to him. :D

=== Warning:

As always with anything FreeNAS-related... you may lose your data. Make backups. In this tutorial I didn't even have my backup drive (one of the 2TB disks) attached; only once the pool was made did I plug it in and cp everything over. Be careful.

=== Guide:

First, mount the root file system read-write:

# mount -uw /
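If you want to double-check that root really is read-write now, running mount should no longer list it as read-only; something like this narrows the output to the root line (the grep pattern is just my own habit, nothing FreeNAS-specific):

# mount | grep ' on / '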

Second, create a sparse file that will use no space, but represent the (large) disk we are faking, in this case 2TB. We overshoot it to be safe.

# dd if=/dev/zero of=/zfs1 bs=1 count=1 seek=2048G
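If you want to convince yourself the file really is sparse (huge apparent size, almost nothing actually allocated on the boot device), comparing ls and du should make it obvious:

# ls -lh /zfs1
# du -h /zfs1

ls should report roughly 2.0T while du should report close to zero.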

Third, use md (I still don't understand this part, but it's the ace in the hole for FreeNAS ZFS) to avoid a kernel panic.

# mdconfig -a -t vnode -S 4096 -f /zfs1
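If you missed the unit number it printed, you can list the attached md devices and their backing files (md3 is just what I got on my box, yours may differ):

# mdconfig -l -v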

Fourth, pay attention to the output of the previous command; in my case it was md3. We will now create the zpool with the command below. da0, da1 and da2 are my physical drives, so adjust as needed (on FreeBSD/FreeNAS the disks show up as daN or adaN rather than Linux-style sdX). omni is the name of my zpool; change it as needed. The -f is there because zpool may detect remnants of old pools on the disks; adjust as needed.

# zpool create -f omni raidz da0 da1 da2 md3
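Before going any further it's worth a quick sanity check that the pool actually came up with all four members (again, omni and the device names are just from my setup):

# zpool status omni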

Lastly, we want to make the fake disk "fail" so we'll take it offline.

# zpool offline omni md3
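zpool status should now report the pool as DEGRADED with md3 OFFLINE, which is exactly what we want here. A quick way to list only pools with problems is:

# zpool status -x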

Oh and we can delete that sparse file.

# rm /zfs1
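You can also detach the md device itself now that it has been offlined from the pool; I believe this works fine at this point, though if it complains the device is busy there is an -o force option (use with care):

# mdconfig -d -u 3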

That should do it!
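For completeness, the final step of the migration described in the background — once everything has been copied off the backup drive — is to swap that real disk in for the fake one. Something like this should do it, where da3 is just a placeholder for whatever your freed-up drive shows up as; if md3 no longer exists after a reboot you may need to refer to it by the GUID shown in zpool status:

# zpool replace omni md3 da3

ZFS then resilvers onto the new disk, and the pool should go back to ONLINE once that finishes.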

Keep in mind this zpool won't show up in the GUI unless you are on 8.2 BETA 4 or higher, since the GUI and CLI don't sync before that version, as far as I know.

Edit: To have the zpool show up in the GUI, export it first:

# zpool export omni

Then use Auto Import in the GUI.

If you have any questions post here... again this is quick and dirty =S.
 

ProtoSD
MVP
Joined: Jul 1, 2011
Messages: 3,348
livehifi said:

# dd if=/dev/zero of=/zfs1 bs=1 count=1 seek=2048G
[...]
# mdconfig -a -t vnode -S 4096 -f /zfs1

The reason for this is that ZFS needs a DEVICE to work with. The dd step creates a FILE, so mdconfig links that file to a virtual device.
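Put differently, after the mdconfig step there is an actual device node under /dev that zpool can consume, which you can see with:

# ls -l /dev/md3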

There was a thread here a couple of years back where I suggested this procedure to someone trying to do what you wanted to do. I didn't have the time to follow through with the details, and the user didn't have the background to figure it out.

Nice Job!
 
