SOLVED Remove a vdev from a zpool

Status
Not open for further replies.

freenas-deluxe

Dabbler
Joined
Nov 23, 2013
Messages
17
Hi.

Scenario:
I run two 2TB disks as a mirror (mirror-0); they form my pool "tank" (which is consequently 2TB in size).
Now both disks have started showing errors, so I bought two 3TB drives to replace them.
My idea was:
  1. make a mirror-1 from both 3TB drives
  2. make a mirror-2 from mirror-0 and mirror-1
  3. remove mirror-0 from mirror-2
  4. throw away the old disks.

What I actually did is:
  • extend my pool "tank" to 5TB by adding mirror-1 to the pool.


Now I can find no way of removing the newly formed mirror-1 from the pool.
How can I remove the empty new drives from the pool without destroying it?

zpool history tank shows:

2015-07-20.18:46:56 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 314245607353256281
2015-07-20.18:46:56 zpool set cachefile=/data/zfs/zpool.cache tank
2015-07-20.18:47:48 zfs set mountpoint=legacy tank/.system
2015-07-21.19:27:11 zpool add -f tank mirror /dev/gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb /dev/gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb


# zpool status -v tank
pool: tank
state: ONLINE
scan: scrub repaired 0 in 5h57m with 0 errors on Sun Jul 19 08:57:31 2015
config:

NAME                                            STATE     READ WRITE CKSUM
tank                                            ONLINE       0     0     0
  mirror-0                                      ONLINE       0     0     0
    gptid/e41b2476-22bd-11e2-ac2d-3cd92b06cdcb  ONLINE       0     0     0
    gptid/e4c521be-22bd-11e2-ac2d-3cd92b06cdcb  ONLINE       0     0     0
  mirror-1                                      ONLINE       0     0     0
    gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb  ONLINE       0     0     0
    gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb  ONLINE       0     0     0

errors: No known data errors
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Unfortunately, you can't remove the second mirror now that it's been added to the existing pool.

You need to back up your data, destroy the pool, and recreate it correctly.


Sent from my phone
 

freenas-deluxe

Dabbler
Joined
Nov 23, 2013
Messages
17
Wow. OK.
How do I back up about 20 ZFS filesystems with multiple jails? Now that all four drives are installed, I cannot install a fifth.
And just removing the two new drives breaks the pool (I tried that already).
Since there is no data on the new drives, there must be a way to just tell ZFS to remove them, seeing as I attached them only minutes ago?

Would it be possible to:

  1. Remove one 3TB drive from mirror-1
  2. mirror mirror-0 to the now free 3TB drive.
  3. remove the old drives (mirror-0)
  4. mirror the new 3TB drive to the other 3TB drive
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Since there is no data on the new drives, there must be a way to just tell ZFS to remove them, seeing as I attached them only minutes ago?

Nope. Once a vdev is added, the pool is immediately extended onto it. The only way to undo it requires some hacking. :(

That's why in my noobie guide I specifically mention this exact scenario. Once a vdev is added, even if it was just 2 seconds ago, it is too late to undo the changes.
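
The one-word difference is worth spelling out. A sketch in hindsight, using the gptids from the first post (the attach commands below were never actually run):

# Reversible: attach a new disk as an extra member of the existing
# mirror-0; a member can later be removed again with zpool detach
zpool attach tank gptid/e41b2476-22bd-11e2-ac2d-3cd92b06cdcb gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb

# Irreversible: add a second top-level vdev (this is what was run)
zpool add -f tank mirror gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb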
 

freenas-deluxe

Dabbler
Joined
Nov 23, 2013
Messages
17
After hitting my head against the wall several times to accept this, I still have to find a way to go forward.

The whole NAS is running on a HP N54L with four drive bays (now all occupied).
I have no other drives and even if I had, I could not connect them.
There is in total about 1.2 TB of data on the pool.

Could I restrict the size of the pool from 5TB to 1.5TB? Then, with the idea explained above,
  1. Remove one 3TB drive from mirror-1
  2. mirror mirror-0 to the now free 3TB drive.
  3. remove the old drives (mirror-0)
  4. mirror the new 3TB drive to the other 3TB drive

I could back up the pool.
Is that possible? Any other solutions?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Nope.

You cannot downsize the zpool, the vdev, or any member devices after they have been created.

You *have* to store the data somewhere else: another disk, optical media, some kind of backup.

You then *have* to destroy the zpool, recreate it how you want it, then copy the data back.

There's no way to try to reuse the disks, shrink them, or anything like that.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
While you have the 4 bays populated, you could add 2 more HDDs if you install one of the hacked BIOSes for the N54L.

The official HP BIOS doesn't support AHCI for the ODD and eSATA connections. The hacked BIOSes add AHCI support for them. Some users use an X-wing gizmo to mount/stack the 2 additional drives in the ODD bay.


Sent from my phone
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
There's another option that doesn't involve using any more disks, but it's a bit hack-y, requires some CLI-fu, and threatens your redundancy. OTOH, if done correctly, all your metadata will be preserved. It looks like this:
  • Offline one of your 3 TB disks
  • Wipe that disk using something like DBAN
  • Create a new pool on that disk alone
  • Replicate your data to the new pool
  • Export the old pool, marking the disks as new
  • Add the second 3 TB disk as a mirror to the first
  • Optionally rename the pool to the old pool's name
There are a few dangers with this method:
  • If your second 3 TB disk fails after you remove the first one, your data's toast.
  • If your first 3 TB disk fails while you're in the process of adding the second 3 TB disk to the new pool (i.e., resilvering), your data's toast.
  • If you don't use the proper CLI-fu to add the second 3 TB disk as a mirror of the first, you could have problems in the short or long term.
 

freenas-deluxe

Dabbler
Joined
Nov 23, 2013
Messages
17
There's another option

Funny how two minds think alike; that is exactly what I did, and I'd like to write it down for future generations (or for myself, if I do it again).
;)

# zpool status -v tank
pool: tank
state: ONLINE

NAME                                            STATE     READ WRITE CKSUM
tank                                            ONLINE       0     0     0
  mirror-0                                      ONLINE       0     0     0
    gptid/e41b2476-22bd-11e2-ac2d-3cd92b06cdcb  ONLINE       0     0     0
    gptid/e4c521be-22bd-11e2-ac2d-3cd92b06cdcb  ONLINE       0     0     0
  mirror-1                                      ONLINE       0     0     0
    gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb  ONLINE       0     0     0
    gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb  ONLINE       0     0     0



First, you have to detach one of the new drives from the new mirror (mirror-1).


zpool detach tank /dev/gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb

Then, create a new pool on the just-detached drive

zpool create sonne /dev/gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb

Now replicate all your data from the old pool to the new one. (The send/receive can take a while.)

zfs snapshot -r tank@nas_backup
zfs send -Rv tank@nas_backup | zfs receive -Fv sonne
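
Before destroying anything, it doesn't hurt to check that the replication really landed; these are just the standard listing commands, not part of the original steps:

zfs list -r -t snapshot tank
zfs list -r -t snapshot sonne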

To make FreeNAS aware of your newly created pool, export it first

zpool export sonne

Then, while in the WebGUI, click "Storage -> Import Volume" to import it.
Since I found no other way, I manually changed all paths (user home dirs, shares, etc.) to their new values.
Reboot.


Then comes the scary part

zpool destroy tank
zpool attach sonne gptid/b51e45b8-2fcd-11e5-af9d-3cd92b06cdcb gptid/b4580a46-2fcd-11e5-af9d-3cd92b06cdcb

Then the resilvering kicks in. Done.
Reminder: during the whole process there is no redundancy. If any drive fails, your data is gone.
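
You can watch the resilver progress with the usual status command:

zpool status sonne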

By the way, if anyone knows how to tell FreeNAS (or ZFS) to not use gptid in "zpool status", instead make it look like https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/zfs-zpool.html, please PM me.


 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
By the way, if anyone knows how to tell FreeNAS (or ZFS) to not use gptid in "zpool status", instead make it look like https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/zfs-zpool.html, please PM me.

You can't (unless you force ZFS to use device names, but that's a bad idea), but you can always write your own script to patch the output of zpool status ;)

Note that device names can change from reboot to reboot; only the gptid and the serial number are static, so be careful if you need to identify a drive.
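
A minimal sketch of such a script, assuming glabel(8)'s usual three-column output (Name, Status, Components); treat it as a starting point, not a polished tool:

#!/bin/sh
# Print zpool status with the device node shown next to each
# gptid label, using the gptid -> device mapping from glabel(8).
zpool status -v "$@" | awk '
BEGIN {
    # glabel status rows look like: gptid/<uuid>  N/A  ada0p2
    while (("glabel status" | getline line) > 0) {
        split(line, f)
        if (f[1] ~ /^gptid\//) map[f[1]] = f[3]
    }
}
{
    if ($1 in map) sub($1, $1 " (" map[$1] ")")
    print
}'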
 