SOLVED Change out drive currently in a mirror - Correct steps?

toadman

Guru
Joined
Jun 4, 2013
Messages
619
All,

I've got a backup server that needs its pool expanded. I'm going to use a 3TB drive from an existing mirror on my primary server, since the backup's pool is a RAIDZ2 consisting entirely of 3TB drives. I'll have to destroy that pool and rebuild it with the extra 3TB drive. That's the easy part.

Currently the mirror on the primary server is using one 3TB and one 4TB. So I'm buying an extra 4TB and will expand the pool on the primary server by growing the existing mirror. I can accomplish this by removing the 3TB and adding the new 4TB to the existing 4TB.

The issue is I'd prefer to add the new 4TB first, then remove the 3TB. That way I don't lose any redundancy on the mirror in the process. What I don't know is whether the pool will expand if I do it that way. i.e. when I pull the 3TB, leaving the two 4TB drives, will the pool expand? I think the answer is "yes." Anyone have reason to believe that is not the case? [I know it will work the other way, pull the 3TB first. Pool will expand at that point (now the mirror is a single drive vdev of 4TB). Attach new 4TB to that 4TB and create a mirror.]
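From what I've read, autoexpand only takes effect once every device in the vdev is at least the larger size, so the pool should grow right after the 3TB is detached. I'll double-check before and after with something like this (pool name tank, matching the commands below):

Code:
# zpool set autoexpand=on tank
# zpool get autoexpand tank	 [should report "on"]
# zpool list tank	 [SIZE and EXPANDSZ columns show whether/when the pool grows]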

This has to be done via command line (I think). After burning in the new 4TB drive the approach will be:

  • set autoexpand=on
  • ada5 is the new 4TB (same model as existing 4TB)
  • da7 is the existing 3TB, gptid is 4ba87812-ee5f-11e7-bde2-00505686d746
  • da8 is the existing 4TB, gptid is 4a406251-ee5f-11e7-bde2-00505686d746
  • Don't need swap on new drive as there is no swap on existing drives (I have swap elsewhere in the system)

Code:
# gpart create -s gpt /dev/ada5	 [new GPT partition table on the new 4TB]
# gpart add -i 1 -b 128 -t freebsd-zfs /dev/ada5	 [single freebsd-zfs partition, starting at sector 128]
# glabel status	 [to find the gptid of the newly created partition. It is the gptid associated with ada5p1]
# zpool attach tank /dev/gptid/4a406251-ee5f-11e7-bde2-00505686d746 /dev/gptid/[gptid_of_the_new_partition]


Wait for resilver. Optionally run a scrub.
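To watch the resilver and then kick off the optional scrub, it should just be the usual:

Code:
# zpool status tank	 [shows resilver progress / completion]
# zpool scrub tank
# zpool status tank	 [shows scrub progress]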

Code:
# zpool detach tank /dev/gptid/4ba87812-ee5f-11e7-bde2-00505686d746	 [the 3TB drive]


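And a sanity check afterwards; if the pool somehow doesn't grow on its own, my understanding is that zpool online -e on the two 4TB devices forces the expansion:

Code:
# zpool list tank	 [SIZE should now reflect the two 4TB drives]
# zpool online -e tank /dev/gptid/4a406251-ee5f-11e7-bde2-00505686d746 /dev/gptid/[gptid_of_the_new_partition]	 [only if it did not expand automatically]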
Does that seem correct?
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
The issue is I'd prefer to add the new 4TB first, then remove the 3TB. That way I don't lose any redundancy on the mirror in the process.

Sorry for answering with a question before going through your proposed procedure in detail.

The standard procedure for Replacing Drives to Grow a ZFS Pool involves using a free SATA/SAS port and adding the new disk first, then letting the vdev resilver, and removing the old disk as the last step.

Why do you think your suggested procedure involving several manual steps would be even better/safer?
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I obviously don't read very well. Total fail. :)

I'm used to replacing a failed drive; I just didn't look closely enough at how replacing a working drive works from the GUI. I suspect it does what I'm describing above behind the scenes. I'll go that route.

Thanks for pointing me to that part of the docs. Shame on me. Ugh.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I suspect it does what I'm saying above behind the scenes.
Close. The GUI does zpool replace rather than zpool attach/detach.
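i.e. roughly this under the hood, using the pool name and gptids from the posts above:

Code:
# zpool replace tank /dev/gptid/4ba87812-ee5f-11e7-bde2-00505686d746 /dev/gptid/[gptid_of_the_new_partition]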
 