removing suspect disk from striped pool

Status
Not open for further replies.

Dalton

Cadet
Joined
Sep 10, 2018
Messages
9
Hi all,
I have been building my FreeNAS server from old drives, and the other day I added two 3TB drives I had lying around to an existing volume.

All my volumes are RAID 0 (striped), since all the data is backed up and space is my main priority.

Anyway, I am now getting warnings about one of the disks, e.g. "/dev/ada1: 104 offline uncorrectable sectors". At this stage it is unlikely that any data has been written to the offending drive, and as the error states the disk is offline, is there a way I can remove it without having to replace it with another 3TB drive? I have plenty of 1TB and 2TB drives; could I replace it with one of those without doing any damage?

Finally, if I simply disconnect it, will I lose the whole volume? I have tried searching for answers, but no one seems to be running a volume as a simple stripe without redundancy.

Thanks in advance for any help or advice provided.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
The only option you have is to destroy the pool and rebuild it from backups. This is why redundancy is about more than keeping data safe; it's about keeping things working. It's also extremely important to thoroughly test drives before putting them into service, as you have discovered.
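For what it's worth, a quick way to exercise a drive from the FreeNAS shell is with smartmontools (included in FreeNAS); /dev/ada1 below is just the device from your error message, so substitute your own:

Code:
# Start a long SMART self-test (takes several hours on a 3TB disk).
smartctl -t long /dev/ada1

# When it finishes, review the report; look at the self-test log and at
# attributes such as 197 Current_Pending_Sector and 198 Offline_Uncorrectable.
smartctl -a /dev/ada1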
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Actually, in 11.2, you can remove a disk from a striped pool (as long as the pool is healthy), though it'd have to be done from the command line. So if you don't mind using beta software, this is possible. Not that we recommend striped pools.
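Roughly, from the shell, it would look like this ("tank" and the gptid are placeholders, not names from your system):

Code:
# Confirm the pool is healthy and note the exact name of the disk to remove.
zpool status tank

# Evacuate its data onto the remaining disks and remove it from the stripe.
zpool remove tank gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Removal copies data in the background; zpool status shows the progress.
zpool status tank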
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
danb35 said:
Actually, in 11.2, you can remove a disk from a striped pool (as long as the pool is healthy), though it'd have to be done from the command line. So if you don't mind using beta software, this is possible. Not that we recommend striped pools.
Only possible if the following are true:
  • The pool has been upgraded and has the device_removal feature enabled.
  • There is enough free space on the other drives to accommodate the data that needs to be moved off the disk to be removed.
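A rough way to check both from the shell ("tank" is a placeholder for your pool name):

Code:
# The device_removal feature must show "enabled" (or "active"):
zpool get feature@device_removal tank

# Free space on the remaining disks must exceed the data on the disk being
# removed; the -v flag shows allocation per top-level vdev.
zpool list -v tank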
The only other way out is to attach a new drive and replace it.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Yes, it would be a one-way move--once the disk had been removed, you wouldn't be able to mount the pool in 11.1 or earlier. But still a possibility.

Unfortunately, disk/vdev removal doesn't work if any part of the pool is parity RAID, making it significantly less useful than it might otherwise be. And if I understand its implementation correctly, it would be highly non-trivial to add that capability. But it's still possible in some cases, and OP's could be one.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
turns out that RAID-Z VDEV remove is hard.
... Yes, that's what "highly non-trivial" means. ;)

m0nkey_ said:
The only other way out is to attach a new drive and replace it.
Technically you want to follow the instructions in 8.1.11 - "Replacing Drives to Grow a ZFS Pool":

http://doc.freenas.org/11/storage.html?highlight=replace#replacing-drives-to-grow-a-zfs-pool

@Dalton - Because the drive isn't completely failed (yet!), this effectively turns it into a mirror; then, once the resilver is complete, you can remove the failing drive from the "mirror".

But you'll still be left with whatever corruption was already in place, as well as a striped pool with zero disk redundancy.
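For illustration, the command-line version of that replace procedure looks roughly like this (pool and disk names are placeholders; the docs above describe the GUI way):

Code:
# Attach the new (equal or larger) disk as a replacement for the failing one.
zpool replace tank ada1p2 ada3p2

# ZFS pairs the two disks in a temporary "replacing" mirror, resilvers onto
# the new disk, and detaches the failing one when the resilver completes.
zpool status tank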
 

Dalton

Cadet
Joined
Sep 10, 2018
Messages
9
HoneyBadger said:
... Yes, that's what "highly non-trivial" means. ;)


Technically you want to follow the instructions in 8.1.11 - "Replacing Drives to Grow a ZFS Pool":

http://doc.freenas.org/11/storage.html?highlight=replace#replacing-drives-to-grow-a-zfs-pool

@Dalton - Because the drive isn't completely failed (yet!), this effectively turns it into a mirror; then, once the resilver is complete, you can remove the failing drive from the "mirror".

But you'll still be left with whatever corruption was already in place, as well as a striped pool with zero disk redundancy.

Thanks for all your feedback. The above seems to be my best approach, but it will require a drive of similar or larger capacity, which I don't have available at the moment. As everything is backed up I'm not too concerned; the pool itself isn't showing any corruption, so I think I will leave things as they are and keep a daily eye on it until I have a larger drive available. Thanks for all your comments and advice, which will all be helpful when I do have a suitable drive.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
turns out that RAID-Z VDEV remove is hard.
Yes, what you might call "non-trivial." (-: I'll have to watch the talk you linked, but I've watched a couple of his other talks on vdev removal. I think I understand why it was done the way it was--it's both much simpler and much faster to move the used blocks over from the spacemap wholesale than to deal with the block pointers (which is also why checksums aren't verified). But its precluding use with RAIDZn makes it much less useful.
 