Removing a vdev from freenas-boot

ericderace

Cadet
Joined: Feb 27, 2020
Messages: 2
While tinkering around with ZFS, FreeNAS and a homemade NAS, I have driven myself into an issue. (Please note this post is for learning/tinkering purposes only.)

My freenas-boot pool is striped across 2 vdevs:
Code:
root@freenas[~]# zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: resilvered 8.88M in 0 days 00:05:08 with 0 errors on Thu Feb 27 20:24:14 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da1p2     ONLINE       0     0     0
      da0p2     ONLINE       0     0     0

errors: No known data errors


What I want to do is remove(?) da1p2 from that pool. However,
Code:
root@freenas[~]# zpool remove freenas-boot da1p2
cannot remove da1p2: root pool can not have removed devices, because GRUB does not understand them


I've tried the same thing on an Ubuntu 19.10 setup, where I created a pool from 2 vdevs and was able to execute that command successfully. What am I missing here? Is there a way to force this? How does zpool know this is a root pool? Can I set a property to override this (temporarily)? Is zpool remove not the right command to use? Is what I am trying to do even possible? Is this a feature of ZoL that is not present in FreeNAS/FreeBSD?
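For reference, this is roughly what I ran on the Ubuntu box (a sketch from memory; sdb and sdc are placeholder device names):
Code:
# create a pool striped across two disks, then remove one top-level vdev
zpool create testpool /dev/sdb /dev/sdc
zpool remove testpool /dev/sdc
zpool status testpool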

Thanks for your input.
 

sretalla

Powered by Neutrality
Moderator
Joined: Jan 1, 2016
Messages: 9,703
Just back up your config, rebuild on one of the disks and restore the config... it will be cleaner and probably is the only real option.
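From the shell, something like this would grab the config database (a sketch only; the supported route is System -> General -> Save Config in the GUI, and /mnt/tank is just a placeholder for wherever you keep backups):
Code:
# copy the FreeNAS configuration database off the boot pool
cp /data/freenas-v1.db /mnt/tank/freenas-v1-backup.db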

If you have a backup of your config, you might try the command with detach instead of remove.

It's entirely unclear how you could have arrived at a striped boot pool, since there is no option for that in the installer or in the GUI. Perhaps it's actually a mirror, hence the detach suggestion.
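If it does turn out to be a mirror, the detach would look something like this (just a sketch; on a true stripe it will refuse):
Code:
zpool detach freenas-boot da1p2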
 

Patrick M. Hausen

Hall of Famer
Joined: Nov 25, 2013
Messages: 7,776
You can remove a drive from a mirror vdev, but not a vdev from a pool, sorry. Data is spread across the two drives on a per-block basis, so removal of one would lead to loss of everything. Backup and reinstall it is.
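To illustrate the mirror case (a sketch with made-up pool and device names): attaching a second disk to an existing one turns it into a mirror vdev, and either leg can later be detached without losing data:
Code:
# turn a single-disk vdev into a mirror, then drop one leg again
zpool attach somepool da0p2 da1p2
zpool detach somepool da1p2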
 

ericderace

Cadet
Joined: Feb 27, 2020
Messages: 2
You can remove a drive from a mirror vdev, but not a vdev from a pool, sorry. Data is spread across the two drives on a per-block basis, so removal of one would lead to loss of everything. Backup and reinstall it is.

I think it used to be impossible to remove a vdev, but recent versions allow it. I've tried it here:

Code:
root@freenas[~]# zpool create flashpool da1 da4
root@freenas[~]# zpool status flashpool
  pool: flashpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    flashpool   ONLINE       0     0     0
      da1       ONLINE       0     0     0
      da4       ONLINE       0     0     0

errors: No known data errors
root@freenas[~]# zpool remove flashpool da4
root@freenas[~]# zpool status flashpool
  pool: flashpool
 state: ONLINE
  scan: none requested
remove: Removal of vdev 1 copied 196K in 0h0m, completed on Fri Feb 28 12:15:05 2020
    192 memory used for removed device mappings
config:

    NAME          STATE     READ WRITE CKSUM
    flashpool     ONLINE       0     0     0
      da1         ONLINE       0     0     0

errors: No known data errors
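
(Side note: as far as I understand, this works thanks to the device_removal feature flag; its state can be checked with something like the following, though I may be off on the details.)
Code:
zpool get feature@device_removal flashpool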


While digging my hole further, I've reset the bootfs property to none (zpool set bootfs= freenas-boot). The error message I then got while trying to remove the vdev was different.

Code:
root@freenas[~]# zpool remove freenas-boot da1p2
cannot remove da1p2: invalid config; all top-level vdevs must have the same sector size and not be raidz.


I believe this is because, somehow, I've ended up with one vdev with ashift=9, and the other with ashift=12.

output of zdb:
Code:
freenas-boot:
    version: 5000
    name: 'freenas-boot'
    state: 0
    txg: 137864
    pool_guid: 18034547575130841808
    hostid: 2009446742
    hostname: 'freenas.local'
    com.delphix:has_per_vdev_zaps
    vdev_children: 2
    vdev_tree:
        type: 'root'
        id: 0
        guid: 18034547575130841808
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 12743127922264418135
            path: '/dev/da2p2'
            whole_disk: 1
            metaslab_array: 38
            metaslab_shift: 29
            ashift: 9
            asize: 15832973312
            is_log: 0
            DTL: 236
            create_txg: 4
            com.delphix:vdev_zap_leaf: 235
            com.delphix:vdev_zap_top: 35
        children[1]:
            type: 'disk'
            id: 1
            guid: 11090465952580780516
            path: '/dev/da0p2'
            whole_disk: 1
            metaslab_array: 127
            metaslab_shift: 29
            ashift: 12
            asize: 15832973312
            is_log: 0
            DTL: 141
            create_txg: 60583
            com.delphix:vdev_zap_leaf: 126
            com.delphix:vdev_zap_top: 124
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
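
One more note in case anyone repeats this: the bootfs property presumably needs to be pointed back at the active boot environment afterwards. A sketch, with the dataset name depending on the install (zfs list -r freenas-boot/ROOT shows the candidates):
Code:
zpool set bootfs=freenas-boot/ROOT/default freenas-boot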


I'll be restoring from backup. Thanks for the input.
 