"zpool remove": Why does the following not work?

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
I'm looking to clarify what is likely a misunderstanding on my end.

To make a long story short, I have a pool that consists of many vdevs, all mirrors. I accidentally added a new mirror vdev to the pool and wish to remove that vdev from the pool.

I've read the zpool man page, and came across this:
Code:

     zpool remove [-np] pool device ...

         Removes the specified device from the pool.  This command currently
         only supports removing hot spares, cache, log devices and mirrored
         top-level vdevs (mirror of leaf devices); but not raidz.

         Removing a top-level vdev reduces the total amount of space in the
         storage pool.  The specified device will be evacuated by copying all
         allocated space from it to the other devices in the pool.  In this
         case, the zpool remove command initiates the removal and returns,
         while the evacuation continues in the background.  The removal
         progress can be monitored with zpool status. This feature must be
         enabled to be used, see zpool-features(5)

         A mirrored top-level device (log or data) can be removed by
         specifying the top-level mirror for the same.  Non-log devices or
         data devices that are part of a mirrored configuration can be removed
         using the "zpool detach" command.

         -n      Do not actually perform the removal ("no-op").  Instead,
                 print the estimated amount of memory that will be used by the
                 mapping table after the removal completes.  This is nonzero
                 only for top-level vdevs.

         -p      Used in conjunction with the -n flag, displays numbers as
                 parsable (exact) values.


This made me hopeful - I don't use raidz; I use mirrored pairs for my vdevs. So I tried the following and got this error:
Code:
[root@omega ~]# zpool remove nebula mirror-8
cannot remove mirror-8: invalid config; all top-level vdevs must have the same sector size and not be raidz.
[root@omega ~]#


I'm looking at this error and don't understand what it's trying to tell me. I can't find instructions on how to determine a vdev's sector size (wouldn't this just be the device's sector size?), and my pool is not raidz (it's a collection of mirrored-pair vdevs).

Looking at my physical devices, all the vdevs have drives (including the drives in the accidentally added vdev) with 512-byte sectors, as per "diskinfo -v".
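In case it helps anyone following along, here is roughly that check (a sketch; the device name is just an example, and stripesize is worth a glance too, since 512e drives report their physical sector size there, as far as I know):
Code:
diskinfo -v da33 | grep -E 'sectorsize|stripesize'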

I'm guessing I'm missing something in my understanding of what this means. Can someone clarify please?

Thanks!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
You are trying to remove a standard vdev, but what you are reading about is how to remove a log or cache device. It is not the same.
You are being told no because, if it let you do what you are trying to do, it would destroy the entire pool.
I think we already discussed this...
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
Hi Chris - indeed we did! I'm waiting on the delivery of some additional disks (which I'll use to replace the accidentally added vdev, failing its removal).

What's confusing me here is that the "zpool remove" man page appears to address my situation. You mention "zpool remove" is limited to log and cache devices, but the man page mentions "mirrored top-level vdevs".

This command currently only supports removing hot spares, cache, log devices and mirrored top-level vdevs (mirror of leaf devices); but not raidz.

I appreciate your input on this Chris, and figure I'm misunderstanding something. Is "mirror-8" (in my pool described below) not a "mirrored top-level vdev"?

Code:
[root@omega ~]# zpool status -v nebula   
  pool: nebula
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        nebula                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/2fdc125f-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/30816c63-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/31403c46-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/31f1f182-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/32a4cfa3-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3356fcab-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/341311e6-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/34c9952c-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-4                                      ONLINE       0     0     0
            gptid/3580b3ad-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/364188ba-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-5                                      ONLINE       0     0     0
            gptid/370908e5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/37cf00a5-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-6                                      ONLINE       0     0     0
            gptid/388e6fef-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3945fee7-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-7                                      ONLINE       0     0     0
            gptid/3a08fb45-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
            gptid/3ad7a643-15f4-11e9-b1b0-000c29f308bf  ONLINE       0     0     0
          mirror-8                                      ONLINE       0     0     0
            da33p1                                      ONLINE       0     0     0
            da34p1                                      ONLINE       0     0     0

errors: No known data errors
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What's confusing me here is that the "zpool remove" man page appears to address my situation. You mention "zpool remove" is limited to log and cache devices, but the man page mentions "mirrored top-level vdevs".
Because you can mirror slog and l2arc devices. Didn't I point you at this:
https://forums.freenas.org/index.ph...s-partitioned-for-two-pools.62787/post-483761
I added and removed log and cache devices from the command line for testing. It is easily done for those, but once a storage vdev goes into the pool, it becomes a member of the pool and cannot be removed.
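Roughly what I mean, with made-up pool and device names (the log mirror is removed by whatever name zpool status shows for it):
Code:
# add a mirrored SLOG and a cache device
zpool add tank log mirror ada4p1 ada5p1
zpool add tank cache ada6p1
# remove them again
zpool remove tank mirror-1    # the log mirror, by the name zpool status shows
zpool remove tank ada6p1      # cache devices are removed by device name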
Here is an example of my home NAS pool layout:
Code:
  pool: Emily
state: ONLINE
  scan: scrub repaired 0 in 0 days 05:23:05 with 0 errors on Tue Nov 13 05:23:07 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        Emily                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/af7c42c6-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b07bc723-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b1893397-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b2bfc678-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b3c1849e-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b4d16ad2-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/bc1e50e5-c1fa-11e8-87f0-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/a03dd690-c1fb-11e8-87f0-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/a6ed2ed5-c240-11e8-87f0-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/b9de3232-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/baf4aba8-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
            gptid/bbf26621-bf05-11e8-b5f3-0cc47a9cd5a4  ONLINE       0     0     0
        logs
          gptid/ae487c50-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0
        cache
          gptid/ae52d59d-bec3-11e8-b1c8-0cc47a9cd5a4    ONLINE       0     0     0

errors: No known data errors

Notice that the log and cache entries are indented at the same level as the regular (raidz2) vdevs.
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
Because you can mirror slog and l2arc devices. Didn't I point you at this:
https://forums.freenas.org/index.ph...s-partitioned-for-two-pools.62787/post-483761
I added and removed log and cache devices from the command line for testing. It is easily done for those, but once a storage vdev goes into the pool, it becomes a member of the pool and cannot be removed.

Yes, thanks again for that info. I am familiar with removing cache and log devices from pools, including mirrored cache and log devices, and have done this many times over the years with my pools. Apparently I need more practice, though, as I typo'd that command on my production pool and accidentally added that new mirror vdev :)
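To illustrate the kind of slip I mean (a reconstruction, not my exact command, and assuming the intent was a log mirror): leaving out the log keyword turns an intended SLOG addition into a regular top-level data vdev.
Code:
# intended: add the two SSDs as a mirrored log device
zpool add nebula log mirror da33p1 da34p1
# what actually ran: the same command without "log", which added the data vdev mirror-8
zpool add nebula mirror da33p1 da34p1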

Anyhow, please look carefully at the man page snippets I've provided below.

I think something has changed relatively recently in FreeBSD's ZFS capabilities. Look at the difference in language for the same section of the "zpool remove" man page. The "mirrored top-level vdev" is a NEW addition, as is "A mirrored top-level device (log or data) can be removed by specifying the top-level mirror for the same."


11.0-RELEASE:
https://www.freebsd.org/cgi/man.cgi...FreeBSD+11.0-RELEASE&arch=default&format=html
Code:
     zpool remove pool device ...

     Removes the specified device from the pool. This command currently
     only supports removing hot spares, cache, and log devices. A mirrored
     log device can be removed by specifying the top-level mirror for the
     log. Non-log devices that are part of a mirrored configuration can be
     removed using the "zpool detach" command. Non-redundant and raidz
     devices cannot be removed from a pool.


11.2-RELEASE:
https://www.freebsd.org/cgi/man.cgi...FreeBSD+11.2-RELEASE&arch=default&format=html
Code:
     zpool remove [-np] pool device ...

     Removes the specified device from the pool.  This command currently
     only supports removing hot spares, cache, log devices and mirrored
     top-level vdevs (mirror of leaf devices); but not raidz.

     Removing a top-level vdev reduces the total amount of space in the
     storage pool.  The specified device will be evacuated by copying all
     allocated space from it to the other devices in the pool.  In this
     case, the zpool remove command initiates the removal and returns,
     while the evacuation continues in the background.  The removal
     progress can be monitored with zpool status.  This feature must be
     enabled to be used, see zpool-features(5)

     A mirrored top-level device (log or data) can be removed by
     specifying the top-level mirror for the same.  Non-log devices or
     data devices that are part of a mirrored configuration can be removed
     using the "zpool detach" command.

     -n      Do not actually perform the removal ("no-op").  Instead,
             print the estimated amount of memory that will be used by the
             mapping table after the removal completes.  This is nonzero
             only for top-level vdevs.

     -p      Used in conjunction with the -n flag, displays numbers as
             parsable (exact) values.



From the FreeBSD 11.2 zpool-features man page, we can see they reference the new capability:

https://www.freebsd.org/cgi/man.cgi...FreeBSD+11.2-RELEASE&arch=default&format=html

Code:
     device_removal

           GUID                      com.delphix:device_removal
           READ-ONLY COMPATIBLE      no
           DEPENDENCIES              none

           This feature enables the "zpool remove" subcommand to remove
           top-level vdevs, evacuating them to reduce the total size of
           the pool.

           This feature becomes active when the "zpool remove" command is
           used on a top-level vdev, and will never return to being
           enabled.

     obsolete_counts

           GUID                      com.delphix:obsolete_counts
           READ-ONLY COMPATIBLE      yes
           DEPENDENCIES              device_removal

           This feature is an enhancement of device_removal, which will
           over time reduce the memory used to track removed devices.
           When indirect blocks are freed or remapped, we note that their
           part of the indirect mapping is "obsolete", i.e. no longer
           needed.  See also the "zfs remap" subcommand in zfs(8).

           This feature becomes active when the "zpool remove" command is
           used on a top-level vdev, and will never return to being
           enabled.
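Whether those two flags are enabled on a given pool can be checked with zpool get; a rough sketch against my pool:
Code:
zpool get all nebula | grep -E 'device_removal|obsolete_counts'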



Interestingly, the "zpool" changes were NOT referenced in the FreeBSD release notes, and a bug has been opened to have them added to the release notes:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=229545
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
Forgot to mention - I thought my error (when running the "zpool remove" command at the top of this post) could be related to missing zfs features on my pool.

I've confirmed that I have the following features enabled (see the "<---" lines):

Code:
nebula  feature@async_destroy          enabled                        local
nebula  feature@empty_bpobj            active                         local
nebula  feature@lz4_compress           active                         local
nebula  feature@multi_vdev_crash_dump  enabled                        local
nebula  feature@spacemap_histogram     active                         local
nebula  feature@enabled_txg            active                         local
nebula  feature@hole_birth             active                         local
nebula  feature@extensible_dataset     enabled                        local
nebula  feature@embedded_data          active                         local
nebula  feature@bookmarks              enabled                        local
nebula  feature@filesystem_limits      enabled                        local
nebula  feature@large_blocks           enabled                        local
nebula  feature@sha512                 enabled                        local
nebula  feature@skein                  enabled                        local
nebula  feature@device_removal         enabled                        local  <----
nebula  feature@obsolete_counts        enabled                        local  <----
nebula  feature@zpool_checkpoint       enabled                        local
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You are being told no because, if it let you do what you are trying to do, it would destroy the entire pool.
No, it doesn't. Starting in 11.2, you can remove top-level vdevs, as long as all vdevs in the pool are either single disks or mirrors. It is not limited to log/cache vdevs. And I'm pretty sure we've been through this before.
zpool remove nebula mirror-8
I believe you'd instead remove the individual disks, but I've not personally used this feature to say for sure. So I'd try zpool remove nebula gptid/whatever for the first disk in the mirror, then repeat with the second.
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
I believe you'd instead remove the individual disks, but I've not personally used this feature to say for sure. So I'd try zpool remove nebula gptid/whatever for the first disk in the mirror, then repeat with the second.

Thanks for the suggestion, Dan. I've tried the following variations with no luck:

Code:
[root@omega ~]# zpool remove nebula da33p1
cannot remove da33p1: operation not supported on this type of pool
[root@omega ~]# 
[root@omega ~]# zpool remove nebula mirror-8 da33p1
cannot remove mirror-8: invalid config; all top-level vdevs must have the same sector size and not be raidz.
cannot remove da33p1: operation not supported on this type of pool
[root@omega ~]# 
[root@omega ~]# zpool remove nebula mirror-8 da33p1 da34p1
cannot remove mirror-8: invalid config; all top-level vdevs must have the same sector size and not be raidz.
cannot remove da33p1: operation not supported on this type of pool
cannot remove da34p1: operation not supported on this type of pool
[root@omega ~]# 
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You know, if I'd paid more attention to the error message you got the first time, I might not have suggested the individual disk devices. The issue appears to be that there are differing sector sizes among your vdevs, but I have no idea how you'd track that down.
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
I'm confused by that too. The vdevs are all built from the same 3 TB Hitachi disks (with the exception of the mirror-8 vdev, which is on SSDs).

They're all 512-byte-sector disks (the SSDs and the HDDs), and I'm at a loss as to how I'd examine my vdev sector size. I assume it's an attribute of the disk, not the vdev... but I'm not sure. I've been searching for a way to determine this, but haven't found anything that looks relevant.
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
You know, if I'd paid more attention to the error message you got the first time, I might not have suggested the individual disk devices. The issue appears to be that there are differing sector sizes among your vdevs, but I have no idea how you'd track that down.

Perhaps this output can help spot the issue? I'm not sure what I'd be looking for, but notably, the ashift value is 12 for every vdev (ashift is the base-2 log of the sector size ZFS uses on a vdev, so 12 means 4 KiB).

Code:
[root@omega ~]# zdb -U /data/zfs/zpool.cache
nebula:
    version: 5000
    name: 'nebula'
    state: 0
    txg: 287153
    pool_guid: 9027025247116299332
    hostid: 2711897173
    hostname: 'omega.nebula.pw'
    com.delphix:has_per_vdev_zaps
    vdev_children: 9
    vdev_tree:
        type: 'root'
        id: 0
        guid: 9027025247116299332
        create_txg: 4
        children[0]:
            type: 'mirror'
            id: 0
            guid: 2475291395984221352
            metaslab_array: 74
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 36
            children[0]:
                type: 'disk'
                id: 0
                guid: 17373316851745507706
                path: '/dev/gptid/2fdc125f-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@2/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 37
            children[1]:
                type: 'disk'
                id: 1
                guid: 2135277072910166547
                path: '/dev/gptid/30816c63-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@18/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 38
        children[1]:
            type: 'mirror'
            id: 1
            guid: 7140999640092493264
            metaslab_array: 70
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 39
            children[0]:
                type: 'disk'
                id: 0
                guid: 3278499450693351653
                path: '/dev/gptid/31403c46-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@3/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 40
            children[1]:
                type: 'disk'
                id: 1
                guid: 16630868740606607486
                path: '/dev/gptid/31f1f182-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@17/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 41
        children[2]:
            type: 'mirror'
            id: 2
            guid: 16388418767579747938
            metaslab_array: 69
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 42
            children[0]:
                type: 'disk'
                id: 0
                guid: 3445834838771427133
                path: '/dev/gptid/32a4cfa3-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@4/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 43
            children[1]:
                type: 'disk'
                id: 1
                guid: 8383006822096954820
                path: '/dev/gptid/3356fcab-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@16/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 44
        children[3]:
            type: 'mirror'
            id: 3
            guid: 8267010056932391870
            metaslab_array: 68
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 45
            children[0]:
                type: 'disk'
                id: 0
                guid: 12532138712279008861
                path: '/dev/gptid/341311e6-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@9/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 46
            children[1]:
                type: 'disk'
                id: 1
                guid: 17317509738493107149
                path: '/dev/gptid/34c9952c-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@15/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 47
        children[4]:
            type: 'mirror'
            id: 4
            guid: 4870841244006115744
            metaslab_array: 67
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 48
            children[0]:
                type: 'disk'
                id: 0
                guid: 3612732468357025476
                path: '/dev/gptid/3580b3ad-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@a/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 49
            children[1]:
                type: 'disk'
                id: 1
                guid: 13075905208890233344
                path: '/dev/gptid/364188ba-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@1/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 50
        children[5]:
            type: 'mirror'
            id: 5
            guid: 3191970884934072143
            metaslab_array: 66
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 51
            children[0]:
                type: 'disk'
                id: 0
                guid: 12691589577473468864
                path: '/dev/gptid/370908e5-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@b/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 52
            children[1]:
                type: 'disk'
                id: 1
                guid: 14524241501647678296
                path: '/dev/gptid/37cf00a5-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@10/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 53
        children[6]:
            type: 'mirror'
            id: 6
            guid: 12261413401383707857
            metaslab_array: 65
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 54
            children[0]:
                type: 'disk'
                id: 0
                guid: 18432269624210025455
                path: '/dev/gptid/388e6fef-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@c/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 55
            children[1]:
                type: 'disk'
                id: 1
                guid: 3171251493808063131
                path: '/dev/gptid/3945fee7-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@f/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 56
        children[7]:
            type: 'mirror'
            id: 7
            guid: 8744912747504544288
            metaslab_array: 60
            metaslab_shift: 34
            ashift: 12
            asize: 2998440558592
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 57
            children[0]:
                type: 'disk'
                id: 0
                guid: 1487363212406276464
                path: '/dev/gptid/3a08fb45-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@d/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 58
            children[1]:
                type: 'disk'
                id: 1
                guid: 16473320972417207545
                path: '/dev/gptid/3ad7a643-15f4-11e9-b1b0-000c29f308bf'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@e/p2'
                whole_disk: 1
                create_txg: 4
                com.delphix:vdev_zap_leaf: 59
        children[8]:
            type: 'mirror'
            id: 8
            guid: 4746369589200209824
            metaslab_array: 1632
            metaslab_shift: 29
            ashift: 12
            asize: 17175150592
            is_log: 0
            create_txg: 287150
            com.delphix:vdev_zap_top: 1629
            children[0]:
                type: 'disk'
                id: 0
                guid: 3101147006343329796
                path: '/dev/da33p1'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@13/p1'
                whole_disk: 1
                create_txg: 287150
                com.delphix:vdev_zap_leaf: 1630
            children[1]:
                type: 'disk'
                id: 1
                guid: 9629253759042882279
                path: '/dev/da34p1'
                phys_path: 'id1,enc@n50050cc1020371ce/type@0/slot@7/p1'
                whole_disk: 1
                create_txg: 287150
                com.delphix:vdev_zap_leaf: 1631
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
 

sunshine931

Explorer
Joined
Jan 23, 2018
Messages
54
Update: I've emailed the freebsd-fs mailing list for their insight. I'll update here when I hear back.
 

Andy58

Cadet
Joined
Jul 19, 2019
Messages
1
Hello,

being in a similar situation I ran some tests in a VM.

I set up a pool with two 2-disk mirrors (mirror-0, mirror-1) and generated some data, then added another mirror (mirror-2) to the pool. I added more data and then successfully removed mirror-2 from the pool with: zpool remove tank mirror-2

In zpool status you can watch ZFS copying the data from mirror-2 over to the remaining mirrors in the pool.
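Roughly, the test looked like this (a sketch; the disk names are just whatever the VM handed out):
Code:
zpool create tank mirror da1 da2 mirror da3 da4   # mirror-0 and mirror-1
# ...wrote some data...
zpool add tank mirror da5 da6                     # becomes mirror-2
# ...wrote more data...
zpool remove tank mirror-2                        # evacuation runs in the background
zpool status tank                                 # shows the evacuation progress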

However, when I repeated the process (adding the two drives back as mirror-3 and trying to remove it again), I ran into the same error as above:

cannot remove mirror-3: invalid config; all top-level vdevs must have the same sector size and not be raidz.

After a reboot the error did not recur and mirror-3 was removed without issue.


Just sharing my experience; maybe it's of help to someone. This was FreeNAS 11.2, btw.

Thank you
Andy
 