Cannot replace disks in pool

dra6onfire

Dabbler
Joined
Jan 19, 2016
Messages
10
I am currently running TrueNAS CORE 12.0-U2.1. My setup has three pools and a boot volume. One of my pools is a mirrored vdev of two small SSDs used for virtualization (we'll call this pool 1 to keep things straight). The other two are larger, each containing one raidz1 vdev of four spinning disks (we'll call these pool 2 and pool 3).

I am currently going through and replacing each drive in the pools with larger drives to increase storage. I started with pool 1. I offlined one drive from the zpool status UI menu, shut down the server, swapped out the offlined disk, started the server, and executed the replace operation from the zpool status UI menu. As it always has in the past, this went flawlessly, and once the resilvering completed I repeated the same for the other disk in pool 1. I then moved on to pool 2. Same process on the first disk, but when I attempted the replace operation I got a gnarly error that somehow seems to be reporting a disk not found (full trace below). I put the original disk back and tried replacing a different disk in pool 2, with exactly the same result. As a sanity check, I put the original disks for pool 2 back and attempted pool 3. Pool 3, other than disk size, is configured in the same raidz1 layout as pool 2 and was provisioned at the same time. Pool 3 replaced all drives and resilvered with absolutely no issues, just like pool 1. I went back and re-attempted pool 2 after completing pool 3 and got the same error.

Code:
[2021/02/28 14:14:49] (ERROR) middlewared.job.run():379 - Job <bound method accepts.<locals>.wrap.<locals>.nf of <middlewared.plugins.pool_.replace_disk.PoolService object at 0x81ce9e130>> failed
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 277, in replace
    target.replace(newvdev)
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 277, in replace
    target.replace(newvdev)
  File "libzfs.pyx", line 2060, in libzfs.ZFSVdev.replace
libzfs.ZFSException: already in replacing/spare config; wait for completion or use 'zpool detach'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 91, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 279, in replace
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_BADTARGET] already in replacing/spare config; wait for completion or use 'zpool detach'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/replace_disk.py", line 122, in replace
    raise e
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool_/replace_disk.py", line 102, in replace
    await self.middleware.call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1203, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1209, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1136, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BADTARGET] already in replacing/spare config; wait for completion or use 'zpool detach'


I have no idea how to proceed. I tried the same operations from the command line and got exactly the same error. The disks that failed to be used as replacements in pool 2 worked perfectly for pool 3. Ideally I don't want to have to copy all the data to another storage location, rebuild a new pool with the new disks, and then move all the data back. Any help would be fantastic.
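
For reference, the CLI replace attempt was essentially of this form (the new partition's gptid is omitted here):
Code:
zpool replace data 2517893840874148688 gptid/<rawuuid-of-new-data-partition>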
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
already in replacing/spare config; wait for completion or use 'zpool detach'
This line gives you a clue...

zpool status -v will give you a look at what's happening
 

dra6onfire

Dabbler
Joined
Jan 19, 2016
Messages
10
I agree that the above seems to suggest that the replace was done automatically and I should see resilvering underway, but no such luck. Autoreplace is off, and zpool status during the failures shows the pool operating in a degraded state with no resilvering reported.
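
For reference, the autoreplace setting can be confirmed from the CLI with something like:
Code:
zpool get autoreplace data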
 

dra6onfire

Dabbler
Joined
Jan 19, 2016
Messages
10
It's probably worth noting that I have been using FreeNAS for a little over 10 years. I am familiar with the system, but I wouldn't consider myself an expert. I know the problem isn't as simple as autoreplace executing without my knowledge. I think the error message is a bit of a red herring and the bad-target part is the real issue, mostly because I can't find any evidence that the disk is already being replaced. I can even add the new disk as a hot spare to the pool, so it is definitely not in some unusable state. The pool members in zpool status are listed as ada device names instead of their gptid values. The other pools were as well before upgrading them. I am hoping this is some kind of labeling issue that can be resolved.
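
As a rough sketch of that hot-spare test (the gptid below is a placeholder for the new disk's data partition):
Code:
# add the new disk's data partition as a hot spare, then remove it again
zpool add data spare gptid/<rawuuid-of-new-data-partition>
zpool remove data gptid/<rawuuid-of-new-data-partition>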
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
We need to see zpool status -v (in code tags) to understand what's going on.
 

dra6onfire

Dabbler
Joined
Jan 19, 2016
Messages
10
The pool encountering the error is "data". The 416k resilver is from when I put the original drive back in and onlined it a few days ago.

zpool status -v with original disks online:

Code:
# zpool status -v
  pool: data
 state: ONLINE
  scan: resilvered 416K in 00:00:04 with 0 errors on Sun Feb 28 14:15:57 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        data                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/be5dcb8a-f91f-11e4-a2e1-1c6f659ce9bc  ONLINE       0     0     0
            ada5p2                                      ONLINE       0     0     0
            ada4p2                                      ONLINE       0     0     0
            ada8p2                                      ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:02:46 with 0 errors on Sun Feb 28 03:47:46 2021
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: media
 state: ONLINE
  scan: resilvered 372K in 00:00:01 with 0 errors on Sun Feb 28 14:10:15 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        media                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/a6f8b095-78c1-11eb-8b07-1c6f659ce9bc  ONLINE       0     0     0
            gptid/2d4dbf56-7965-11eb-b749-1c6f659ce9bc  ONLINE       0     0     0
            gptid/4ee9b986-7918-11eb-89c8-1c6f659ce9bc  ONLINE       0     0     0
            gptid/a3005ea8-79b8-11eb-be1f-1c6f659ce9bc  ONLINE       0     0     0

errors: No known data errors

  pool: virtual
 state: ONLINE
  scan: scrub repaired 0B in 00:04:19 with 0 errors on Sun Feb 28 05:04:19 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        virtual                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/8bcea8a7-202a-11e9-8278-1c6f659ce9bc  ONLINE       0     0     0
            gptid/8c1a75de-202a-11e9-8278-1c6f659ce9bc  ONLINE       0     0     0

errors: No known data errors


zpool status -v with one disk offline, removed, and new disk in its place BEFORE attempting replace operation:

Code:
# zpool status -v
  pool: data
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 416K in 00:00:04 with 0 errors on Sun Feb 28 14:15:57 2021
config:

        NAME                     STATE     READ WRITE CKSUM
        data                     DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            2517893840874148688  OFFLINE      0     0     0  was /dev/gptid/be5dcb8a-f91f-11e4-a2e1-1c6f659ce9bc
            ada5p2               ONLINE       0     0     0
            ada4p2               ONLINE       0     0     0
            ada8p2               ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:02:46 with 0 errors on Sun Feb 28 03:47:46 2021
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: media
 state: ONLINE
  scan: resilvered 372K in 00:00:01 with 0 errors on Sun Feb 28 14:10:15 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        media                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/a6f8b095-78c1-11eb-8b07-1c6f659ce9bc  ONLINE       0     0     0
            gptid/2d4dbf56-7965-11eb-b749-1c6f659ce9bc  ONLINE       0     0     0
            gptid/4ee9b986-7918-11eb-89c8-1c6f659ce9bc  ONLINE       0     0     0
            gptid/a3005ea8-79b8-11eb-be1f-1c6f659ce9bc  ONLINE       0     0     0

errors: No known data errors

  pool: virtual
 state: ONLINE
  scan: scrub repaired 0B in 00:04:19 with 0 errors on Sun Feb 28 05:04:19 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        virtual                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/8bcea8a7-202a-11e9-8278-1c6f659ce9bc  ONLINE       0     0     0
            gptid/8c1a75de-202a-11e9-8278-1c6f659ce9bc  ONLINE       0     0     0

errors: No known data errors


zpool status -v with one disk offline, removed, and new disk in its place AFTER attempting replace operation (after error is encountered):

Code:
# zpool status -v
  pool: data
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 416K in 00:00:04 with 0 errors on Sun Feb 28 14:15:57 2021
config:

        NAME                     STATE     READ WRITE CKSUM
        data                     DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            2517893840874148688  OFFLINE      0     0     0  was /dev/gptid/be5dcb8a-f91f-11e4-a2e1-1c6f659ce9bc
            ada5p2               ONLINE       0     0     0
            ada4p2               ONLINE       0     0     0
            ada8p2               ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 00:02:46 with 0 errors on Sun Feb 28 03:47:46 2021
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada1p2      ONLINE       0     0     0

errors: No known data errors

  pool: media
 state: ONLINE
  scan: resilvered 372K in 00:00:01 with 0 errors on Sun Feb 28 14:10:15 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        media                                           ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/a6f8b095-78c1-11eb-8b07-1c6f659ce9bc  ONLINE       0     0     0
            gptid/2d4dbf56-7965-11eb-b749-1c6f659ce9bc  ONLINE       0     0     0
            gptid/4ee9b986-7918-11eb-89c8-1c6f659ce9bc  ONLINE       0     0     0
            gptid/a3005ea8-79b8-11eb-be1f-1c6f659ce9bc  ONLINE       0     0     0

errors: No known data errors

  pool: virtual
 state: ONLINE
  scan: scrub repaired 0B in 00:04:19 with 0 errors on Sun Feb 28 05:04:19 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        virtual                                         ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/8bcea8a7-202a-11e9-8278-1c6f659ce9bc  ONLINE       0     0     0
            gptid/8c1a75de-202a-11e9-8278-1c6f659ce9bc  ONLINE       0     0     0

errors: No known data errors
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
FYI, you can do zpool status -v data to just get that pool

So we may need to try directly with the CLI if the GUI won't cooperate.

You will need to identify your replacement disk in the Storage | Disks list as a disk that's able to be wiped.

Let's assume it's /dev/ada3 (and you're really sure it doesn't have needed content on it):

gpart destroy -F /dev/ada3

gpart create -s gpt /dev/ada3

Then we create the swap and data partitions:
gpart add -s 2G -t freebsd-swap /dev/ada3

gpart add -t freebsd-zfs /dev/ada3

Then you look for the rawuuid value of ada3p2 in the gpart list output and use it to replace the offlined disk in the pool:
zpool replace <poolname> 2517893840874148688 gptid/<rawuuid of data partition from gpart list>

Then zpool status -v data to make sure it all looks right.
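
A quick way to pull just the rawuuid lines out of that output (assuming ada3 as above):
Code:
gpart list ada3 | grep rawuuid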
 

dra6onfire

Dabbler
Joined
Jan 19, 2016
Messages
10
FYI, you can do zpool status -v data to just get that pool
I am aware. I just wasn't certain what you were looking for.

No dice:
Code:
# gpart destroy -F /dev/ada3
ada3 destroyed

# gpart create -s gpt /dev/ada3
ada3 created

# gpart add -s 2G -t freebsd-swap /dev/ada3
ada3p1 added

# gpart add -t freebsd-zfs /dev/ada3
ada3p2 added

# gpart list
(truncated)
2. Name: ada3p2
   Mediasize: 7999415697408 (7.3T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,5eb87257-7b46-11eb-8735-1c6f659ce9bc,0x400028,0x3a3412a60)
   rawuuid: 5eb87257-7b46-11eb-8735-1c6f659ce9bc
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 7999415697408
   offset: 2147504128
   type: freebsd-zfs
   index: 2
   end: 15628053127
   start: 4194344
 
# zpool replace data 2517893840874148688 gptid/5eb87257-7b46-11eb-8735-1c6f659ce9bc
cannot replace 2517893840874148688 with gptid/5eb87257-7b46-11eb-8735-1c6f659ce9bc: already in replacing/spare config; wait for completion or use 'zpool detach'


Same error as before, and no change in the zpool status output for the pool:

Code:
# zpool status -v data
  pool: data
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 416K in 00:00:04 with 0 errors on Sun Feb 28 14:15:57 2021
config:

        NAME                     STATE     READ WRITE CKSUM
        data                     DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            2517893840874148688  OFFLINE      0     0     0  was /dev/gptid/be5dcb8a-f91f-11e4-a2e1-1c6f659ce9bc
            ada5p2               ONLINE       0     0     0
            ada4p2               ONLINE       0     0     0
            ada8p2               ONLINE       0     0     0

errors: No known data errors
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Was it just pure luck that we landed on ada3? (I was just picking a random example that I had used before and was not using any information you had given me).

Maybe there's something in this:

Try checking the ashift of the pool...
zpool get ashift

If it's 0 maybe we can try the command in that article to get it going.
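
For just the one pool in question, something like:
Code:
zpool get ashift data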
 

dra6onfire

Dabbler
Joined
Jan 19, 2016
Messages
10
It was pretty magical that ada3 just happened to be the correct device after you used it as an example. Pure wizardry.

I think zpool's reporting of the ashift property is lying:
Code:
# zpool get ashift
NAME          PROPERTY  VALUE   SOURCE
data          ashift    0       default
freenas-boot  ashift    0       default
media         ashift    0       default
virtual       ashift    0       default


But from zdb I get non-zero values
Code:
# zdb -C -U /data/zfs/zpool.cache | egrep "(^[a-z]+:)|ashift"
data:
            ashift: 9
media:
            ashift: 12
virtual:
            ashift: 12

The solution from that ticket may work but this does throw a wrench in my plans since I need this pool to have an ashift of 12 for the new drives that are going in.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
I'm not sure you can have additional drives added with a different ashift than the existing ones, so I'm a little stuck if that's a requirement. But it does at least give us a reasonable understanding of what's blocking you, despite the woefully inaccurate error message.
 

dra6onfire

Dabbler
Joined
Jan 19, 2016
Messages
10
I appreciate the help. Pretty sure I'm going to have to tear down the existing pool and build a new one.
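
If it does come to a rebuild, the ashift can at least be pinned explicitly when creating the new pool from the CLI (a sketch only; the gptids are placeholders and the TrueNAS GUI is the usual route for pool creation):
Code:
# force 4K sectors (ashift=12) on the rebuilt raidz1 pool
zpool create -o ashift=12 data raidz1 gptid/<disk1> gptid/<disk2> gptid/<disk3> gptid/<disk4>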
 
Joined
May 29, 2021
Messages
2
Did you manage to resolve this? I’m in almost exactly the same situation and have followed the steps in the proposed solution with equal lack of success.
 
Joined
May 29, 2021
Messages
2
Update: adding -o ashift=9 from the linked article appears to have resolved the problem for me. My pool is now happily resilvering.
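
The general form, with placeholders for the pool name and device IDs, is roughly:
Code:
zpool replace -o ashift=9 <poolname> <old-disk-guid-or-gptid> gptid/<rawuuid-of-new-data-partition>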
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
Update: adding -o ashift=9 from the linked article appears to have resolved the problem for me. My pool is now happily resilvering.
I'm having the exact same issue, I think, and I'm losing my mind.

How did you find the exact path/ID of the new drive? >
gptid/5eb87257-7b46-11eb-8735-1c6f659ce9bc

And what does the command look like at the end?

zpool replace -o ashift=9 poolName? /dev/disk/by-id/diskIdOldBroken? /dev/disk/by-id/diskIdNewOK?

How did you then check the resilvering status/progress?

TIA!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
How did you find the exact path/ID of the new drive?
Actually, that will be the gptid of the partition, not the drive. If you already have the new one partitioned, use glabel status or gpart list da1 (replacing da1 with the disk name) and then look for the rawuuid of the partition you're going to use.

If you need to partition the disk (again, replacing da1 with the drive you want to use):

Make sure nothing is on the disk
gpart destroy -F /dev/da1
Set the disk partitioning table to gpt
gpart create -s gpt /dev/da1
Create the swap
gpart add -s 2G -t freebsd-swap /dev/da1
and data partitions
gpart add -t freebsd-zfs /dev/da1
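
Putting the last step together (pool name and IDs are placeholders; add -o ashift=9 only if you hit the same ashift mismatch discussed above):
Code:
# look up the rawuuid of the new data partition, then:
zpool replace poolname <old-disk-guid-or-gptid> gptid/<rawuuid-of-da1p2>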
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
How did you then check the resilvering status/progress?
Status of resilvering can be seen with zpool status poolname (replace poolname with the actual name of your pool)
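
If you want to poll it from a shell, something like this works (poolname is a placeholder):
Code:
# print the scan/resilver section every 60 seconds
while true; do zpool status poolname | grep -A 2 'scan:'; sleep 60; done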
 

Dariusz1989

Contributor
Joined
Aug 22, 2017
Messages
185
Status of resilvering can be seen with zpool status poolname (replace poolname with the actual name of your pool)
Thanks! Yeah, I tried, but it does not give a percentage or ETA/speed etc. I can see it's moving now in the GUI, though with no ETA; it's at 35%. It's a 10TB data-only pool of 5x6TB raidz2, so it's quite fast I think. Should be done in a jiffy!

gpart list da1 seems to be what I needed, thanks!

It sort of gives me this:

Code:
1. Name: da4p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,e9fb0b1b-0349-11ec-8729-90e2ba0a51a8,0x80,0x400000)
   rawuuid: e9fb0b1b-0349-11ec-8729-90e2ba0a51a8
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da4p2
   Mediasize: 5999027556352 (5.5T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,ea158ffc-0349-11ec-8729-90e2ba0a51a8,0x400080,0x2ba60f408)
   rawuuid: ea158ffc-0349-11ec-8729-90e2ba0a51a8
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 5999027556352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 11721045127
   start: 4194432


But I don't know which uuid is the one I need. (I already did the replace.) If I were to look at it again, I'd have no idea which ID to use.

TIA
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
It's da4p2 that has 5.5TB, so that will be it.
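
If you want to double-check, the gptid label for that partition can be matched up like this (da4 and the pool name assumed from your output above):
Code:
# confirm which gptid belongs to da4p2, then confirm it's in the pool
glabel status | grep da4p2
zpool status -v poolname | grep ea158ffc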
 

wolfman

Dabbler
Joined
Apr 11, 2018
Messages
13
Thanks for this thread! :smile:

Ran into a similar issue where the GUI could not replace a faulty disk, failing with the error "already in replacing/spare config; wait for completion or use 'zpool detach'".

I have a pool dating back to FreeNAS 9 that was expanded with one additional VDEV (raidz2-4) under TrueNAS 12.0-U2.

Code:
# zpool status -v ggmtank01
  pool: ggmtank01
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 8K in 07:32:07 with 0 errors on Sun Aug 29 07:32:37 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        ggmtank01                                       DEGRADED     0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/071d138c-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/07d35682-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/ef627048-743e-11eb-8d93-e4434bb19fe0  ONLINE       0     0     0
            gptid/167a10f2-7aa8-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/e82432e5-8585-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/73340a80-5449-11e9-b326-000743400660  ONLINE       0     0     0
            gptid/49297ee3-00c5-11ec-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/625036be-8586-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/44286cae-7aa9-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/3ffd3cdc-7440-11eb-8d93-e4434bb19fe0  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/c071d681-743c-11eb-8d93-e4434bb19fe0  ONLINE       0     0     0
            gptid/0f702ce5-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/d1bdee26-78df-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/f3bcbd88-7aa9-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/0826e283-8587-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
          raidz2-4                                      DEGRADED     0     0     0
            gptid/8aaceda5-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/8b2fc90b-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/8c2ee1c1-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/8bf70e18-9ea7-11eb-bdc3-e4434bb19fe0  ONLINE       0     0     0
            gptid/8c8ddaa8-9ea7-11eb-bdc3-e4434bb19fe0  OFFLINE      0     0     0
        logs
          mirror-3                                      ONLINE       0     0     0
            gptid/123e1981-9644-11e8-8380-000743400660  ONLINE       0     0     0
            gptid/12b0bdb1-9644-11e8-8380-000743400660  ONLINE       0     0     0
        cache
          gptid/f4918c31-ff0f-11e9-b449-000743400660    ONLINE       0     0     0

errors: No known data errors


zpool reported the following ashift for the pool:
Code:
# zpool get ashift ggmtank01
NAME       PROPERTY  VALUE   SOURCE
ggmtank01  ashift    0       default


But zdb shows these ashift values for the pool, and the value is different for the newly added VDEV (children[4])! I am pretty sure I haven't changed any default settings when adding the new VDEV, but that was a couple of months back.
Code:
# zdb -C -U /data/zfs/zpool.cache
(truncated)
ggmtank01:
    vdev_tree:
        children[0]:
            type: 'raidz'
            nparity: 2
            ashift: 12
        children[1]:
            type: 'raidz'
            nparity: 2
            ashift: 12
        children[2]:
            type: 'raidz'
            nparity: 2
            ashift: 12
        children[3]:
            type: 'mirror'
            ashift: 12
        children[4]:
            type: 'raidz'
            nparity: 2
            ashift: 9


Nonetheless, thanks to this thread, I was able to replace the faulted disk with the following command, and the pool is currently resilvering.
Code:
zpool replace -o ashift=9 ggmtank01 gptid/<faulty-rawuuid> gptid/<new-rawuuid>


Not sure if this is a bug? I will gladly open an issue in the bug tracker.
Edit: Created an issue https://jira.ixsystems.com/browse/NAS-112093
 