Replaced Disk Duplicates / Not Detaching or Removing (have researched a bunch of threads).

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
Looks like one of your hot-spares kicked in and after replacement didn't go back to being a spare (this is a bug that was only recently fixed in U5).

Have you tried zpool detach Barrel gptid/d3683bab-de07-11eb-ab6d-a8a15938b457
Ok JFP.. that worked to a degree.. looks kind of closer to where it should be.. there is a weird SPARE duplicate going on.. da7 should be the 1x hot spare (in red).. instead there are 2x spares online which I didn't set up (da6 / da7) & the pool is not degraded.. this just sort of appeared post drive replacement.. there should be 10x drives in the pool with 1x hot spare..

Does this matter? My OCD is driving me a bit nuts.. as my gut is saying if it looks / feels wrong it probs is.. LMK.. thank you!

1688355360089.png
 
Joined
Jul 3, 2015
Messages
926
Can you run zpool status from the CLI and show the output please.
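(From a root shell, something like the following should do it; the -v flag just adds verbose error detail:)

Code:
zpool status -v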
 

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
Code:
Warning: the supported mechanisms for making configuration changes
are the TrueNAS WebUI and API exclusively. ALL OTHERS ARE
NOT SUPPORTED AND WILL RESULT IN UNDEFINED BEHAVIOR AND MAY
RESULT IN SYSTEM FAILURE.

root@truenas[~]# zpool status
  pool: Barrel
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Jul  3 11:03:10 2023
        16.4T scanned at 1.83G/s, 11.0T issued at 1.23G/s, 35.8T total
        1.05T resilvered, 30.69% done, 05:44:49 to go
config:

        NAME                                              STATE     READ WRITE CKSUM
        Barrel                                            ONLINE       0     0   0
          raidz2-0                                        ONLINE       0     0   0
            gptid/16ea0816-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/16a7f8e2-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/16c3758f-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/16cf24a0-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            spare-4                                       ONLINE       0     0   0
              gptid/a946fb3c-05d6-11ee-8f4a-a8a15938b457  ONLINE       0     0   0
              gptid/5592df15-05d9-11ee-8f4a-a8a15938b457  ONLINE       0     0   0  (resilvering)
            gptid/1863c5af-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/1842b18e-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/18579aaa-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/18246fdb-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/18821fbd-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
        spares
          gptid/5592df15-05d9-11ee-8f4a-a8a15938b457      INUSE     currently in use

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:00:08 with 0 errors on Fri Jun 30 03:45:08 2023
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Your da7 has been consumed by the pool as a replacement and is currently resilvering. It is therefore no longer available as a spare. Wait for the resilvering to complete and then let's see what happens. It looks like ZFS is entirely satisfied with the state of everything.
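If you want to keep an eye on progress from the CLI in the meantime, something along these lines works (pool name taken from your output above):

Code:
zpool status Barrel    # shows resilver progress, estimated time remaining and per-device state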
 

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
Oh... ok.. what I thought.. looking at an 11x drive pool with no hot spare.. will hang till it's done.. hopefully can reconcile it back to a 10x disk pool with 1x hot spare post resilvering.. thank you chaps..
 

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
Your da7 has been consumed by the pool as a replacement and is currently resilvering. It is therefore no longer available as a spare. Wait for the resilvering to complete and then let's see what happens. It looks like ZFS is entirely satisfied with the state of everything.
Lols btw.. it sounds a bit like the 'pool' is becoming self aware, sentient & is consuming resources around it as it grows stronger! Will keep you posted.. danke!
 

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
Please do report back with what happens.
Resilvering has finished. zpool status below.. TIA JGR. Looks like da7 is still in a 'Schrödinger'-like state, showing in dual 'SPARE' instances.

Code:
Warning: the supported mechanisms for making configuration changes
are the TrueNAS WebUI and API exclusively. ALL OTHERS ARE
NOT SUPPORTED AND WILL RESULT IN UNDEFINED BEHAVIOR AND MAY
RESULT IN SYSTEM FAILURE.

root@truenas[~]# zpool status
  pool: Barrel
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: resilvered 3.41T in 08:54:42 with 0 errors on Tue Jul  4 09:41:17 2023
config:

        NAME                                              STATE     READ WRITE CKSUM
        Barrel                                            ONLINE       0     0   0
          raidz2-0                                        ONLINE       0     0   0
            gptid/16ea0816-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/16a7f8e2-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/16c3758f-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/16cf24a0-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            spare-4                                       ONLINE       0     0   0
              gptid/a946fb3c-05d6-11ee-8f4a-a8a15938b457  ONLINE       0     0   0
              gptid/5592df15-05d9-11ee-8f4a-a8a15938b457  ONLINE       0     0   0
            gptid/1863c5af-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/1842b18e-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/18579aaa-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/18246fdb-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
            gptid/18821fbd-dd57-11eb-ab6d-a8a15938b457    ONLINE       0     0   0
        spares
          gptid/5592df15-05d9-11ee-8f4a-a8a15938b457      INUSE     currently in use

errors: No known data errors
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I believe what's needed now is

# zpool detach Barrel gptid/5592df15-05d9-11ee-8f4a-a8a15938b457
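
General pattern for anyone who finds this later: once the original device inside the spare-N vdev is healthy again, detaching the spare's gptid collapses spare-N back to a single device and the spare should reappear under "spares" as AVAIL. A sketch with placeholder names, not literal devices:

Code:
zpool detach <pool> <gptid-of-spare-device>
zpool status <pool>    # the spare should now show AVAIL under "spares"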
 

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
Will give that a whirl.. as a NOOB.. will that remove the duplicate SPARE issue & return da7 as the SPARE? OR we just run it & find out what 'Skynet' chooses to do?
 

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
I believe what's needed now is

# zpool detach Barrel gptid/5592df15-05d9-11ee-8f4a-a8a15938b457
Ok.. you Sir.. nailed it.. thank you for the help / patience all.. BTW the good thing is that there is now a recent / well-documented thread here to help any future people with this issue (myself included).

That did resolve the duplicate spare issue / conflict & return the 'pool' back to 'ONLINE'.

CASE CLOSED.

:smile::smile::smile:

1688440522701.png
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
"Two lemons and a yellow noodle? Them Aussies are a bit weird." ;-)

Happy to help, enjoy your NAS and be pleasantly comfortable in the knowledge that you now know how sparing works.
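
For reference, the spare lifecycle in raw ZFS terms looks roughly like this (a sketch only; on TrueNAS the supported way to add and manage spares is the WebUI, and the device names are placeholders):

Code:
zpool add tank spare <device>       # register a hot spare with the pool
zpool status tank                   # after a disk fault the spare attaches automatically,
                                    # a spare-N vdev appears and the spare shows INUSE
zpool detach tank <gptid-of-spare>  # once the data vdev is healthy again, return it to AVAIL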
 
Joined
Jul 3, 2015
Messages
926
The issue of hot-spares not returning to hot-spare after activation was apparently fixed in U5 so perhaps update and you may not need to do this step next time.
 

Jimbob

Dabbler
Joined
Jun 8, 2023
Messages
19
The issue of hot-spares not returning to hot-spare after activation was apparently fixed in U5 so perhaps update and you may not need to do this step next time.
Brill. Ta JFP. Will take a look, I did see a couple of update alerts pop up.
 

isopropyl

Contributor
Joined
Jan 29, 2022
Messages
159
This might be relevant to an issue I am currently experiencing.
I cannot detach my spare, even though the pool is healthy.
Also not sure what originally happened. I had replaced a failed drive, it resilvered the new drive, then I woke up and the new drive showed as degraded. So the spare was still active, with one working drive in that vdev and one failed drive.

I left it and later had to take the server out to do something unrelated, so I removed all the HDDs because of the weight. After I put them all back in (same exact spots) it came online and everything showed healthy, all drives looked good. I figured maybe one was just seated poorly. Ran a scrub and all looked good still.
But that spare never detached like it usually auto-detaches. And if I hit detach in the GUI for that spare in the vdev, it gives an error:
Code:
[EZFS_NOTSUP] Cannot detach root-level vdevs

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 232, in __zfs_vdev_operation
    op(target, *args)
  File "libzfs.pyx", line 402, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 232, in __zfs_vdev_operation
    op(target, *args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 241, in <lambda>
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "libzfs.pyx", line 2158, in libzfs.ZFSVdev.detach
libzfs.ZFSException: Cannot detach root-level vdevs

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 241, in detach
    self.__zfs_vdev_operation(name, label, lambda target: target.detach())
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 234, in __zfs_vdev_operation
    raise CallError(str(e), e.code)
middlewared.service_exception.CallError: [EZFS_NOTSUP] Cannot detach root-level vdevs
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 139, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1236, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1103, in detach
    await self.middleware.call('zfs.pool.detach', pool['name'], found[1]['guid'])
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1279, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1244, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1250, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1169, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1152, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_NOTSUP] Cannot detach root-level vdevs



Code:
# zpool status -v
  pool: PrimaryPool
 state: ONLINE
  scan: resilvered 54.7M in 00:00:11 with 0 errors on Sun Aug 27 18:13:43 2023
config:

        NAME                                              STATE     READ WRITE CKSUM
        PrimaryPool                                       ONLINE       0     0   0
          mirror-0                                        ONLINE       0     0   0
            gptid/d7476d46-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d8d6aa36-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-1                                        ONLINE       0     0   0
            gptid/d9a6f5dc-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/db71bcb5-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-2                                        ONLINE       0     0   0
            gptid/d8b2f42f-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/d96847a9-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-3                                        ONLINE       0     0   0
            gptid/d9fb7757-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
            gptid/da1e1121-32ca-11ec-b815-002590f52cc2    ONLINE       0     0   0
          mirror-4                                        ONLINE       0     0   0
            gptid/9fd0872d-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
            gptid/9ff0f041-8f64-11ec-8462-002590f52cc2    ONLINE       0     0   0
          mirror-5                                        ONLINE       0     0   0
            gptid/14811777-1b6d-11ed-8423-ac1f6be66d76    ONLINE       0     0   0
            gptid/0cd1e905-3c2e-11ee-96af-ac1f6be66d76    ONLINE       0     0   0
          mirror-6                                        ONLINE       0     0   0
            gptid/749a1891-1b5c-11ee-941f-ac1f6be66d76    ONLINE       0     0   0
            spare-1                                       ONLINE       0     0   0
              gptid/c774316e-3c2c-11ee-96af-ac1f6be66d76  ONLINE       0     0   0
              gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76  ONLINE       0     0   0
        spares
          gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76      INUSE     currently in use
          gptid/0d56b97d-1e91-11ed-a6aa-ac1f6be66d76      AVAIL

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Sat Aug 26 03:46:06 2023
config:

librewolf_xHGQOQbVjm.png
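
I'm guessing the CLI equivalent of what worked earlier in this thread would be to detach the INUSE spare's gptid from the data vdev directly, though I haven't tried it yet:

Code:
zpool detach PrimaryPool gptid/0d48d4ab-1e91-11ed-a6aa-ac1f6be66d76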
 