Hi all, I am trying to fix an issue on my TrueNAS storage that appeared after I replaced a failed disk. The pool is currently in a degraded state, as shown in the zpool status output below:
  pool: nas1
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 01:06:00 with 0 errors on Sun Oct 17 10:50:10 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        nas1                                            DEGRADED     0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/8ef4519d-b17c-11eb-b7e2-d4ae528dbb7e  ONLINE       0     0     0
            gptid/06732ffa-a69a-11eb-83d4-d4ae528dbb7e  ONLINE       0     0     0
            gptid/169562f6-9f8d-11eb-9c8a-d4ae528dbb7e  ONLINE       0     0     0
            gptid/16155c26-9f8d-11eb-9c8a-d4ae528dbb7e  ONLINE       0     0     0
            gptid/16dce22b-9f8d-11eb-9c8a-d4ae528dbb7e  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/54aedc98-b260-11eb-b7e2-d4ae528dbb7e  ONLINE       0     0     0
            gptid/57963c77-b260-11eb-b7e2-d4ae528dbb7e  ONLINE       0     0     0
            gptid/56c142f0-b260-11eb-b7e2-d4ae528dbb7e  ONLINE       0     0     0
            gptid/5b99c3da-b260-11eb-b7e2-d4ae528dbb7e  ONLINE       0     0     0
            gptid/5cee9c1d-b260-11eb-b7e2-d4ae528dbb7e  ONLINE       0     0     0
            gptid/5dea0e48-b260-11eb-b7e2-d4ae528dbb7e  ONLINE       0     0     0
          da1p2                                         DEGRADED     0     0     0  too many errors

errors: No known data errors
da1p2 is the replacement disk: it was swapped in for the failed member of the raidz2-0 vdev, but instead of joining that vdev it now appears as a separate top-level vdev in a degraded state. I tried detaching the disk, but got this error:

cannot detach da1p2: only applicable to mirror and replacing vdevs

so I am unable to bring the pool back to a healthy state. Any hints on how to fix this? Thanks in advance for any feedback, much appreciated.
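In case it helps with diagnosis, my guess is that the disk ended up as a stand-alone top-level vdev because it was striped into the pool with 'zpool add' rather than swapped in with 'zpool replace'. I did the replacement through the GUI, so the exact commands below are an assumption (the <old-gptid> placeholder stands for the failed disk's gptid, which I no longer have):

```shell
# What I suspect effectively happened: the new disk was added as its
# own top-level (non-redundant) vdev alongside the two raidz2 vdevs.
zpool add nas1 da1p2

# What a replacement should have looked like: swap the failed member
# of raidz2-0 for the new disk and let it resilver into the vdev.
zpool replace nas1 <old-gptid> da1p2

# The detach I attempted, which fails because detach only applies to
# mirror and replacing vdevs, not to a plain top-level vdev.
zpool detach nas1 da1p2
```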