Hello,
I had to replace a faulty 3TB drive in a 4-disk pool. I did it per the manual using a new 10TB disk, and the resilver went fine.
My intention now was to replace all the remaining 3TB disks, one by one, with 10TB disks. To avoid confusion, I powered off the box, swapped the first disk, powered back on, and from the GUI added the new 10TB disk to the pool to start the resilver.
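(For reference, the shell equivalent of that swap would be something like the following; the gptid values here are placeholders, since I did the actual replacement from the GUI:)
Code:
zpool offline Volume001 gptid/<old-3TB-gptid>     # take the old disk out of service
# power off, physically swap in the 10TB disk, power back on, then:
zpool replace Volume001 gptid/<old-3TB-gptid> gptid/<new-10TB-gptid>
zpool status -v Volume001                         # watch the resilver progress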
After a long five-hour wait, I found that the pool had ended up in a degraded state:
Code:
zpool status -v
  pool: Volume001
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub in progress since Fri Jan  4 16:07:38 2019
        3.44T scanned at 3.83G/s, 427G issued at 476M/s, 8.23T total
        0 repaired, 5.07% done, 0 days 04:46:51 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        Volume001                                       DEGRADED     0     0    49
          raidz1-0                                      DEGRADED     0     0    98
            gptid/dbc00018-1029-11e9-a5ca-94f12895b13c  ONLINE       0     0     0
            gptid/b053e7c9-e8af-11e7-b707-94f12895b13c  DEGRADED     0     0     0  too many errors
            gptid/b15ed11f-e8af-11e7-b707-94f12895b13c  DEGRADED     0     0     0  too many errors
            gptid/a9248c05-0604-11e9-9ed5-94f12895b13c  DEGRADED     0     0     0  too many errors

errors: Permanent errors have been detected in the following files:

        <metadata>:<0x0>

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:50 with 0 errors on Fri Jan  5 03:45:50 2018
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors

After that I tried:
rebooting and waiting through another 5-hour resilver... nothing;
running zpool clear Volume001 and sitting through yet another 5-hour resilver, yet the pool is still degraded.
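In concrete terms, that second attempt was along these lines (pool name taken from the status output above):
Code:
zpool clear Volume001       # reset the pool's error counters
zpool status -v Volume001   # then wait out the ~5h resilver again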
I also tried:
Code:
glabel status
                                      Name  Status  Components
                              label/efibsd     N/A  da0p1
gptid/27cf8ad3-e8ad-11e7-aa15-94f12895b13c     N/A  da0p1
gptid/b053e7c9-e8af-11e7-b707-94f12895b13c     N/A  ada0p2
gptid/b15ed11f-e8af-11e7-b707-94f12895b13c     N/A  ada1p2
gptid/a9248c05-0604-11e9-9ed5-94f12895b13c     N/A  ada2p2
gptid/dbc00018-1029-11e9-a5ca-94f12895b13c     N/A  ada3p2

And apparently there is no "ghost" disk present.
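(In case anyone wants to reproduce that check, I simply matched each gptid the pool reports against the glabel output, roughly like this:)
Code:
# every gptid that zpool status reports should map to a real device here
zpool status Volume001 | grep gptid
glabel status | grep gptid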
At this point I kindly ask the gurus for help, and I will be more than grateful for it.
Thanks in advance!
Best,
VIDJCB