I'm attempting to replace a 2TB drive with a 6TB drive (a single-disk stripe) on my system running FreeNAS 9.10. I'm replacing it because I inadvertently let the volume reach 96% capacity. The old drive is otherwise healthy, and scrubs have always finished with no issues.
I followed the instructions in "Replacing Drives to Grow a ZFS Pool". After the resilver completed successfully I rebooted the system but did not remove the old drive. After the reboot the pool was still in a DEGRADED state: the new drive was ONLINE, but the old drive had not been detached from the pool. I shut down the system, removed the old drive, and rebooted. The old drive was still attached and showed UNAVAIL. At that point I attempted to detach it on the CLI with:
zpool detach <pool> <device>
The system hung at that point. I'm not physically present at the server, so it would be difficult to confirm whether there is any current disk activity, but the GUI hasn't responded in ~48 hours. At this point I think I should get some advice before proceeding any further. I was able to copy the zpool status before losing access to the shell window:
Code:
[root@freenas ~]# zpool status
  pool: STEVEJC-V1
 state: DEGRADED
  scan: scrub repaired 0 in 4h51m with 0 errors on Sun Jul 21 04:51:53 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        STEVEJC-V1                                      DEGRADED     0     0     0
          replacing-0                                   DEGRADED     0     0     0
            5384778147682158842                         UNAVAIL      0     0     0  was /dev/gptid/5a421be7-79df-11e6-b083-0024e87f9a5e
            gptid/9acf3fcf-6559-11e9-802e-0024e87f9a5e  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jul 14 03:45:36 2019
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          ada0p2        ONLINE       0     0     0

errors: No known data errors
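For clarity, the detach command I attempted was along these lines, using the numeric guid of the UNAVAIL device from the status above (this is a reconstruction of what I typed, not a verified fix; I may have used the old gptid path instead of the guid):

```shell
# Detach the stale half of the replacing-0 vdev by its numeric guid
# (guid and pool name taken from the zpool status output above):
zpool detach STEVEJC-V1 5384778147682158842
```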
Any help would be greatly appreciated!