RAIDZ1 pool with UNAVAIL drive will not import

efreem01

Cadet
Joined: Dec 31, 2022
Messages: 3
I'm stuck in a catch-22 here. I have a RAIDZ1 pool that went DEGRADED after a hard disk failed. I shut down the FreeNAS server, swapped in a replacement drive, and powered the server back on. Now I cannot replace the failed drive without first importing the pool, and I cannot import the pool because of an UNAVAIL device.

The box is shared storage for a cluster of VMware hypervisors in my homelab. The zpool holds the virtual machine data for the cluster, so it's a very low-touch system. I'm still running FreeNAS-11.1-U7 and have not yet upgraded to a newer release.

Thank you so much for any help you all can offer!

Code:
root@ESXStore:~ # zpool import -f
   pool: ESXStore6TB
     id: 3685523002848744078
  state: DEGRADED
 status: One or more devices are missing from the system.
 action: The pool can be imported despite missing or damaged devices.  The
    fault tolerance of the pool may be compromised if imported.
   see: http://illumos.org/msg/ZFS-8000-2Q
 config:

    ESXStore6TB                                     DEGRADED
      raidz1-0                                      DEGRADED
        gptid/3b3a464e-0e6f-11e8-9b44-2c413890a25d  ONLINE
        12356353552467693578                        UNAVAIL  cannot open
        gptid/ac597167-6cd3-11e8-ba6f-2c413890a25d  ONLINE
    cache
      ada3p3
      ada6p2
    logs
      ada3p2                                        ONLINE

root@ESXStore:~ # zpool replace ESXStore6TB 12356353552467693578
cannot open 'ESXStore6TB': no such pool

root@ESXStore:~ # zpool offline ESXStore6TB 12356353552467693578
cannot open 'ESXStore6TB': no such pool

GOOD DISK 1: ada0p2
GOOD DISK 2: ada2p2
NEW/REPLACED DISK: ada1 (was GPT ID 12356353552467693578)
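
For what it's worth, my understanding (please correct me if I'm wrong) is that zpool import -f with no pool name only lists importable pools, so the pool has to be imported by name before zpool replace or zpool offline will see it. The sequence I expected to work is roughly the one below; the bare ada1 target is just a placeholder on my part, since as far as I know the GUI would normally partition the new disk and attach it by gptid label instead:

Code:
root@ESXStore:~ # zpool import -f ESXStore6TB
root@ESXStore:~ # zpool replace ESXStore6TB 12356353552467693578 /dev/ada1
root@ESXStore:~ # zpool status ESXStore6TB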

 

efreem01

Cadet
Joined: Dec 31, 2022
Messages: 3
Time heals all ills. I did absolutely nothing. About an hour or so after the box had been running, my ESXStore6TB pool showed up, albeit in a degraded state, and I was able to replace the failed drive from the CLI.

Code:
  pool: ESXStore6TB
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Dec 31 11:32:49 2022
    12.2G scanned at 16.3M/s, 984K issued at 1.28K/s, 3.79T total
    0 resilvered, 0.00% done, no estimated completion time
config:

    NAME                                              STATE     READ WRITE CKSUM
    ESXStore6TB                                       DEGRADED     0     0     0
      raidz1-0                                        DEGRADED     0     0     0
        gptid/3b3a464e-0e6f-11e8-9b44-2c413890a25d    ONLINE       0     0     0
        replacing-1                                   UNAVAIL      0     0     0
          12356353552467693578                        UNAVAIL      0     0     0  was /dev/gptid/c36ffebf-755e-11e7-884c-2c413890a25d
          gptid/71821e04-8928-11ed-9c6e-f8f21e7c3fd4  ONLINE       0     0     0
        gptid/ac597167-6cd3-11e8-ba6f-2c413890a25d    ONLINE       0     0     0
    logs
      ada3p2                                          ONLINE       0     0     0
    cache
      ada3p3                                          ONLINE       0     0     0
      ada6p2                                          ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:02:36 with 0 errors on Tue Dec 27 03:47:36 2022
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da0p2     ONLINE       0     0     0
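
In case it helps anyone who hits the same wall: the CLI replace was simply zpool replace pointed at the old missing member and the new disk's gptid (the one now shown under replacing-1 above), roughly in this form, with zpool status to watch the resilver:

Code:
root@ESXStore:~ # zpool replace ESXStore6TB 12356353552467693578 gptid/71821e04-8928-11ed-9c6e-f8f21e7c3fd4
root@ESXStore:~ # zpool status ESXStore6TB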
 

Davvo

MVP
Joined: Jul 12, 2022
Messages: 3,222