Destroyed pool comes back as UNAVAIL instead of DESTROYED


ChrisH

So while testing FreeNAS 9.2.0 I created some zpools from the command line (which may be the source of the problem?) instead of the GUI. I also deleted them from the command line, which mostly worked.
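(For reference, a command along these lines would produce a pool laid out like datapool. It is reconstructed from the mirror layout in the status output below, not the exact command used, and the real pool may have had more mirror vdevs since that output is truncated.)

Code:
# Striped mirrors: da2+da3, da4+da5, da6+da7, per the status output below
zpool create datapool mirror da2 da3 mirror da4 da5 mirror da6 da7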

The trouble is that after some amount of time, maybe after a scrub or maybe after a reboot, the admin gets an automatic email about a (false) problem on the server.

Investigation shows:

Code:
[root@server] ~# zpool status
  pool: datapool
state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
    replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
  see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:
 
    NAME                      STATE    READ WRITE CKSUM
    datapool                  UNAVAIL      0    0    0
      mirror-0                UNAVAIL      0    0    0
        3592302713242575290  UNAVAIL      0    0    0  was /dev/da2
        2021266809768084791  UNAVAIL      0    0    0  was /dev/da3
      mirror-1                UNAVAIL      0    0    0
        5112586467922550107  UNAVAIL      0    0    0  was /dev/da4
        6419444988605564983  UNAVAIL      0    0    0  was /dev/da5
      mirror-2                UNAVAIL      0    0    0
        8652468216450752575  UNAVAIL      0    0    0  was /dev/da6
        14819342351271755606  UNAVAIL      0    0    0  was /dev/da7
...
 
  pool: lenspool
 state: ONLINE
  scan: scrub repaired 0 in 2h51m with 0 errors on Sun Mar 30 03:51:33 2014
config:
 
NAME                                            STATE     READ WRITE CKSUM
lenspool                                        ONLINE       0     0     0
 raidz2-0                                      ONLINE       0     0     0
   gptid/df4b8941-8d75-11e3-b5ae-90e2ba50d4e4  ONLINE       0     0     0
   gptid/dfaac5ea-8d75-11e3-b5ae-90e2ba50d4e4  ONLINE       0     0     0
   gptid/e00aa183-8d75-11e3-b5ae-90e2ba50d4e4  ONLINE       0     0     0
   gptid/e068f10e-8d75-11e3-b5ae-90e2ba50d4e4  ONLINE       0     0     0
   gptid/e0c7273e-8d75-11e3-b5ae-90e2ba50d4e4  ONLINE       0     0     0
   gptid/e1254957-8d75-11e3-b5ae-90e2ba50d4e4  ONLINE       0     0     0
...


More than once I have destroyed "datapool" with
Code:
zpool destroy datapool
and
Code:
zpool destroy -f datapool
but it keeps resurrecting. Any ideas? I would like "datapool" to stay destroyed and stop triggering unwanted warning emails. "lenspool" has production data on it and I'd like to keep it as is.
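One possible line of attack (a sketch only, not something confirmed in this thread): check whether ZFS can still discover the destroyed pool's on-disk labels, and if so wipe them on the former member disks so nothing re-imports the pool. Whether zpool labelclear is available depends on the FreeBSD/FreeNAS build, and the device names below are taken from the "was /dev/daX" entries in the status output above, so verify them before clearing anything.

Code:
# List destroyed pools whose labels ZFS can still find on any disk
zpool import -D

# Get rid of the resurrected, UNAVAIL copy of the pool again
zpool destroy -f datapool

# If datapool's labels still turn up on the old members, wipe them so the
# pool cannot be rediscovered at the next boot or scrub. This is
# destructive: double-check each device and never run it on a lenspool
# member (the gptid/... devices in the other pool).
zpool labelclear -f /dev/da2    # repeat for da3 through da7 as appropriate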
 

ChrisH

Thanks for asking. Nope, I never did figure it out. I don't admin the file server anymore but I'd love to be able to pass on a solution to the current admin.
 