How to fix broken ZFS pool


Jippe

Cadet
Joined
Jul 11, 2016
Messages
5
I had a ZFS pool with a single 4-disk raidz1 vdev. One of the disks failed, leaving the pool degraded. After replacing the physical disk with a new one, I tried re-adding the disk to the pool. For this, I used the Volume Manager from the FreeNAS GUI. However, this resulted in the disk being added as a mirror instead of replacing the failed disk in the existing raidz1-0 configuration (yes I know, not the right way, later I found out how to properly do it).
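(For reference, the proper way, as I later found out, is roughly the sketch below; the new disk's gptid is a placeholder, and on FreeNAS the GUI's disk "Replace" handles the partitioning for you.)

Code:
# check which member failed and note its GUID
zpool status Bulldog

# replace the failed member in place; the raidz1-0 layout is preserved
zpool replace Bulldog 8048169950587782112 gptid/<new-disk-gptid>

# watch the resilver finish
zpool status -v Bulldog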

Then I made a couple of stupid mistakes. I thought that since the new disk was mirrored, I could easily take it out of the pool by just wiping it. This wasn't possible because the volume was in use. So I did a zpool export of the pool and wiped the new (mirrored) disk. Afterwards, I tried importing the pool with zpool import -a. But now I get:

cannot import 'Bulldog': one or more devices is currently unavailable

The output of zpool import is:

Code:
   pool: Bulldog
     id: 12805352018018536961
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
    devices and try again.
   see: http://illumos.org/msg/ZFS-8000-6X
config:

    Bulldog                                         UNAVAIL  missing device
     raidz1-0                                      DEGRADED
       gptid/d1c7e87c-1747-11e5-9f51-6c626d443f7a  ONLINE
       gptid/47d27bcb-d60f-11e0-a82d-6c626d443f7a  ONLINE
       8048169950587782112                         UNAVAIL  cannot open
       gptid/48d43c65-d60f-11e0-a82d-6c626d443f7a  ONLINE

    Additional devices are known to be part of this pool, though their
    exact configuration cannot be determined.


There are still 3 disks out of the original 4 left, so I really hope my data is safe.

However, how do I get the pool online again?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
zpool import -f Bulldog

Might work. The -f flag forces the import. Note that the CLI doesn't mount the pool in the same place the GUI does.
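If it does import, you can also give it the same altroot the GUI uses so the datasets land under /mnt; something like this (a sketch, assuming the standard FreeNAS /mnt altroot):

Code:
# force the import and mount everything under /mnt, like the GUI does
zpool import -f -R /mnt Bulldog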
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I tried re-adding the disk to the pool. For this, I used the Volume Manager from the FreeNAS GUI. However, this resulted in the disk being added as a mirror
A mirror of what? Please post the output of zpool history Bulldog.
how do I get the pool online again?
As far as I know, your only hope is to reinstall the disk you removed. However, since you wiped it, I have a feeling that won't work. Most likely you need to destroy the pool, recreate it, and restore the data from backup.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I hope what you described didn't actually happen, because if you did that, your data is gone. What do you mean it got added as a mirror? That isn't really possible, and your zpool import output doesn't show any mirrors. Try the import with -f and back up all your data before you lose it.
 

Jippe

Cadet
Joined
Jul 11, 2016
Messages
5
Thanks all for the replies.

zpool import -f Bulldog
doesn't work. It gives the same error: cannot import 'Bulldog': one or more devices is currently unavailable

zpool history Bulldog doesn't work either: cannot open 'Bulldog': no such pool. Probably because the pool has been exported?

How can I backup the data when the pool is in this state?
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
My guess is that rather than a "mirror", it got striped into the existing pool as a single disk.

After he removed the disk and wiped it, he broke his pool.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
Ouch. User error. If it was added as a single disk, your only fix is to offload the data and rebuild the pool.

Your data is no longer readable because you removed part of the pool that had no redundancy.


 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You added that extra disk as another vdev in your pool. With ZFS, if you lose a vdev, you lose the pool. So when you removed and wiped that single-disk vdev, you destroyed your pool. There's no getting your data back. Good thing this is only a backup box.
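(For reference, this is roughly how that kind of accidental stripe gets created from the CLI; the pool and device names below are just placeholders. ZFS even warns about the mismatched replication level unless you force it.)

Code:
# pool built as a single raidz1 vdev
zpool create tank raidz1 da1 da2 da3 da4

# adding one bare disk creates a second, non-redundant top-level vdev;
# zpool warns about the mismatched replication level unless -f is given
zpool add -f tank da5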

Take this opportunity to rebuild and play with replacing drives before you put real data back on it.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Let this be a warning to everyone. If FreeNAS shows you a big red warning, you'd better be damned sure you know what you're doing.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
My guess, is that rather than a "mirror", it got striped as a single disk into the existing pool.
+1 Agree with this.

That could very well be. Is it fixable? What about my data?
IMHO, I'm hoping you had a backup of your data you can restore from (if it is vital)...
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
The thing is, the zpool import output doesn't show a missing stripe.

Have you tried mounting the pool read-only? If not, try that, and if it works, do zpool history.
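Something along these lines (a sketch; readonly=on keeps ZFS from writing anything while you look around):

Code:
# attempt a read-only, forced import under /mnt
zpool import -o readonly=on -f -R /mnt Bulldog

# if it comes in, the pool's command history may show what was added
zpool history Bulldog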
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The thing is, the zpool import output doesn't show a missing stripe.
But it does say:
"Additional devices are known to be part of this pool, though their exact configuration cannot be determined."
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
True. I'm just uncomfortable telling @Jippe "your data is definitely gone forever" without a more definitive explanation.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Out of curiosity, I created a "JankyPool" with 3x 3TB drives (RaidZ1), then added a single 3TB drive. This is what it looks like with "zpool status":
Code:
  pool: JankyPool
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        JankyPool                                       ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/5653da2b-4904-11e6-bcd5-000c293acb47  ONLINE       0     0     0
            gptid/5710a41e-4904-11e6-bcd5-000c293acb47  ONLINE       0     0     0
            gptid/57d39fcc-4904-11e6-bcd5-000c293acb47  ONLINE       0     0     0
          gptid/941004b6-4904-11e6-bcd5-000c293acb47    ONLINE       0     0     0


While hard to notice, the last drive here is actually not under "raidz1-0"; in the OP's output all four devices do sit under "raidz1-0", so the OP may actually have a 4-drive RaidZ1 vdev...
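(If the indentation is hard to read, "zpool list -v" breaks the pool out per top-level vdev, which makes an accidental single-disk stripe easier to spot; a quick sketch against the test pool above:)

Code:
# per-vdev view: the lone disk shows up as its own top-level vdev
zpool list -v JankyPool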

Will try another Pool configuration to see if I can mimic the output another way...
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Out of curiosity, I created a "JankyPool" with 3x 3TB drives (RaidZ1), then added a single 3TB drive. This is what it looks like with "zpool status":
What happens if you detach (export) the pool, remove the striped disk, and then try zpool import?
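Roughly, using the test pool from above (a sketch):

Code:
zpool export JankyPool
# physically remove or disconnect the single striped disk, then:
zpool import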
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
For this, I used the Volume Manager from the FreeNAS GUI. However, this resulted in the disk being added as a mirror instead of replacing the failed disk in the existing raidz1-0 configuration (yes I know, not the right way, later I found out how to properly do it).
In my recent test (FreeNAS 9.10), I was unable to add a single drive as a "Mirror" to a pool. In fact, adding it as a "stripe" does not appear possible through the normal "Volume Manager" unless you choose "Manual Setup"...

All I kept getting in the normal Volume Manager was this message, and it would not proceed:
[attached screenshot: Volume Manager warning]
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
What happens if you detach (export) the pool, remove the striped disk, and then try zpool import?
I can try that; pretty sure it will toast the pool, though, since an actual vdev will then be missing. However, it may then look like the OP's output... Will try it in a couple of minutes and report back.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
However, it may then look like the OP's output...
That's why I suggested it: if you get that output, then we can be pretty sure that's what happened to the OP, and that his data is toast.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
This is with the striped disk unattached (drive tray removed via hot swap). I don't get that 4th "UNAVAIL  cannot open" line that the OP shows...
Code:
[root@ASC-FN01] ~# zpool import
   pool: JankyPool
     id: 4526970314400275922
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://illumos.org/msg/ZFS-8000-6X
config:

        JankyPool                                       UNAVAIL  missing device
          raidz1-0                                      ONLINE
            gptid/693d1453-490b-11e6-bcd5-000c293acb47  ONLINE
            gptid/69f41fd9-490b-11e6-bcd5-000c293acb47  ONLINE
            gptid/6aafab07-490b-11e6-bcd5-000c293acb47  ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.
 