[SOLVED] ZFS Pool Import Corrupted data
Hi folks,
I had to import a zpool (5-disk RAIDZ) created under FreeNAS 0.7 into 8.x because the thumb drive of the 0.7 system was damaged.
The auto-import of the pool went fine and I was able to start backing up the data from it.
Unfortunately during the backup the system crashed.
After reboot, the zpool was not visible any longer.
I tried to force a re-import of the pool by issuing:
zpool import -fF raidz
Now, zpool status shows something like this:
zpool status
  pool: raidz
 state: UNAVAIL
status: One or more devices could not be used because the label is missing
        or invalid. There are insufficient replicas for the pool to
        continue functioning.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-5E
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        raidz         UNAVAIL      0     0     0  insufficient replicas
          raidz1-0    UNAVAIL      0     0     0  insufficient replicas
            ada0      FAULTED      0     0     0  corrupted data
            ada1      FAULTED      0     0     0  corrupted data
            ada2      FAULTED      0     0     0  corrupted data
            ada3      FAULTED      0     0     0  corrupted data
            ada4      FAULTED      0     0     0  corrupted data
I don't think this necessarily means the pool is really in an unrecoverable state. I suspect it has something to do with the fact that the drives used to be named ad4, ad6, ad8, ad10 and ad12 under FreeNAS 0.7, and are now enumerated as ada0 through ada4.
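For what it's worth, here is a hedged sketch of the checks I would expect to try next if the renaming theory is right. These are standard zpool/zdb invocations, not anything verified on this particular system, and the /dev/ada0 path is just taken from the status output above:

```shell
# Inspect the on-disk ZFS labels directly on one member disk; if the labels
# are readable here, the data is likely still intact despite the FAULTED state.
zdb -l /dev/ada0

# Ask ZFS to scan all devices under /dev for importable pools, instead of
# relying on the stale device names cached in zpool.cache.
zpool import -d /dev

# If the pool is listed by the scan, retry the forced rewind import,
# again pointing the search at /dev so the new adaX names are picked up.
zpool import -d /dev -fF raidz
```

The idea is that `-d /dev` makes ZFS re-read the labels from whatever device nodes exist now, rather than looking for the old ad4/ad6/... names.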
Is it possible to fix this somehow? Or is the pool gone for good?
Any advice or suggestions are welcome.
Regards,
Ice
By the way, I managed to prepare a new 0.7 thumb drive and tried booting the system with it. Unfortunately it ends in a kernel panic at the point where it tries to load ZFS. :(