ZFS problems after a drive change


tlongarms · Cadet · Joined: Feb 3, 2012 · Messages: 1
OK, so I had a failed drive in a six-drive RAID-Z1 pool (zfspool).

So I did the following:

- Shut down and removed the faulty drive, replacing it with a new drive of the same type and size.
- Used the replace option, which started resilvering and ran to completion (see the sketch after this list).
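
For reference, the replace option in the GUI should be roughly equivalent to the following commands. This is a sketch only; the old disk's gptid and the new device name (ada5) are taken from the status output further down and will differ on another system.

Code:
# Replace the failed disk, identified by its gptid, with the new disk ada5
zpool replace zfspool gptid/0ec7d412-f5c7-11e0-bc12-20cf30c7f7be ada5

# Watch the resilver progress
zpool status -v zfspool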

Now I am left in a degraded state and cannot remove the stale drive entry, since that disk is no longer connected. I also cannot access the pool, and my data appears to have disappeared. In fairness, data has seemed to go missing before while I was having issues, and a scrub restored all of it; this time, though, I see none of the file structure below the pool name. Can anybody help? At this stage I fear all is lost, but I would like to know if anyone has any ideas. I have posted a zpool status below; if anyone needs further info, please ask.
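
For reference, removing the stale entry would normally look something like the following (a sketch only, assuming the resilver has actually completed; the GUID 12964504035344809364 is the placeholder ZFS shows for the missing disk in the output below, and ZFS normally detaches the old half of a 'replacing' vdev on its own once the resilver finishes).

Code:
# Detach the stale half of the 'replacing' vdev by its GUID
zpool detach zfspool 12964504035344809364

# Check whether the datasets are still present and mounted
zfs list -r zfspool
zfs mount -a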

Thanks in advance for any help.

Code:
[terry@freenas] /dev/gptid# zpool status -v
  pool: zfspool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: resilver in progress for 0h54m, 11.81% done, 6h44m to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfspool                                         DEGRADED     0     0    70
          raidz1                                        DEGRADED     0     0   280
            gptid/0b9eb323-f5c7-11e0-bc12-20cf30c7f7be  ONLINE       0     0     0
            gptid/0c4cd8cf-f5c7-11e0-bc12-20cf30c7f7be  ONLINE       0     0     0  180K resilvered
            gptid/0cfc3c16-f5c7-11e0-bc12-20cf30c7f7be  ONLINE       0     0     0
            gptid/0dab05ca-f5c7-11e0-bc12-20cf30c7f7be  ONLINE       0     0     0
            gptid/0e190e9a-f5c7-11e0-bc12-20cf30c7f7be  ONLINE       0     0     0  1.09M resilvered
            replacing                                   DEGRADED     0     0     0
              12964504035344809364                      UNAVAIL      0     0     0  was /dev/gptid/0ec7d412-f5c7-11e0-bc12-20cf30c7f7be
              ada5                                      ONLINE       0     0     0  79.2G resilvered

errors: Permanent errors have been detected in the following files:

        zfspool:<0x0>
        /mnt/zfspool/
        zfspool:<0x43>
 
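For completeness, once the resilver finishes, the usual next steps would be a scrub and a fresh look at the error list. These are standard commands, not specific to this setup.

Code:
# Re-check the pool after the resilver completes
zpool scrub zfspool
zpool status -v zfspool

# Clear the error counters once the damaged files are restored or deleted
zpool clear zfspool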