[SOLVED] ZFS keeps removing my drive after resilvering


Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Hi,

One of my drives started reporting bad sectors, so I RMA'd it and replaced it with a new one. I put the new drive into the system (alongside the bad one) and resilvered it, then removed the bad one and did a replace from the web GUI. When the resilver finished, the new drive went into a REMOVED state. Rebooting the server brings it back online, and then it starts resilvering all over again. I've been through this twice now and I have no idea why it keeps removing the drive.
Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 168G in 6h26m with 0 errors on Sat Nov 15 02:56:10 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         REMOVED      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors
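
As far as I can tell, the GUI's replace is just a plain zpool replace under the hood; it shows up in my history below in this form:
Code:
# general form: zpool replace <pool> <old-device> <new-device>
zpool replace Root gptid/b0adc2bd-6cf6-11e2-b19a-08606e69c5e2 gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2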


Here's my history:
Code:
2014-11-12.12:03:24 [txg:11178004] open pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-12.12:03:26 [txg:11178006] import pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-12.12:03:32 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 13641662058019275819 [user 0 (root) on freenas.local]
2014-11-12.12:03:32 zpool set cachefile=/data/zfs/zpool.cache Root [user 0 (root) on freenas.local]
2014-11-12.12:16:25 [txg:11178071] open pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-12.12:16:25 [txg:11178073] import pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-12.12:16:30 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 13641662058019275819 [user 0 (root) on freenas.local]
2014-11-12.12:16:30 zpool set cachefile=/data/zfs/zpool.cache Root [user 0 (root) on freenas.local]
2014-11-13.16:26:10 [txg:11194435] open pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-13.16:26:10 [txg:11194437] import pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-13.16:26:15 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 13641662058019275819 [user 0 (root) on freenas.local]
2014-11-13.16:26:15 zpool set cachefile=/data/zfs/zpool.cache Root [user 0 (root) on freenas.local]
2014-11-13.16:32:44 [txg:11194466] scan setup func=2 mintxg=3 maxtxg=11194466 [on freenas.local]
2014-11-13.16:32:55 [txg:11194468] vdev attach replace vdev=/dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2 for vdev=/dev/gptid/b0adc2bd-6cf6-11e2-b19a-08606e69c5e2 [on freenas.local]
2014-11-13.16:32:55 zpool replace Root gptid/b0adc2bd-6cf6-11e2-b19a-08606e69c5e2 gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2 [user 0 (root) on freenas.local]
2014-11-14.00:06:01 [txg:11197299] scan done complete=1 [on freenas.local]
2014-11-14.09:06:26 [txg:11202939] open pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-14.09:06:27 [txg:11202941] import pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-14.09:06:32 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 13641662058019275819 [user 0 (root) on freenas.local]
2014-11-14.09:06:32 zpool set cachefile=/data/zfs/zpool.cache Root [user 0 (root) on freenas.local]
2014-11-14.09:06:32 [txg:11202943] scan setup func=2 mintxg=3 maxtxg=11202938 [on freenas.local]
2014-11-14.09:26:01 [txg:11203070] scan done complete=0 [on freenas.local]
2014-11-14.09:26:01 [txg:11203070] scan setup func=2 mintxg=3 maxtxg=11202938 [on freenas.local]
2014-11-14.09:29:20 zpool offline Root 18446742974299011780 [user 0 (root) on freenas.local]
2014-11-14.09:29:23 [txg:11203075] scan done complete=0 [on freenas.local]
2014-11-14.09:29:23 [txg:11203075] scan setup func=2 mintxg=3 maxtxg=11202938 [on freenas.local]
2014-11-14.09:29:30 [txg:11203076] detach vdev=/dev/gptid/b0adc2bd-6cf6-11e2-b19a-08606e69c5e2 [on freenas.local]
2014-11-14.09:29:30 zpool detach Root 18446742974299011780 [user 0 (root) on freenas.local]
2014-11-14.16:26:53 [txg:11205898] scan done complete=1 [on freenas.local]
2014-11-14.20:24:15 zpool online Root 5508971877123744593 [user 0 (root) on freenas.local]
2014-11-14.20:29:06 [txg:11208392] open pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-14.20:29:06 [txg:11208394] import pool version 28; software version 5000/5; uts  9.2-RELEASE-p12 902502 amd64 [on ]
2014-11-14.20:29:11 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 13641662058019275819 [user 0 (root) on freenas.local]
2014-11-14.20:29:11 zpool set cachefile=/data/zfs/zpool.cache Root [user 0 (root) on freenas.local]
2014-11-14.20:29:11 [txg:11208396] scan setup func=2 mintxg=3 maxtxg=11208391 [on freenas.local]
2014-11-15.02:56:10 [txg:11211213] scan done complete=1 [on freenas.local]


I know I'm not supposed to run anything from the command line, but should I try running "zpool online" like the status output recommends?

Thanks,
Peter
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Here's how it looks after a reboot:
Code:
[root@freenas] ~# zpool status
  pool: Root
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 09:30:59 2014
        14.8G scanned out of 4.38T at 103M/s, 12h19m to go
        4.94G resilvered, 0.33% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2  ONLINE       0     0     2  (resilvering)
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Hmmm, it just removed itself again before the resilver completed. Are there any logs or anything where I can see what's going on?


Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 09:30:59 2014
        3.89T scanned out of 4.38T at 202M/s, 0h42m to go
        168G resilvered, 88.75% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         REMOVED      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2  (resilvering)
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors
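
While I wait, I'm poking at the usual FreeBSD logs and SMART data; I'm assuming the new disk is ada0 here:
Code:
# kernel messages about the disk dropping off the bus
dmesg | grep ada
grep ada /var/log/messages
# SMART health and error counters for the suspect drive
smartctl -a /dev/ada0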
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Running "zpool online" doesn't seem to help either.
Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 168G in 6h19m with 0 errors on Sat Nov 15 15:50:44 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         REMOVED      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors

[root@freenas] ~# zpool online Root 5508971877123744593
warning: device '5508971877123744593' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present

[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: resilvered 168G in 6h19m with 0 errors on Sat Nov 15 15:50:44 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         REMOVED      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
It will take the same steps. Offline the disk in the GUI, then wipe it. A quick wipe might work, but if it tries to resilver automatically you'll have to zero it out. Use the now-clean disk to replace via the GUI. Profit???
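
If you do have to zero it, a plain dd from a shell does the job; something like this, assuming the replacement shows up as ada0 (triple-check the device name first):
Code:
# WARNING: destroys everything on the target disk
dd if=/dev/zero of=/dev/ada0 bs=1m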

The initial procedure was what hooped ya. Scary doing multiple resilvers on a Z1, good luck.
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
The initial procedure was what hooped ya. Scary doing multiple resilvers on a Z1, good luck.

I fell victim to the classic "Why would I need Z2? That costs too much money!" :(

Fortunately I don't *really* need any of the data on there; if I lost it, it wouldn't be the end of the world. One day I'll rebuild with 4TB drives and I'll definitely use Z2.


It will take the same steps. Offline the disk in the GUI, then wipe it. A quick wipe might work, but if it tries to resilver automatically you'll have to zero it out. Use the now-clean disk to replace via the GUI. Profit???

There are definitely a lot of similarities there. I'll try wiping the drive that's getting confused and see if that helps. If it's still a problem, I can put my old "bad" drive in and see what happens.
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
I did a quick wipe, rebooted and this is what I've got (same as before):
Code:
[root@freenas] ~# zpool status
  pool: Root
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 19:17:13 2014
        10.4G scanned out of 4.38T at 95.8M/s, 13h18m to go
        3.46G resilvered, 0.23% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2  ONLINE       0     0     4  (resilvering)
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors


[screenshot: FreeNAS volume status during the resilver]


If this doesn't work (I'll know in ~6 hours), then I'll try putting my old drive back in and removing the new one. Hopefully that gives me some insight into what's going on. There is definitely some confusion between FreeNAS and ZFS.

Are there any logs that could point to where the discrepancy lies?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Not really any logs. Part of it is FreeNAS trying to protect you; the other part is ZFS detecting that the disk is part of the pool and resilvering behind the GUI's back, I think. That's why the replace procedure has to be followed exactly as per the manual.
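
You can check what ZFS still sees on a disk with zdb; a rough sketch, assuming the data partition is ada0p2:
Code:
# dump the ZFS vdev labels on the partition; if labels print, ZFS still
# considers the disk a pool member and will try to "help"
zdb -l /dev/ada0p2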

If you don't end up with a degraded pool and a replace you can do in the GUI, you've likely just looped back to where you were. Might as well just shut down, pull the drive, and start again instead of wasting 13 hours. You want a missing device to show up that you can offline, so you get in sync with the GUI, and then a disk that ZFS can't recognize to use as the replacement. But maybe you'll get lucky.
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Might as well just shut down, pull the drive, and start again instead of wasting 13 hours. You want a missing device to show up that you can offline, so you get in sync with the GUI, and then a disk that ZFS can't recognize to use as the replacement.

Do you mean:
1) Wipe the new drive.
2) Shutdown.
3) Re-install the old drive.
4) Offline the drive from the GUI.
5) Shutdown.
6) Remove the old drive, put the new drive back in.
7) Pray that FreeNAS starts resilvering the new drive for the last time.

???

But maybe you'll get lucky.
I doubt it. ;)
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I wouldn't touch the failing drive. If you pull the new replacement drive and reboot, 'zpool status' will show a missing device, and at that point you can offline it. I'd either zero that replacement drive in a different box, or pull the remaining pool drives and zero it under FreeNAS using the GUI or dd, whichever is easiest for you. If you've done things right, when you reconnect everything you'll have a pool with an offlined drive, and the fresh drive will show up in the replace-drive list for you to use. What we don't want is ZFS confusing the GUI and trying to help out along the way.
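
Roughly, the command-line side of it looks like this (the numeric id comes from your 'zpool status'; I'd still do the offline and replace in the GUI):
Code:
# after pulling the replacement drive and rebooting:
zpool status Root                        # the pulled drive shows up as a numeric id
zpool offline Root 5508971877123744593   # or offline it from the GUI
# zero the replacement in another box, reconnect everything, then use
# Volume Status -> Replace in the GUI so FreeNAS repartitions it itself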
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Okay, here's what I did... Fingers crossed on the reboot!

Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 19:17:13 2014
        1.05T scanned out of 4.38T at 190M/s, 5h7m to go
        168G resilvered, 23.96% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         UNAVAIL      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors

[root@freenas] ~# zpool offline Root 5508971877123744593

[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 19:17:13 2014
        1.05T scanned out of 4.38T at 84.7M/s, 11h28m to go
        168G resilvered, 23.97% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         OFFLINE      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
On reboot I've got:

Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 19:17:13 2014
        1.14T scanned out of 4.38T at 191M/s, 4h57m to go
        168G resilvered, 26.11% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         OFFLINE      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~#


[screenshot: FreeNAS volume status showing the offlined drive]


Starting the FreeNAS replace now.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Cool. I would have offlined it in the GUI, but it should work fine.
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
Here's the result:
[screenshot: FreeNAS error dialog from the failed replace]


Output from FreeNAS console:
Code:
Nov 15 21:22:51 freenas kernel: GEOM_ELI: Device ada0p1.eli destroyed.
Nov 15 21:22:51 freenas kernel: GEOM_ELI: Detached ada0p1.eli on last close.
Nov 15 21:22:51 freenas notifier: geli: No such device: /dev/ada0p1.
Nov 15 21:22:52 freenas notifier: 1+0 records in
Nov 15 21:22:52 freenas notifier: 1+0 records out
Nov 15 21:22:52 freenas notifier: 1048576 bytes transferred in 0.666854 secs (1572422 bytes/sec)
Nov 15 21:22:52 freenas notifier: dd: /dev/ada0: short write on character device
Nov 15 21:22:52 freenas notifier: dd: /dev/ada0: end of device
Nov 15 21:22:52 freenas notifier: 5+0 records in
Nov 15 21:22:52 freenas notifier: 4+1 records out
Nov 15 21:22:52 freenas notifier: 4677632 bytes transferred in 0.076572 secs (61088069 bytes/sec)
Nov 15 21:22:53 freenas notifier: swapoff: /dev/ada0p1.eli: No such file or directory
Nov 15 21:22:53 freenas notifier: geli: No such device: /dev/ada0p1.
Nov 15 21:22:53 freenas manage.py: [middleware.exceptions:38] [MiddlewareError: Disk replacement failed: "invalid vdev specification, use '-f' to override the following errors:, /dev/gptid/9d3de5c4-6d50-11e4-b498-08606e69c5e2 is part of active pool 'Root', "]


Output from command line:
Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 19:17:13 2014
        1.22T scanned out of 4.38T at 187M/s, 4h55m to go
        168G resilvered, 27.89% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         OFFLINE      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors


So... back to square one? At least there's more information in the FreeNAS output. Invalid vdev specification because it's already part of pool 'Root'? 9d3de5c4 might be my old drive. Maybe FreeNAS still thinks it's around somewhere?
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
I'm trying a full wipe with zeros, then a reboot and another replace.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Nope. The backup gptid is still being detected; the quick wipe wasn't enough. I wish I knew the dd commands to hit just the necessary sectors, but I don't. Just zero it out completely. This is FreeNAS saving you from yourself.
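
If I had to guess, though: ZFS keeps two vdev labels at the front and two at the back of the device, so hitting the first and last few MiB should be enough. A sketch only (ada0 assumed; run it from sh so the arithmetic works, and when in doubt just zero the whole thing):
Code:
# zero the front of the disk
dd if=/dev/zero of=/dev/ada0 bs=1m count=4
# zero the tail: seek to (mediasize in MiB - 4); mediasize in bytes is the
# third field of diskinfo's output, and dd stops at the end of the device
dd if=/dev/zero of=/dev/ada0 bs=1m oseek=$(( $(diskinfo /dev/ada0 | awk '{print $3}') / 1048576 - 4 ))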
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
I finished the wipe and got this:
Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 19:17:13 2014
        1.71T scanned out of 4.38T at 101M/s, 7h44m to go
        168G resilvered, 38.93% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            5508971877123744593                         OFFLINE      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors


[screenshot: FreeNAS volume status after the full wipe]


And doing a "replace disk" gives me this:
Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Nov 15 22:25:19 2014
        1.73G scanned out of 4.38T at 61.0M/s, 20h56m to go
        579M resilvered, 0.04% done
config:

        NAME                                              STATE     READ WRITE CKSUM
        Root                                              DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            replacing-0                                   OFFLINE      0     0     0
              5508971877123744593                         OFFLINE      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
              gptid/4585e79d-6d59-11e4-b30c-08606e69c5e2  ONLINE       0     0     0  (resilvering)
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2    ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2    ONLINE       0     0     0

errors: No known data errors


[screenshot: FreeNAS volume status with the replacement resilvering]


So, progress! I'll let it finish resilvering and then I'll see what happens tomorrow. Hopefully I'm down to 3 drives online and no errors!

That OFFLINE drive gives me the option to "detach" and I'm very tempted to try it out...
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
The resilvering completed, and now zpool status and the FreeNAS GUI are in agreement again. Should I run a detach command on the offline drive and reboot?

[screenshot: FreeNAS volume status after the completed resilver]


Code:
[root@freenas] ~# zpool status
  pool: Root
state: DEGRADED
status: The pool is formatted using a legacy on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on software that does not support feature
        flags.
  scan: resilvered 169G in 6h49m with 0 errors on Sun Nov 16 05:14:57 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        Root                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            replacing-0                                 UNAVAIL      0     0     0
              5508971877123744593                       OFFLINE      0     0     0  was /dev/gptid/4ebb4921-6b95-11e4-afcf-08606e69c5e2
              1526374017946971693                       REMOVED      0     0     0  was /dev/gptid/4585e79d-6d59-11e4-b30c-08606e69c5e2
            gptid/b19b1a9f-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0
            gptid/b284e7f4-6cf6-11e2-b19a-08606e69c5e2  ONLINE       0     0     0

errors: No known data errors
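
If I do try it, I assume the detach is just the numeric id from the status above:
Code:
# drop the stale half of the replacing-0 vdev
zpool detach Root 5508971877123744593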


I just noticed this in the log as well:
Code:
Nov 15 23:18:48 freenas kernel: (ada0:ata2:0:1:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
Nov 15 23:18:48 freenas kernel: (ada0:ata2:0:1:0): CAM status: Command timeout
Nov 15 23:18:48 freenas kernel: (ada0:ata2:0:1:0): Retrying command
Nov 15 23:18:48 freenas kernel: ada0 at ata2 bus 0 scbus0 target 1 lun 0
Nov 15 23:18:48 freenas kernel: ada0: <WDC WD30EZRX-00D8PB0 80.00A80> s/n WD-WCC4N0974177 detached


Any clues as to why it detached that drive?
 

Pie

Dabbler
Joined
Jan 19, 2013
Messages
38
I found a few posts saying that the "CAM status: Command timeout" error can be caused by a bad power or SATA cable, so I've swapped them out with new ones. After a reboot the drive is back online and it's resilvering (no surprise there).
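
I'll keep an eye on it to make sure the cable swap actually fixed things (ada0 assumed again):
Code:
# confirm the disk is still attached
camcontrol devlist
# cable trouble usually shows up as CRC errors in SMART
smartctl -a /dev/ada0 | egrep "UDMA_CRC|Reallocated|Pending"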
 