[FreeNAS 8.2] Replacing a dead drive in zfs

Status
Not open for further replies.

OhmStyles

Cadet
Joined
Feb 6, 2013
Messages
6
It was taking forever to connect to my FreeNAS, and SMART never showed that there was an issue. I went by sound and replaced a drive via the GUI. It all seemed to go smoothly, but then I noticed the pool was still degraded. I next ran zpool status -v. Here is the output:

Code:
[root@tank] /boot# zpool status -v
  pool: Tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        Tank                                              DEGRADED     0     0     0
          raidz1                                          DEGRADED     0     0     0
            gptid/a9cdc49f-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            gptid/aa64809a-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            replacing                                     DEGRADED     0     0     2
              gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b  OFFLINE      0     0     0
              gptid/03f81b29-5549-11e2-baec-001676d6a98b  ONLINE       0     0     0
            gptid/ab929715-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            gptid/ac251f8d-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:


This is on a FreeNAS 8.2 box with five 2 TB drives in a ZFS raidz. The drive resilvered properly from what I can tell. After replacing this drive I find that ada3 is the real problem.

What is my next course of action? I need to get the pool back to a healthy state, then take out drive ada3 and replace it with another I have here.

Following advice from a FreeBSD forum, I also tried this:

Code:
[root@tank] /boot# zpool detach Tank gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b
cannot detach gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b: no valid replicas
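For context on that error: ZFS refuses to detach the old half of a "replacing" vdev while the replace operation still has unresolved errors (note the CKSUM count of 2 on the replacing vdev in the status output above). A commonly suggested sequence, assumed here rather than confirmed in this thread, is to let a scrub complete, clear the error counters, and only then retry the detach. The snippet only prints the commands, since actually running them requires the live pool:

```shell
# Sketch: scrub, clear, then retry the detach. Commands are echoed only --
# they need the live pool to actually run.
pool="Tank"
old="gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b"
echo "zpool scrub $pool"        # let a scrub complete cleanly first
echo "zpool clear $pool"        # reset the pool's error counters
echo "zpool detach $pool $old"  # then the stale half can usually be detached
```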


Those guys over there said to just do a fresh install of 8.3, then import the pool and repair it.

What can I do before having to resort to that?

Thanks in advance
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Did you follow the manual and the section that gives detailed instructions on replacing a failed disk in FreeNAS?
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Use FreeNAS 8.3 as it has the new ZFS code in it.
 

OhmStyles

Cadet
Joined
Feb 6, 2013
Messages
6
Code:
[root@freenas] ~# zpool import Tank
cannot mount '/Tank': failed to create mountpoint
cannot mount '/Tank/FTP': failed to create mountpoint
cannot mount '/Tank/Music': failed to create mountpoint
cannot mount '/Tank/Pictures': failed to create mountpoint
cannot mount '/Tank/Software': failed to create mountpoint
cannot mount '/Tank/Training': failed to create mountpoint
cannot mount '/Tank/VideoTest': failed to create mountpoint
cannot mount '/Tank/Videos': failed to create mountpoint
[root@freenas] ~#
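Those "failed to create mountpoint" errors usually mean the pool was imported from the command line while the FreeNAS root filesystem was read-only, so /Tank and its children could not be created. The GUI auto-import avoids this by mounting everything under /mnt; a sketch of the equivalent CLI altroot import (assumed standard zpool behavior, only echoed here) is:

```shell
# Import with an altroot so datasets mount under /mnt instead of the
# read-only root. Echoed only; it needs the live pool to run.
pool="Tank"
cmd="zpool import -R /mnt $pool"
echo "$cmd"
```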
 

OhmStyles

Cadet
Joined
Feb 6, 2013
Messages
6
So I got the pool imported back into FreeNAS 8.3, but it is still showing degraded.
Here is the output of zpool status -v:
Code:
[root@freenas] ~# zpool status -v
  pool: Tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
  scan: none requested
config:

        NAME                                              STATE     READ WRITE CKSUM
        Tank                                              DEGRADED     0     0     0
          raidz1-0                                        DEGRADED     0     0     0
            gptid/a9cdc49f-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            gptid/aa64809a-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            replacing-2                                   DEGRADED     0     0     0
              14923898138664116968                        OFFLINE      0     0     0  was /dev/gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b
              gptid/03f81b29-5549-11e2-baec-001676d6a98b  ONLINE       0     0     0
            gptid/ab929715-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0
            gptid/ac251f8d-0c39-11e2-9903-001676d6a98b    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:


Thanks in advance.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
This is on a FreeNAS 8.2 box with five 2 TB drives in a ZFS raidz. The drive resilvered properly from what I can tell. After replacing this drive I find that ada3 is the real problem.
Do you still have the original disk, or did you wipe it already? Maybe you're lucky and any corruption is confined to ada3. Also, are you sure it's ada3?

Code:
camcontrol devlist

glabel status

gpart show
If the original disk is still viable, I would pull whatever disk gptid/03f81b29-5549-11e2-baec-001676d6a98b is and swap it with the original. Then online said disk:
Code:
zpool online Tank gptid/aafd2bc1-0c39-11e2-9903-001676d6a98b


If not, try to detach the old disk:
Code:
zpool detach Tank 14923898138664116968
Then proceed to replace the actual bad disk.
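Since the stale half of the replacing vdev is now listed only by its numeric GUID, here is a small sketch (the sample text is copied from the status output earlier in the thread) showing how that GUID can be pulled out of saved zpool status output for the detach:

```shell
# Extract the numeric GUID of the OFFLINE device from saved `zpool status`
# output; `zpool detach Tank <guid>` accepts this value.
status_sample='
        replacing-2                                   DEGRADED     0     0     0
          14923898138664116968                        OFFLINE      0     0     0
          gptid/03f81b29-5549-11e2-baec-001676d6a98b  ONLINE       0     0     0
'
guid=$(printf '%s\n' "$status_sample" | awk '$1 ~ /^[0-9]+$/ && $2 == "OFFLINE" {print $1}')
echo "$guid"
```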
 

OhmStyles

Cadet
Joined
Feb 6, 2013
Messages
6
Here is the output of camcontrol devlist:

Code:
<SAMSUNG HD204UI 1AQ10001>         at scbus1 target 0 lun 0 (pass0,ada0)
<SAMSUNG HD204UI 1AQ10001>         at scbus2 target 0 lun 0 (pass1,ada1)
<WDC WD20EFRX-68AX9N0 80.00A80>    at scbus3 target 0 lun 0 (pass2,ada2)
<SAMSUNG HD204UI 1AQ10001>         at scbus4 target 0 lun 0 (pass3,ada3)
<_NEC DVD_RW ND-3520A 1.04>        at scbus5 target 0 lun 0 (pass4,cd0)
<SAMSUNG HD204UI 1AQ10001>         at scbus9 target 0 lun 0 (pass5,ada4)
< USB Flash Memory PMAP>           at scbus10 target 0 lun 0 (pass6,da0)


Here is the output from glabel status:

Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
gptid/a9cdc49f-0c39-11e2-9903-001676d6a98b     N/A  ada0p2
gptid/aa64809a-0c39-11e2-9903-001676d6a98b     N/A  ada1p2
gptid/03f81b29-5549-11e2-baec-001676d6a98b     N/A  ada2p2
gptid/ab929715-0c39-11e2-9903-001676d6a98b     N/A  ada3p2
gptid/ac251f8d-0c39-11e2-9903-001676d6a98b     N/A  ada4p2
                             ufs/FreeNASs3     N/A  da0s3
                             ufs/FreeNASs4     N/A  da0s4
                            ufs/FreeNASs1a     N/A  da0s1a


Last but not least, gpart show:

Code:
[root@freenas] ~# gpart show
=>        34  3907029101  ada0  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada1  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada2  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada3  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada4  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>      63  31252417  da0  MBR  (14G)
        63   1930257    1  freebsd  [active]  (942M)
   1930320        63       - free -  (31k)
   1930383   1930257    2  freebsd  (942M)
   3860640      3024    3  freebsd  (1.5M)
   3863664     41328    4  freebsd  (20M)
   3904992  27347488       - free -  (13G)

=>      0  1930257  da0s1  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)


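From the glabel output above, each pool member's gptid maps to a partition, which pins down the physical device. A quick sketch (the two sample lines are copied from the glabel status output above) that confirms which adaX device a given gptid lives on:

```shell
# Map gptid labels to their backing partitions using saved `glabel status`
# output, to confirm which adaX device a pool member actually is.
glabel_sample='gptid/03f81b29-5549-11e2-baec-001676d6a98b     N/A  ada2p2
gptid/ab929715-0c39-11e2-9903-001676d6a98b     N/A  ada3p2'
mapping=$(printf '%s\n' "$glabel_sample" | awk '{print $1, "->", $3}')
printf '%s\n' "$mapping"
```

So the new disk (gptid/03f81b29...) sits on ada2, while gptid/ab929715... is the partition on ada3, the drive SMART is complaining about.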
I still have the original disk, but it may have had issues too. It is currently pulled from the computer.
When it was in the pool, the response time suddenly became horrible.
I pulled the one drive, did a resilver, then got stuck with one half of the replace hanging and the pool sitting in a degraded state.
Then SMART comes along and flashes "/dev/ada3, 2 currently unreadable (pending) sectors" across the monitor every 30 minutes, so I am wary of putting the other one back.
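The "currently unreadable (pending) sectors" count that SMART keeps flashing is attribute 197, Current_Pending_Sector. A sketch (the sample line is hypothetical, formatted like smartctl -A output) for reading the raw count from a saved report:

```shell
# Pull the raw Current_Pending_Sector count from saved `smartctl -A`
# output; a non-zero value means sectors are waiting to be remapped.
smart_sample='197 Current_Pending_Sector  0x0032   100   100   000    Old_age   Always       -       2'
pending=$(printf '%s\n' "$smart_sample" | awk '/Current_Pending_Sector/ {print $NF}')
echo "$pending"
```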

As an aside:
I have an extra 2 TB Red drive here and 2 more on the way.
I also have an external eSATA dual-drive bay coming from Newegg.
I am thinking about just taking two drives, moving the 2.5 TB of data off the FreeNAS onto those, and rebuilding it with some more redundancy. I am going to see what comes of this first.
Let me know if you see anything valuable in there before I try a detach.
Thanks a lot!
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
When it was in the pool, the response time suddenly became horrible.
Never mind then if that drive was failing.

I am thinking about just taking two drives and moving the 2.5 tb off the frenas on to those and rebuilding it with some more redundancy.
+1. A single-parity array can't protect you from a failed drive plus additional read errors on the other drives. I would back up before replacing ada3, and you would need to anyway if you redo the array.
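For the rebuild idea, the raw capacity/redundancy trade-off for five 2 TB drives comes down to simple arithmetic (ignoring ZFS metadata overhead and TB/TiB differences):

```shell
# Raw usable space for raidz1 vs raidz2 with five 2 TB drives.
drives=5
size_tb=2
raidz1=$(( (drives - 1) * size_tb ))   # one drive's worth of parity
raidz2=$(( (drives - 2) * size_tb ))   # two drives' worth of parity
echo "raidz1: ${raidz1} TB usable, survives 1 drive failure"
echo "raidz2: ${raidz2} TB usable, survives 2 drive failures"
```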
 