How to put the same offlined disk back

Status
Not open for further replies.

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
I'm going to requote from noobsauce80's post (2nd in this thread)

I'd wait until one of the more knowledgeable guys comments. Be careful what you do on your own without the advice of one of our pros, since you may cause irreversible damage to the zpool. Just don't panic and do anything crazy. If you lost your data, you'll have plenty of time to panic after all options are exhausted.

You should slow down on swapping drives and trying stuff except for what Paleon suggests, at least until he has exhausted everything he can think of. It's difficult trying to offer remote help when the remote user is "trying" things in between your instructions. It makes the baseline a moving target, and often does more harm than good.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
You should slow down on swapping drives and trying stuff except for what Paleon suggests, at least until he has exhausted everything he can think of. It's difficult trying to offer remote help when the remote user is "trying" things in between your instructions. It makes the baseline a moving target, and often does more harm than good.
Exactly.

If I understand correctly this is what's currently in the system:
  • ada0 2TB untouched, part of the original pool
  • ada1 2TB untouched, part of the original pool
  • ada2 2TB not part of the pool, which was successfully replaced by the now-failed 3TB drive
What happened to the fourth drive, the "2nd 2TB drive"? Add drive four to the system and run the following commands:

Code:
zpool status -v

zpool import

camcontrol devlist

glabel status
 

zzhangz

Dabbler
Joined
Dec 26, 2012
Messages
16
If I understand correctly this is what's currently in the system:
  • ada0 2TB untouched, part of the original pool
  • ada1 2TB untouched, part of the original pool
  • ada2 2TB not part of the pool, which was successfully replaced by the now-failed 3TB drive
What happened to the fourth drive, the "2nd 2TB drive"? Add drive four to the system and run the following commands:
Your understanding of ada0 and ada1 is correct.
ada2 is the original 2TB drive that was in the zpool.
ada3: I don't have any drive connected there right now. I still have the original 2TB drive from ada3; the 3TB drive that was at ada3 failed during the resilvering of the 3TB drive at ada2.
Do I need to connect the original 2TB drive at ada3?

Thanks
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Do I need to connect the original 2TB drive at ada3?
Might as well. Then run the updated commands from [post=47226]post #22[/post].
 

pete_c20

Dabbler
Joined
Nov 23, 2012
Messages
23
A possible insurance policy for you: if the path forward is uncertain and the data is valuable, is it worth buying another set of disks and doing a block copy of them? That may allow you to attempt a rebuild with the copies rather than the originals, and to have more than one go at it if need be.

The method is less expensive than it sounds, as the disks will have a resale value.
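
For what it's worth, a minimal sketch of what such a block copy could look like on FreeBSD, assuming a spare disk of at least the same size showing up as /dev/ada4 (a hypothetical device name; double-check with camcontrol devlist first, since dd will happily overwrite the wrong disk):
Code:
# Raw copy of a 2TB member disk onto a spare target disk.
# conv=noerror,sync keeps going past read errors and pads the bad
# blocks with zeros so the offsets on the copy stay aligned.
dd if=/dev/ada2 of=/dev/ada4 bs=1m conv=noerror,sync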

I watch with interest. Good luck zzhangz. I wish I knew more about ZFS to help out.

Just wondering why the 3TB failed, and showed 'dead' in one move. It's not something silly like an intermittent connection?
 

zzhangz

Dabbler
Joined
Dec 26, 2012
Messages
16
Might as well. Then run the updated commands from [post=47226]post #22[/post].
Code:
[root@freenas ~]# zpool status -v                                               
no pools available                                                              
[root@freenas ~]# zpool import                                                  
   pool: ZZSHARE                                                                
     id: 55706232574185011                                                      
  state: UNAVAIL                                                                
 status: One or more devices are missing from the system.                       
 action: The pool cannot be imported. Attach the missing                        
        devices and try again.                                                  
   see: http://www.sun.com/msg/ZFS-8000-3C                                      
 config:                                                                        
                                                                                
        ZZSHARE                                         UNAVAIL  insufficient replicas
          raidz1-0                                      UNAVAIL  insufficient replicas
            2417920478711271033                         UNAVAIL  cannot open    
            15515033115496397770                        UNAVAIL  cannot open    
            gptid/8fb92911-64a4-11e1-84d5-6c626d384807  ONLINE                  
            gptid/903af3f2-64a4-11e1-84d5-6c626d384807  ONLINE                  
[root@freenas ~]# camcontrol devlist                                            
<ST2000DM001-9YN164 CC96>          at scbus0 target 0 lun 0 (pass0,ada0)        
<Hitachi HDS5C3020ALA632 ML6OA180>  at scbus1 target 0 lun 0 (pass1,ada1)       
<Hitachi HDS5C3020ALA632 ML6OA580>  at scbus2 target 0 lun 0 (pass2,ada2)       
<ST2000DM001-9YN164 CC96>          at scbus3 target 0 lun 0 (pass3,ada3)        
<USB DISK 2.0 0403>                at scbus6 target 0 lun 0 (pass4,da0)         
[root@freenas ~]# glabel status                                                 
                                      Name  Status  Components                  
gptid/903af3f2-64a4-11e1-84d5-6c626d384807     N/A  ada0p2                      
gptid/8fb92911-64a4-11e1-84d5-6c626d384807     N/A  ada1p2                      
gptid/8ef6e986-64a4-11e1-84d5-6c626d384807     N/A  ada2p2                      
gptid/015acb32-4d6b-11e2-ab88-6c626d384807     N/A  ada3p2                      
                             ufs/FreeNASs3     N/A  da0s3                       
                             ufs/FreeNASs4     N/A  da0s4                       
                    ufsid/4fd8a9a73e516cd1     N/A  da0s1a                      
                            ufs/FreeNASs1a     N/A  da0s1a                      
                            ufs/FreeNASs2a     N/A  da0s2a                      

ada0 and ada1 are the original 2TB drives and have never been disconnected.
ada2 and ada3 are the original 2TB drives that were offlined and disconnected; I put them back in the same positions.
Let me explain what I did again.
1. Offlined the 2TB drive at ada3 and replaced it with a 3TB drive; the resilver completed successfully.
2. Offlined the 2TB drive at ada2 and replaced it with a 3TB drive; during that resilvering process (at roughly 15%), the 3TB drive at ada3 failed.
3. Following paleoN's instructions, I put the two offlined original 2TB drives back in their original positions and ran the commands.
 

zzhangz

Dabbler
Joined
Dec 26, 2012
Messages
16
A possible insurance policy for you: if the path forward is uncertain and the data is valuable, is it worth buying another set of disks and doing a block copy of them? That may allow you to attempt a rebuild with the copies rather than the originals, and to have more than one go at it if need be.

The method is less expensive than it sounds, as the disks will have a resale value.

I watch with interest. Good luck zzhangz. I wish I knew more about ZFS to help out.

Just wondering why the 3TB failed, and showed 'dead' in one move. It's not something silly like an intermittent connection?

Yes, I'm pretty sure it's a hard drive issue. It shows as offline in FreeNAS, and when I restart the system the BIOS can't find it.
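
If the drive is detected at all, smartmontools (bundled with FreeNAS) can usually confirm whether it is really dying. A minimal sketch, assuming the suspect disk still enumerates as ada3; if the BIOS can't even see it, this won't return anything useful:
Code:
# Dump the SMART identity, health status and error logs for the suspect drive
smartctl -a /dev/ada3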
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
You will likely need to set up SSH, as the output is long and I need to see all of label 3's information for the following:
Code:
zdb -l /dev/ada0p2

zdb -l /dev/ada2p2
The four labels on ada0 should match each other. The labels on ada2 should also match each other, but differ from ada0's.
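
Since the full zdb output runs to hundreds of lines, one convenient way to eyeball the comparison is to pull out just the fields that matter; the grep below is only an assumed convenience, not part of paleoN's instructions:
Code:
# Show only the txg, guid and path fields from each of the four labels,
# which is enough to see whether the labels on a disk agree with each other
zdb -l /dev/ada0p2 | grep -E 'txg|guid|path'
zdb -l /dev/ada2p2 | grep -E 'txg|guid|path'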
 

zzhangz

Dabbler
Joined
Dec 26, 2012
Messages
16
You will likely need to set up SSH, as the output is long and I need to see all of label 3's information for the following.

Code:
/root$ zdb -l /dev/ada0p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 769897
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: ''
    top_guid: 481540924511517899
    guid: 8131109554781596309
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            not_present: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 15515033115496397770
            path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            phys_path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            whole_disk: 1
            DTL: 111
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 769897
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: ''
    top_guid: 481540924511517899
    guid: 8131109554781596309
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            not_present: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 15515033115496397770
            path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            phys_path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            whole_disk: 1
            DTL: 111
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 769897
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: ''
    top_guid: 481540924511517899
    guid: 8131109554781596309
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            not_present: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 15515033115496397770
            path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            phys_path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            whole_disk: 1
            DTL: 111
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 769897
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: ''
    top_guid: 481540924511517899
    guid: 8131109554781596309
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            not_present: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 15515033115496397770
            path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            phys_path: '/dev/gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807'
            whole_disk: 1
            DTL: 111
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108


Code:
/root$ zdb -l /dev/ada2p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 761257
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: 'freenas.local'
    top_guid: 481540924511517899
    guid: 9778957885652055670
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 9778957885652055670
            path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 110
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 761257
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: 'freenas.local'
    top_guid: 481540924511517899
    guid: 9778957885652055670
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 9778957885652055670
            path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 110
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 761257
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: 'freenas.local'
    top_guid: 481540924511517899
    guid: 9778957885652055670
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 9778957885652055670
            path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 110
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 15
    name: 'ZZSHARE'
    state: 0
    txg: 761257
    pool_guid: 55706232574185011
    hostid: 2153463870
    hostname: 'freenas.local'
    top_guid: 481540924511517899
    guid: 9778957885652055670
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 481540924511517899
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 2417920478711271033
            path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            phys_path: '/dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807'
            whole_disk: 1
            DTL: 166
        children[1]:
            type: 'disk'
            id: 1
            guid: 9778957885652055670
            path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8ef6e986-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 110
        children[2]:
            type: 'disk'
            id: 2
            guid: 11039172476759822341
            path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/8fb92911-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 109
        children[3]:
            type: 'disk'
            id: 3
            guid: 8131109554781596309
            path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            phys_path: '/dev/gptid/903af3f2-64a4-11e1-84d5-6c626d384807'
            whole_disk: 0
            DTL: 108
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
A few things.

Now that I look I don't see what version of FreeNAS you are running. Before we mess around with any import commands you need to be running something with the ZFS v28 code. It has a number of bug fixes and is improved with import/recovery. If you are not currently running FreeNAS 8.3/8.3-p1 go download & install to another USB stick. I would also save your current config.
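
As a side note, a quick way to confirm what the running system supports before reinstalling (the config-database path is an assumption about the FreeNAS 8.x layout, not something stated in this thread):
Code:
# Show the installed FreeNAS release string (FreeNAS 8.x records it in /etc/version)
cat /etc/version

# List the ZFS pool versions this build understands; v28 should be listed on 8.3
zpool upgrade -v

# Copy the configuration database somewhere safe before reinstalling
# (destination below is a placeholder)
cp /data/freenas-v1.db /path/to/backup/freenas-v1.db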

Also, put the 3TB drive that was still resilvering, not the failed one, into the slot for ada3.

Under FreeNAS 8.3 rerun:
Code:
zpool status -v

zpool import

camcontrol devlist

glabel status


A possible insurance policy for you - If a path forwards is uncertain, and the data is valuable, is it worth buying another set of disks and doing a block copy of them?
I agree!
+++++1
Last chance. All the commands up to this point were informational; future ones may, or will, start making at least minor changes to what's left of the pool, and with the state it's in now, that may thoroughly break it.
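
The thread doesn't show which command paleoN had in mind here, but as an illustration of the kind of step that stops being purely informational, ZFS v28 can attempt a recovery import, and adding -n turns it into a dry run that only reports whether recovery would work:
Code:
# Dry-run recovery import: -F would roll the pool back to its last importable
# txg; -n only reports whether that would succeed, without changing anything
zpool import -f -F -n ZZSHARE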

I will check in on this thread again in about 9 to 10 hours.
 

zzhangz

Dabbler
Joined
Dec 26, 2012
Messages
16
A few things.

Now that I look I don't see what version of FreeNAS you are running. Before we mess around with any import commands you need to be running something with the ZFS v28 code. It has a number of bug fixes and is improved with import/recovery. If you are not currently running FreeNAS 8.3/8.3-p1 go download & install to another USB stick. I would also save your current config.

Also, put the 3TB drive that was still resilvering, not the failed one, into the slot for ada3.

My FreeNAS rev was 8.3.0 release before updating the drive.
I want to make sure that I understand your instruction.
You want me to put the 3TB drive (the good one that was resilvering, previously at ada2) back at ada3?
FYI the bad 3TB drive was at ada3.

Thanks again.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
My FreeNAS rev was 8.3.0 release before updating the drive.
Good.

You want me to put the 3TB drive (the good one that was resilvering, previously at ada2) back at ada3?
FYI the bad 3TB drive was at ada3.
Yes, exactly.

Your labels aren't showing that the 3TB is currently resilvering, but maybe that's because it's a v15 pool?
 

zzhangz

Dabbler
Joined
Dec 26, 2012
Messages
16
After I did this, it started the resilvering process!!
Code:
zpool status -v
status: One or more devices is currently being resilvered.  The pool will       
        continue to function, possibly in a degraded state.                     
action: Wait for the resilver to complete.                                      
  scan: resilver in progress since Mon Dec 24 08:25:42 2012                     
        1.82T scanned out of 6.86T at 27.4M/s, 53h29m to go                     
        188G resilvered, 26.60% done                                            
config:                                                                         
                                                                                
        NAME                                            STATE     READ WRITE CKSUM
        ZZSHARE                                         DEGRADED     0     0 23.8K
          raidz1-0                                      DEGRADED     0     0 48.1K
            2417920478711271033                         UNAVAIL      0     0     0  was /dev/gptid/c016d7c5-4d75-11e2-a587-6c626d384807
            gptid/69a88e8c-4dcd-11e2-b56f-6c626d384807  ONLINE       0     0     0  (resilvering)
            gptid/8fb92911-64a4-11e1-84d5-6c626d384807  ONLINE       0     0     0  (resilvering)
            gptid/903af3f2-64a4-11e1-84d5-6c626d384807  ONLINE       0     0     0


Let's wait until resilvering has finished.

Thanks
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Nice job PaleoN, that took some patience ;)

zzhangz, if it finishes before any other disks fail, you are a VERY lucky guy. Do a backup and buy PaleoN a year's worth of gas! :D
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
zzhangz.. you are the first person I've seen in this forum that "lost" their data and then managed to save it. You should consider yourself so freakin' lucky right now. Go buy a lottery ticket!

One thing you should realize is that if you had used a hardware RAID5 instead of a ZFS RAIDZ1 you would not have all of your data right now.
 

zzhangz

Dabbler
Joined
Dec 26, 2012
Messages
16
zzhangz.. you are the first person I've seen in this forum that "lost" their data and then managed to save it. You should consider yourself so freakin' lucky right now. Go buy a lottery ticket!

One thing you should realize is that if you had used a hardware RAID5 instead of a ZFS RAIDZ1 you would not have all of your data right now.

Wait!! Not yet.
I still don't understand how a 4-disk RAID-Z1 array can resilver two disks at the same time.
I can see the root folder from other computers, but I can't open it. Also, the size of the folder shows only 823k when it should be about 6TB. Anyway, I will wait until resilvering finishes.
[Attached screenshot: Capture.JPG]
 