zpool import I/O error

ezh
Cadet · Joined: Sep 1, 2011 · Messages: 7
Hello. I lost my zpool :(
It was a 4x2TB ZFS RAIDZ1 pool.
One HDD is completely dead (hardware failure), and the other three look like this:
Code:
[root@freenas] ~# zpool import
  pool: main_vol
    id: 10036867675565415040
  state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
  see: http://illumos.org/msg/ZFS-8000-3C
config:
 
        main_vol                                        FAULTED  corrupted data
          raidz1-0                                      FAULTED  corrupted data
            16392332599436373384                        FAULTED  corrupted data
            gptid/6eb3cd05-807d-11e1-b905-002522e9cb33  ONLINE
            12415081290554517899                        UNAVAIL  cannot open
            gptid/6fb8055a-807d-11e1-b905-002522e9cb33  ONLINE


Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
                            ufs/FreeNASs3    N/A  da0s3
                            ufs/FreeNASs4    N/A  da0s4
                            ufs/FreeNASs1a    N/A  da0s1a
gptid/6fb8055a-807d-11e1-b905-002522e9cb33    N/A  ada0p2
gptid/6e3bd32a-807d-11e1-b905-002522e9cb33    N/A  ada1p2
gptid/6eb3cd05-807d-11e1-b905-002522e9cb33    N/A  ada2p2


Code:
[root@freenas] ~# gpart show
=>      63  15820737  da0  MBR  (7.6G)
        63  1930257    1  freebsd  [active]  (942M)
  1930320        63      - free -  (31k)
  1930383  1930257    2  freebsd  (942M)
  3860640      3024    3  freebsd  (1.5M)
  3863664    41328    4  freebsd  (20M)
  3904992  11915808      - free -  (5.7G)
 
=>      0  1930257  da0s1  BSD  (942M)
        0      16        - free -  (8.0k)
      16  1930241      1  !0  (942M)
 
=>        34  3907029101  ada0  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  3902834703    2  freebsd-zfs  (1.8T)
 
=>        34  3907029101  ada1  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  3902834703    2  freebsd-zfs  (1.8T)
 
=>        34  3907029101  ada2  GPT  (1.8T)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  3902834703    2  freebsd-zfs  (1.8T)


Code:
[root@freenas] ~# gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 15820799
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: da0s1
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r1w0e1
  attrib: active
  rawtype: 165
  length: 988291584
  offset: 32256
  type: freebsd
  index: 1
  end: 1930319
  start: 63
2. Name: da0s2
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 988356096
  Mode: r0w0e0
  rawtype: 165
  length: 988291584
  offset: 988356096
  type: freebsd
  index: 2
  end: 3860639
  start: 1930383
3. Name: da0s3
  Mediasize: 1548288 (1.5M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1976647680
  Mode: r0w0e0
  rawtype: 165
  length: 1548288
  offset: 1976647680
  type: freebsd
  index: 3
  end: 3863663
  start: 3860640
4. Name: da0s4
  Mediasize: 21159936 (20M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 1978195968
  Mode: r1w1e2
  rawtype: 165
  length: 21159936
  offset: 1978195968
  type: freebsd
  index: 4
  end: 3904991
  start: 3863664
Consumers:
1. Name: da0
  Mediasize: 8100249600 (7.6G)
  Sectorsize: 512
  Mode: r2w1e4
 
Geom name: da0s1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 1930256
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da0s1a
  Mediasize: 988283392 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 40448
  Mode: r1w0e1
  rawtype: 0
  length: 988283392
  offset: 8192
  type: !0
  index: 1
  end: 1930256
  start: 16
Consumers:
1. Name: da0s1
  Mediasize: 988291584 (942M)
  Sectorsize: 512
  Stripesize: 0
  Stripeoffset: 32256
  Mode: r1w0e1
 
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 6f9967c2-807d-11e1-b905-002522e9cb33
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada0p2
  Mediasize: 1998251367936 (1.8T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 6fb8055a-807d-11e1-b905-002522e9cb33
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 1998251367936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 3907029134
  start: 4194432
Consumers:
1. Name: ada0
  Mediasize: 2000398934016 (1.8T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 6e20ee58-807d-11e1-b905-002522e9cb33
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada1p2
  Mediasize: 1998251367936 (1.8T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 6e3bd32a-807d-11e1-b905-002522e9cb33
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 1998251367936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 3907029134
  start: 4194432
Consumers:
1. Name: ada1
  Mediasize: 2000398934016 (1.8T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2
 
Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
  Mediasize: 2147483648 (2.0G)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e1
  rawuuid: 6e9bb2c8-807d-11e1-b905-002522e9cb33
  rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 2147483648
  offset: 65536
  type: freebsd-swap
  index: 1
  end: 4194431
  start: 128
2. Name: ada2p2
  Mediasize: 1998251367936 (1.8T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r0w0e0
  rawuuid: 6eb3cd05-807d-11e1-b905-002522e9cb33
  rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
  label: (null)
  length: 1998251367936
  offset: 2147549184
  type: freebsd-zfs
  index: 2
  end: 3907029134
  start: 4194432
Consumers:
1. Name: ada2
  Mediasize: 2000398934016 (1.8T)
  Sectorsize: 512
  Stripesize: 4096
  Stripeoffset: 0
  Mode: r1w1e2


Code:
[root@freenas] ~# camcontrol devlist
<ST2000DM001-9YN164 CC4C>          at scbus1 target 0 lun 0 (ada0,pass0)
<ST2000DM001-9YN164 CC4C>          at scbus2 target 0 lun 0 (ada1,pass1)
<WDC WD20EARS-00S8B1 80.00A80>    at scbus3 target 0 lun 0 (ada2,pass2)
<JetFlash Transcend 8GB 1100>      at scbus6 target 0 lun 0 (pass3,da0)

Code:
[root@freenas] ~# zpool import -fF main_vol
cannot import 'main_vol': I/O error
        Destroy and re-create the pool from
        a backup source.


I also tried
Code:
[root@freenas] ~# zpool import -fFX main_vol

and it seemed to be doing something (zpool showed CPU usage in top), but I waited for 7 days and nothing changed (zpool kept consuming CPU and the command still looked like it was in progress).
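
To check whether the -X import is actually still reading the disks rather than just spinning, per-device I/O can be watched with gstat; its -f flag takes a device-name regex (ada[0-2] matches the data disks in my setup):
Code:
[root@freenas] ~# gstat -f 'ada[0-2]'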

It looks like one of the HDDs has some logical problems. Is it possible to recover the pool in this situation?
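
Two less destructive import variants that might still be worth trying, if I read the zpool import options right: a read-only import (no writes to the pool at all), and a dry-run rewind (-Fn), which only reports whether discarding the last few transactions would make the pool importable, without actually doing it:
Code:
[root@freenas] ~# zpool import -f -o readonly=on main_vol
[root@freenas] ~# zpool import -fFn main_vol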
 

ezh
Cadet · Joined: Sep 1, 2011 · Messages: 7
The UNAVAIL device (12415081290554517899, cannot open) is
gptid/6e3bd32a-807d-11e1-b905-002522e9cb33  N/A  ada1p2

but:
Code:
[root@freenas] ~# zdb -l /dev/ada1p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'main_vol'
    state: 0
    txg: 4471294
    pool_guid: 10036867675565415040
    hostid: 4266313884
    hostname: 'freenas.local'
    top_guid: 9046395810182524446
    guid: 16392332599436373384
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 9046395810182524446
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 16392332599436373384
            path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 99
        children[1]:
            type: 'disk'
            id: 1
            guid: 11152698004673853093
            path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 98
        children[2]:
            type: 'disk'
            id: 2
            guid: 12415081290554517899
            path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 97
        children[3]:
            type: 'disk'
            id: 3
            guid: 16739777816771128006
            path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 96
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'main_vol'
    state: 0
    txg: 4175512
    pool_guid: 10036867675565415040
    hostid: 4266313884
    hostname: ''
    top_guid: 9046395810182524446
    guid: 16392332599436373384
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 9046395810182524446
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 16392332599436373384
            path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 99
        children[1]:
            type: 'disk'
            id: 1
            guid: 11152698004673853093
            path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 98
        children[2]:
            type: 'disk'
            id: 2
            guid: 12415081290554517899
            path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 97
        children[3]:
            type: 'disk'
            id: 3
            guid: 16739777816771128006
            path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 96
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'main_vol'
    state: 0
    txg: 4175512
    pool_guid: 10036867675565415040
    hostid: 4266313884
    hostname: ''
    top_guid: 9046395810182524446
    guid: 16392332599436373384
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 9046395810182524446
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 16392332599436373384
            path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 99
        children[1]:
            type: 'disk'
            id: 1
            guid: 11152698004673853093
            path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 98
        children[2]:
            type: 'disk'
            id: 2
            guid: 12415081290554517899
            path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 97
        children[3]:
            type: 'disk'
            id: 3
            guid: 16739777816771128006
            path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 96
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'main_vol'
    state: 0
    txg: 4175512
    pool_guid: 10036867675565415040
    hostid: 4266313884
    hostname: ''
    top_guid: 9046395810182524446
    guid: 16392332599436373384
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 9046395810182524446
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 36
        ashift: 12
        asize: 7992986566656
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 16392332599436373384
            path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6e3bd32a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 99
        children[1]:
            type: 'disk'
            id: 1
            guid: 11152698004673853093
            path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6eb3cd05-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 98
        children[2]:
            type: 'disk'
            id: 2
            guid: 12415081290554517899
            path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6f2b5d71-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 97
        children[3]:
            type: 'disk'
            id: 3
            guid: 16739777816771128006
            path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            phys_path: '/dev/gptid/6fb8055a-807d-11e1-b905-002522e9cb33'
            whole_disk: 0
            DTL: 96
[root@freenas] ~#
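
To map GUIDs to partitions directly, the same label dump can be run over each surviving freebsd-zfs partition and the vdev's own guid pulled out (zdb -l prints it before the vdev_tree section; this is sh syntax, so run it via /bin/sh if the login shell is csh):
Code:
for p in ada0p2 ada1p2 ada2p2; do
  echo "== ${p} =="
  zdb -l /dev/${p} | grep -w guid | head -1
done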
 

cyberjock
Inactive Account · Joined: Mar 25, 2012 · Messages: 19,526
Yeah, your data is pretty much toast. See that link in my sig about RAID5 being dead? Click on it and read it. Then, after you've read it and beaten yourself up over the poor decision, go ahead and rebuild your pool as RAIDZ2.
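
For reference, the bare CLI equivalent of a four-disk RAIDZ2 create looks roughly like this (device names are examples only; on FreeNAS the supported path is the GUI volume manager, which also sets up the swap partitions and gptid labels seen earlier in this thread):
Code:
[root@freenas] ~# zpool create main_vol raidz2 ada0 ada1 ada2 ada3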
 

ezh
Cadet · Joined: Sep 1, 2011 · Messages: 7
Yeah, sad but true.

I recreated the pool :)

Code:
[root@freenas] ~# zpool status
  pool: main_vol
 state: ONLINE
  scan: none requested
config:
 
        NAME                                            STATE    READ WRITE CKSUM
        main_vol                                        ONLINE      0    0    0
          raidz2-0                                      ONLINE      0    0    0
            gptid/61ba39e8-1238-11e3-9e99-002522e9cb33  ONLINE      0    0    0
            gptid/62a2484f-1238-11e3-9e99-002522e9cb33  ONLINE      0    0    0
            gptid/6394d393-1238-11e3-9e99-002522e9cb33  ONLINE      0    0    0
            gptid/642d16da-1238-11e3-9e99-002522e9cb33  ONLINE      0    0    0
 
errors: No known data errors
 