Zpool import issues

rlucas · Cadet · Joined May 25, 2022 · Messages: 5
I see this question come up a lot, so apologies in advance.

For starters:
HP ProLiant DL360p Gen8
2x Intel Xeon E5-2640 @ 2.5 GHz
64 GB RAM
HP P420i RAID controller

Unfortunately I have to set up each drive as a single-disk RAID 0 volume; the controller does not allow direct (passthrough) control of the disks.

Set up as follows:
Drive 1: 279 GB SAS = operating system
Drive 2: 300 GB SAS = iocage
Drives 3-5: 2 TB SSD = raidz1 (RAID 5 equivalent)
Drive 6: 2 TB = spare
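
(Each logical volume shows up to FreeBSD as its own daX device; if it helps, camcontrol devlist from the shell is a safe, read-only way to confirm the enumeration:)

Code:
root@truenas[~]# camcontrol devlist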

I had one drive in my storage zpool that seemed to be intermittently failing. When the drive became unresponsive, the whole system would crash; after a reboot everything would come back fine. I shut the unit down to wait for a new drive to use for recovery (drive 6). When I rebooted the machine, the internal RAID controller had forgotten all the drives and I had to re-initialize them. At that point my zpools became unavailable.

I tried to remove the non-working zpool and re-import it. I can see the pool in the GUI under Import Existing Pool.
It sits there and thinks for a moment, then comes back with "one or more devices is currently unavailable".

Code:
root@truenas[~]# gpart status
 Name  Status  Components
da0p1      OK  da0
da0p2      OK  da0
da0p3      OK  da0
da2p1      OK  da2
da2p2      OK  da2
da3p1      OK  da3
da3p2      OK  da3
da4p1      OK  da4
da4p2      OK  da4
da5p1      OK  da5
da5p2      OK  da5


Code:
root@truenas[~]# gpart show
=>       40  585871888  da0  GPT  (279G)
         40       1024    1  freebsd-boot  (512K)
       1064   33554432    3  freebsd-swap  (16G)
   33555496  552304640    2  freebsd-zfs  (263G)
  585860136      11792       - free -  (5.8M)

=>        40  3906963552  da2  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902769160    2  freebsd-zfs  (1.8T)

=>        40  3906963552  da3  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902769160    2  freebsd-zfs  (1.8T)

=>        40  3906963552  da4  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902769160    2  freebsd-zfs  (1.8T)

=>        40  3906963552  da5  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902769160    2  freebsd-zfs  (1.8T)




Code:
root@truenas[~]# zpool import
   pool: Raid2
     id: 18028162585485180263
  state: UNAVAIL
status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-3C
 config:

        Raid2                                           UNAVAIL  insufficient replicas
          raidz1-0                                      UNAVAIL  insufficient replicas
            gptid/e6b6d0ea-1a76-11ec-86ce-d89d671955f4  UNAVAIL  cannot open
            gptid/e6a909a1-1a76-11ec-86ce-d89d671955f4  UNAVAIL  cannot open
            gptid/e6c09c25-1a76-11ec-86ce-d89d671955f4  ONLINE

   pool: Raid2
     id: 11782628706112584700
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        Raid2                                           ONLINE
          raidz1-0                                      ONLINE
            gptid/3064968a-2443-11ec-beeb-d89d671955f4  ONLINE
            gptid/307c3ca6-2443-11ec-beeb-d89d671955f4  ONLINE
            gptid/3081b9d7-2443-11ec-beeb-d89d671955f4  ONLINE


What is weird is that only one Raid2 pool was ever created.

When I try to import the one whose drives are all online, I get this response:

Code:
root@truenas[~]# zpool import 11782628706112584700
cannot import 'Raid2': one or more devices is currently unavailable


I've also tried it with the force flag (-f), with the same outcome.
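
For clarity, the force attempt was along these lines; the -d form (pointing the device search at the gptid directory) is one I've seen suggested elsewhere but haven't tried yet:

Code:
root@truenas[~]# zpool import -f 11782628706112584700
root@truenas[~]# zpool import -d /dev/gptid -f Raid2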

After all this, I booted a fresh install from a different OS drive and tried the import again, with the same responses.

How screwed am I? I do have a backup, but it is at least six months old and there have been quite a lot of additions since. Shame on me for not backing up more often.

Thanks for any guidance in advance.
 

sretalla · Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
My guess would be that the two pools with the same name, both sitting there not imported, have something to do with the issue, since I can see that you're trying to import a pool which has all of its disks online.

If you can, remove the remaining disk (I guess the one you call the "spare") and try again when the desired pool is the only one shown by zpool import.
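
Roughly this, once the spare is out (a sketch using the GUID from your output; importing read-only first is the cautious option):

Code:
zpool import                                        # should now list only one Raid2
zpool import -o readonly=on -R /mnt 11782628706112584700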
 

rlucas · Cadet · Joined May 25, 2022 · Messages: 5
Thank you for responding.

I tried what you suggested and removed the spare drive.

Still no luck. The second same-named pool did go away, but when I perform the import it still says something is missing. The three drives that populate the listing are the three original members of the zpool.

Code:
root@truenas[~]# gpart status
 Name  Status  Components
da0p1      OK  da0
da0p2      OK  da0
da0p3      OK  da0
da2p1      OK  da2
da2p2      OK  da2
da3p1      OK  da3
da3p2      OK  da3
da4p1      OK  da4
da4p2      OK  da4
da1p1      OK  da1
da1p2      OK  da1


Code:
root@truenas[~]# gpart show
=>       40  585871888  da0  GPT  (279G)
         40       1024    1  freebsd-boot  (512K)
       1064   33554432    3  freebsd-swap  (16G)
   33555496  552304640    2  freebsd-zfs  (263G)
  585860136      11792       - free -  (5.8M)

=>        40  3906963552  da2  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902769160    2  freebsd-zfs  (1.8T)

=>        40  3906963552  da3  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902769160    2  freebsd-zfs  (1.8T)

=>        40  3906963552  da4  GPT  (1.8T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  3902769160    2  freebsd-zfs  (1.8T)

=>       40  585871888  da1  GPT  (279G)
         40         88       - free -  (44K)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  581677496    2  freebsd-zfs  (277G)



Code:
root@truenas[~]# zpool import
   pool: Raid2
     id: 11782628706112584700
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        Raid2                                           ONLINE
          raidz1-0                                      ONLINE
            gptid/3064968a-2443-11ec-beeb-d89d671955f4  ONLINE
            gptid/307c3ca6-2443-11ec-beeb-d89d671955f4  ONLINE
            gptid/3081b9d7-2443-11ec-beeb-d89d671955f4  ONLINE


Code:
root@truenas[~]# zpool import Raid2
cannot import 'Raid2': one or more devices is currently unavailable


I'm not too up to speed on FreeBSD; I know a little more about Debian-based systems.
Are there any other checks I can perform? I remember seeing other things people were doing, but I can't find those posts anymore.
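
The sort of thing I remember (reconstructed from memory, so the exact flags may be off):

Code:
camcontrol devlist        # how the OS sees the disks behind the controller
glabel status             # map gptid labels to da devices
zdb -l /dev/da2p2         # dump the ZFS labels on one pool member
smartctl -a /dev/da2      # drive health / SMART data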

Thanks
 

rlucas · Cadet · Joined May 25, 2022 · Messages: 5
Just saw this in someone else's post that seems to describe a similar issue with a similar setup.
I performed the same commands they did and got the same results. Here are mine.

Code:
 zpool import -F Raid2
internal error: cannot import 'Raid2': Integrity check failed
zsh: abort (core dumped)  zpool import -F Raid2



Code:
gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 32
last: 585871927
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   efimedia: HD(1,GPT,707a4fd7-7db8-11e9-8a76-d89d671955f4,0x28,0x400)
   rawuuid: 707a4fd7-7db8-11e9-8a76-d89d671955f4
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: da0p2
   Mediasize: 282779975680 (263G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 544768
   Mode: r1w1e1
   efimedia: HD(2,GPT,70919bc6-7db8-11e9-8a76-d89d671955f4,0x2000428,0x20eb8000)
   rawuuid: 70919bc6-7db8-11e9-8a76-d89d671955f4
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 282779975680
   offset: 17180413952
   type: freebsd-zfs
   index: 2
   end: 585860135
   start: 33555496
3. Name: da0p3
   Mediasize: 17179869184 (16G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 544768
   Mode: r1w1e1
   efimedia: HD(3,GPT,70863966-7db8-11e9-8a76-d89d671955f4,0x428,0x2000000)
   rawuuid: 70863966-7db8-11e9-8a76-d89d671955f4
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 17179869184
   offset: 544768
   type: freebsd-swap
   index: 3
   end: 33555495
   start: 1064
Consumers:
1. Name: da0
   Mediasize: 299966445568 (279G)
   Sectorsize: 512
   Mode: r2w2e4

Geom name: da2
modified: false
state: OK
fwheads: 255
fwsectors: 32
last: 3906963591
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,30575203-2443-11ec-beeb-d89d671955f4,0x80,0x400000)
   rawuuid: 30575203-2443-11ec-beeb-d89d671955f4
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da2p2
   Mediasize: 1998217809920 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   efimedia: HD(2,GPT,3064968a-2443-11ec-beeb-d89d671955f4,0x400080,0xe89f8808)
   rawuuid: 3064968a-2443-11ec-beeb-d89d671955f4
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998217809920
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3906963591
   start: 4194432
Consumers:
1. Name: da2
   Mediasize: 2000365379584 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 32
last: 3906963591
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,3075c66b-2443-11ec-beeb-d89d671955f4,0x80,0x400000)
   rawuuid: 3075c66b-2443-11ec-beeb-d89d671955f4
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da3p2
   Mediasize: 1998217809920 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   efimedia: HD(2,GPT,3081b9d7-2443-11ec-beeb-d89d671955f4,0x400080,0xe89f8808)
   rawuuid: 3081b9d7-2443-11ec-beeb-d89d671955f4
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998217809920
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3906963591
   start: 4194432
Consumers:
1. Name: da3
   Mediasize: 2000365379584 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 32
last: 3906963591
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da4p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,306d0d72-2443-11ec-beeb-d89d671955f4,0x80,0x400000)
   rawuuid: 306d0d72-2443-11ec-beeb-d89d671955f4
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da4p2
   Mediasize: 1998217809920 (1.8T)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   efimedia: HD(2,GPT,307c3ca6-2443-11ec-beeb-d89d671955f4,0x400080,0xe89f8808)
   rawuuid: 307c3ca6-2443-11ec-beeb-d89d671955f4
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998217809920
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3906963591
   start: 4194432
Consumers:
1. Name: da4
   Mediasize: 2000365379584 (1.8T)
   Sectorsize: 512
   Mode: r0w0e0

Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 32
last: 585871927
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,86105ab4-dc8d-11ec-a623-d89d671955f4,0x80,0x400000)
   rawuuid: 86105ab4-dc8d-11ec-a623-d89d671955f4
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da1p2
   Mediasize: 297818877952 (277G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,86229878-dc8d-11ec-a623-d89d671955f4,0x400080,0x22abb1b8)
   rawuuid: 86229878-dc8d-11ec-a623-d89d671955f4
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 297818877952
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 585871927
   start: 4194432
Consumers:
1. Name: da1
   Mediasize: 299966445568 (279G)
   Sectorsize: 512
   Mode: r1w1e3


Code:
root@truenas[~]# zdb -l /dev/da2p2
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'Raid2'
    state: 0
    txg: 3832054
    pool_guid: 11782628706112584700
    errata: 0
    hostid: 2936774556
    hostname: 'freenas.local'
    top_guid: 2276245220513528421
    guid: 17426277804674015455
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 2276245220513528421
        nparity: 1
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 5994639261696
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 17426277804674015455
            path: '/dev/gptid/3064968a-2443-11ec-beeb-d89d671955f4'
            DTL: 2731
            create_txg: 4
            expansion_time: 1652910598
        children[1]:
            type: 'disk'
            id: 1
            guid: 6610088699893809357
            path: '/dev/gptid/307c3ca6-2443-11ec-beeb-d89d671955f4'
            DTL: 2730
            create_txg: 4
            expansion_time: 1652823437
        children[2]:
            type: 'disk'
            id: 2
            guid: 17776907107414487279
            path: '/dev/gptid/3081b9d7-2443-11ec-beeb-d89d671955f4'
            DTL: 2729
            create_txg: 4
            expansion_time: 1652823133
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2 3
root@truenas[~]# zdb -l /dev/da3p2
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'Raid2'
    state: 0
    txg: 3832054
    pool_guid: 11782628706112584700
    errata: 0
    hostid: 2936774556
    hostname: 'freenas.local'
    top_guid: 2276245220513528421
    guid: 17776907107414487279
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 2276245220513528421
        nparity: 1
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 5994639261696
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 17426277804674015455
            path: '/dev/gptid/3064968a-2443-11ec-beeb-d89d671955f4'
            DTL: 2731
            create_txg: 4
            expansion_time: 1652910598
        children[1]:
            type: 'disk'
            id: 1
            guid: 6610088699893809357
            path: '/dev/gptid/307c3ca6-2443-11ec-beeb-d89d671955f4'
            DTL: 2730
            create_txg: 4
            expansion_time: 1652823437
        children[2]:
            type: 'disk'
            id: 2
            guid: 17776907107414487279
            path: '/dev/gptid/3081b9d7-2443-11ec-beeb-d89d671955f4'
            DTL: 2729
            create_txg: 4
            expansion_time: 1652823133
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2 3
root@truenas[~]# zdb -l /dev/da4p2
failed to unpack label 0
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'Raid2'
    state: 0
    txg: 3832054
    pool_guid: 11782628706112584700
    errata: 0
    hostid: 2936774556
    hostname: 'freenas.local'
    top_guid: 2276245220513528421
    guid: 6610088699893809357
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 2276245220513528421
        nparity: 1
        metaslab_array: 256
        metaslab_shift: 34
        ashift: 12
        asize: 5994639261696
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 17426277804674015455
            path: '/dev/gptid/3064968a-2443-11ec-beeb-d89d671955f4'
            DTL: 2731
            create_txg: 4
            expansion_time: 1652910598
        children[1]:
            type: 'disk'
            id: 1
            guid: 6610088699893809357
            path: '/dev/gptid/307c3ca6-2443-11ec-beeb-d89d671955f4'
            DTL: 2730
            create_txg: 4
            expansion_time: 1652823437
        children[2]:
            type: 'disk'
            id: 2
            guid: 17776907107414487279
            path: '/dev/gptid/3081b9d7-2443-11ec-beeb-d89d671955f4'
            DTL: 2729
            create_txg: 4
            expansion_time: 1652823133
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2 3
 

rlucas · Cadet · Joined May 25, 2022 · Messages: 5
So looking into my pools, it looks like there are labels that fail to unpack. On each drive of the pool, labels 0 and 1 fail to unpack, but labels 2 and 3 seem to be just fine. Is there a way to correct this, or am I completely screwed?
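
From what I've read, ZFS keeps four copies of the label on each member: labels 0 and 1 in the first 512K of the partition, and labels 2 and 3 in the last 512K. So losing 0 and 1 on every drive would fit the controller having rewritten the start of each disk when it re-initialized the logical volumes. A read-only way to peek at that region (a sketch; substitute each member device in turn):

Code:
root@truenas[~]# dd if=/dev/da2p2 bs=256k count=2 2>/dev/null | hexdump -C | head -n 40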

Thanks.
 