ZFS Pool Failure


Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
I have been using ZFS for quite some time now. I understand the simple zpool commands and how to scrub and manually mount/import a pool.
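For reference, the sort of routine commands I mean, using this pool's name ("zfs") as the example:
Code:
zpool status zfs     # overall pool health
zpool scrub zfs      # start a scrub
zpool export zfs     # cleanly detach the pool
zpool import zfs     # re-import it by name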

This post is about the 2nd NAS I have built.
Code:
Hardware Specs:
Board: Supermicro X7DWU
BIOS: 7DWUB048
RAM: 8x CT51272AF667 - 4GB FB-DDR2 667 = 32GB (Yes, 32GB of fully buffered RAM)
ada0: WD2500JD-00HBB0 250GB SATA - BOOT DRIVE
ada1-3: ST31000524AS
ada4: WD10EALX



I am going to start from the beginning.
I have an ESX server that hosts many environments.
A few of them operate from this NAS.
I was recently having memory errors on my ESX server, so I pulled it and the NAS, as the NAS was on top of the ESX server.
I performed my memory tests, replaced the bad DIMMs, and promptly re-racked both servers. I made NO CHANGES to the NAS.

Upon powering the NAS up, I found that my ZFS pool "zfs" couldn't be re-imported.

This is what I get when I try to manually import it:
Code:
   pool: zfs
     id: 331113494358268866
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
 config:

        zfs                      UNAVAIL  insufficient replicas
          raidz1-0               UNAVAIL  insufficient replicas
            3991037278626503309  UNAVAIL  cannot open
            ada4                 ONLINE
            69528833404146203    UNAVAIL  cannot open
            ada3                 ONLINE




Side note: This is the first time I have gotten ada3 and ada4 to both come online. That was simply trial and error of moving drives around.
Strangely, the drives I am having issues with are only the Seagates. (Until now, of course.)
I cannot lose this data. Any suggestions?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Well, from experience with someone else who had bad RAM in their FreeNAS server, you have roughly a 100% chance of no recovery.

Can you provide more info on your ESX VM? How much RAM did it have? What method did you use to make the drives available to the VM?

Do NOT do ANYTHING with those drives until you hear from one of the senior guys that says you are screwed. There have been a few people who thought they knew better and started doing stuff they thought was safe, and they actually lost their data because they didn't listen to this tip. Let me say this again... DO NOT CHANGE ANYTHING ON YOUR VM, HARDWARE, ETC. UNTIL A SENIOR POSTER SAYS TO.
 

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
I don't believe bad RAM would cause any of this.

ZFS simply cannot find any disks with the ZFS labels on them. Are you sure the drive order has not changed somehow and the wrong disks are attached to the FreeNAS VM?

Another thing I noticed: you have whole disks in ZFS, which is not the recommended way. Did you manually create that pool?

Paste:

# gpart show
# glabel status
# sysctl kern.disks
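
If it helps narrow things down, zdb -l is a read-only way to see where the ZFS labels actually live (on the raw device vs. the freebsd-zfs partition); the device names below are just examples from your pool listing:

# zdb -l /dev/ada3
# zdb -l /dev/ada3p2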
 

Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
Forgive me, I think I need to clarify.
I had bad RAM on the ESX box, NOT the NAS. The specs I posted are for the NAS.

To answer William's question:
I made the volume in the WebUI volume manager, then made shares and started storing data.
Also, I came back in this morning to find that ada2-4 are marked unavailable and ada1 is online.



Code:
[root@freenas] ~# gpart show
=>       63  488397105  ada0  MBR  (232G)
         63    1930257     1  freebsd  [active]  (942M)
    1930320         63        - free -  (31k)
    1930383    1930257     2  freebsd  (942M)
    3860640       3024     3  freebsd  (1.5M)
    3863664      41328     4  freebsd  (20M)
    3904992  484492176        - free -  (231G)

=>        34  1953525101  ada2  GPT  (931G) [CORRUPT]
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)

=>      0  1930257  ada0s1  BSD  (942M)
        0       16          - free -  (8.0k)
       16  1930241       1  !0  (942M)

=>        34  1953525101  ada3  GPT  (931G) [CORRUPT]
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)

=>        34  1953525101  ada4  GPT  (931G) [CORRUPT]
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)


glabel status

Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
                             ufs/FreeNASs3     N/A  ada0s3
                             ufs/FreeNASs4     N/A  ada0s4
gptid/0df5e718-5a11-11e2-9a01-0025902238e8     N/A  ada2p2
                            ufs/FreeNASs1a     N/A  ada0s1a
gptid/0d45ad62-5a11-11e2-9a01-0025902238e8     N/A  ada3p2
gptid/0da39302-5a11-11e2-9a01-0025902238e8     N/A  ada4p2


sysctl kern.disks

Code:
[root@freenas] ~# sysctl kern.disks
kern.disks: ada4 ada3 ada2 ada1 ada0
 

Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
I don't believe bad RAM would cause any of this.

ZFS simply cannot find any disks with the ZFS labels on them. Are you sure the drive order has not changed somehow and the wrong disks are attached to the FreeNAS VM?

Another thing I noticed: you have whole disks in ZFS, which is not the recommended way. Did you manually create that pool?

Paste:

# gpart show
# glabel status
# sysctl kern.disks

I have posted the command outputs for you.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Just an idea: can you post the model number and firmware version listed on the labels of your Seagate drives?

Right now I'm kind of thinking this may be a situation where you didn't do anything to the server, but you technically did. You moved it when you pulled it, and there may be a connection that has come loose since it was moved. Also, from extensive experience with electronics I can tell you that the most common time stuff breaks is when you turn it off and turn it back on. Of course, if it breaks when it shuts off, you generally won't know until you turn it on and it doesn't work. It's possible that you have drives that are intermittently failing, and now that you've turned them off and on you're finding out they're not in the best of condition.

As another option, you could take the FreeNAS installation disk and the 4 disks and try them in another machine. As long as the SATA controller is compatible, FreeNAS should just boot right up after you choose the USB stick as the boot device. Generally, onboard Intel controllers work perfectly, and since you only have 4 drives, most motherboards have at least 4 ports on an Intel controller.

Big picture, there's really not a lot of information to go on, and it sounds like it will simply take some plain old troubleshooting by you to determine the actual problem. Just be careful what commands you start running from the CLI. You definitely don't want to inadvertently do something that writes anything to the drives. I think that if you play around and experiment you'll figure out what is wrong and be able to get to your data. Depending on the failure mode, you may want to consider pricing out new drives in the very near future.
 

Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
Just an idea: can you post the model number and firmware version listed on the labels of your Seagate drives?

Right now I'm kind of thinking this may be a situation where you didn't do anything to the server, but you technically did. You moved it when you pulled it, and there may be a connection that has come loose since it was moved. Also, from extensive experience with electronics I can tell you that the most common time stuff breaks is when you turn it off and turn it back on. Of course, if it breaks when it shuts off, you generally won't know until you turn it on and it doesn't work. It's possible that you have drives that are intermittently failing, and now that you've turned them off and on you're finding out they're not in the best of condition.

As another option, you could take the FreeNAS installation disk and the 4 disks and try them in another machine. As long as the SATA controller is compatible, FreeNAS should just boot right up after you choose the USB stick as the boot device. Generally, onboard Intel controllers work perfectly, and since you only have 4 drives, most motherboards have at least 4 ports on an Intel controller.

Big picture, there's really not a lot of information to go on, and it sounds like it will simply take some plain old troubleshooting by you to determine the actual problem. Just be careful what commands you start running from the CLI. You definitely don't want to inadvertently do something that writes anything to the drives. I think that if you play around and experiment you'll figure out what is wrong and be able to get to your data. Depending on the failure mode, you may want to consider pricing out new drives in the very near future.



To be honest with you, I already have done this.
The drives that are failing are my 3 Seagates. They all started doing this at the same time; the drives themselves pass SMART, and they pass two different stress tests. The drives themselves are fine.

I already put the drives in a completely different machine and got the same issue.
I don't post on forums unless I feel that I have done everything I could.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
I have posted the command outputs for you.
What's the rest of the story? The FreeNAS 8 GUI doesn't use whole disks, so did you manually type that first post?

Did ada1 ever have any partitions on it?

I already put the drives in a completely different machine and got the same issue.
What machine were they in when you ran the commands above?

Save your current config, restore factory defaults (or install the latest release to a new USB stick), do not restore the config, and rerun the following:
Code:
zpool import

camcontrol devlist

glabel status

gpart show
 

Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
What's the rest of the story? The FreeNAS 8 GUI doesn't use whole disks, so did you manually type that first post?

The drives were blank when I made the volume, and I did it all in the GUI.
This is FreeNAS 8.3, and I simply created the volume, then created shares.
I can get you the output Monday, as the server is at my datacenter.
 

Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
Did ada1 ever have any partitions on it?

No, it did not.

What machine were they in when you ran the commands above?

The specs listed above. The original server.

Save your current config, restore factory defaults (or install latest release to a new usb stick), do not restore the config and rerun the following:
Code:
zpool import
camcontrol devlist
glabel status
gpart show


Alright. This is a CLEAN install of FreeNAS 8.3 x64 on the server (specs in my first post), installed on ada0, which is the 250GB WDC drive listed in my output:

zpool import
Code:
[root@freenas] ~# zpool import
   pool: zfs
     id: 331113494358268866
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
 config:

        zfs                       UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            3991037278626503309   UNAVAIL  cannot open
            14980719057705672504  UNAVAIL  cannot open
            69528833404146203     UNAVAIL  cannot open
            ada1                  ONLINE

camcontrol devlist
Code:
[root@freenas] ~# camcontrol devlist
<WDC WD2500JD-00HBB0 08.02D08>     at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD10EALX-009BA0 15.01H15>     at scbus1 target 0 lun 0 (pass1,ada1)
<ST31000524AS JC4B>                at scbus2 target 0 lun 0 (pass2,ada2)
<ST31000524AS JC4B>                at scbus3 target 0 lun 0 (pass3,ada3)
<ST31000524AS JC4B>                at scbus4 target 0 lun 0 (pass4,ada4)

glabel status
Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
                             ufs/FreeNASs3     N/A  ada0s3
                             ufs/FreeNASs4     N/A  ada0s4
gptid/0df5e718-5a11-11e2-9a01-0025902238e8     N/A  ada2p2
                            ufs/FreeNASs1a     N/A  ada0s1a
gptid/0d45ad62-5a11-11e2-9a01-0025902238e8     N/A  ada3p2
gptid/0da39302-5a11-11e2-9a01-0025902238e8     N/A  ada4p2

gpart show
Code:
[root@freenas] ~# gpart show
=>       63  488397105  ada0  MBR  (232G)
         63    1930257     1  freebsd  [active]  (942M)
    1930320         63        - free -  (31k)
    1930383    1930257     2  freebsd  (942M)
    3860640       3024     3  freebsd  (1.5M)
    3863664      41328     4  freebsd  (20M)
    3904992  484492176        - free -  (231G)

=>        34  1953525101  ada2  GPT  (931G) [CORRUPT]
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)

=>      0  1930257  ada0s1  BSD  (942M)
        0       16          - free -  (8.0k)
       16  1930241       1  !0  (942M)

=>        34  1953525101  ada3  GPT  (931G) [CORRUPT]
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)

=>        34  1953525101  ada4  GPT  (931G) [CORRUPT]
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949330703     2  freebsd-zfs  (929G)


Let me know if you need me to run any other commands.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
No, it did not.
The drives were blank when I made the volume, and I did it all in the GUI.
This is FreeNAS 8.3, and I simply created the volume, then created shares.
Then you didn't create this pool via the FreeNAS 8.3 GUI, unless you think you found some obscure bug.

Code:
gpart list ada4

zdb -l /dev/ada1

zdb -l /dev/ada3

zdb -l /dev/ada3p2


Sorry for the double post. The page broke for me and made me think it didn't post.
Just delete the double post.
 

Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
Then you didn't create this pool via the FreeNAS 8.3 GUI, unless you think you found some obscure bug.

1) All the drives were blank from the start.
2) I CREATED the ZFS RAIDZ1 pool in the FreeNAS 8.3 GUI.
3) I created the shares and started to store data on them.

Code:
gpart list ada4

zdb -l /dev/ada1

zdb -l /dev/ada3

zdb -l /dev/ada3p2

gpart list ada4
Code:
[root@freenas] ~# gpart list ada4
Geom name: ada4
modified: false
state: CORRUPT
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada4p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e0
   rawuuid: 0d912e13-5a11-11e2-9a01-0025902238e8
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada4p2
   Mediasize: 998057319936 (929G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   rawuuid: 0da39302-5a11-11e2-9a01-0025902238e8
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 998057319936
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 1953525134
   start: 4194432
Consumers:
1. Name: ada4
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Mode: r1w1e1

zdb -l /dev/ada1
Code:
[root@freenas] ~# zdb -l /dev/ada1
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 9766453488593238848
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 9766453488593238848
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 9766453488593238848
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 9766453488593238848
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34

zdb -l /dev/ada3
Code:
[root@freenas] ~# zdb -l /dev/ada3
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 69528833404146203
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 69528833404146203
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 69528833404146203
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 28
    name: 'zfs'
    state: 0
    txg: 190705
    pool_guid: 331113494358268866
    hostid: 3549322712
    hostname: 'freenas.local'
    top_guid: 16107481134572789769
    guid: 69528833404146203
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 16107481134572789769
        nparity: 1
        metaslab_array: 23
        metaslab_shift: 35
        ashift: 9
        asize: 4000799784960
        is_log: 0
        children[0]:
            type: 'disk'
            id: 0
            guid: 3991037278626503309
            path: '/dev/ada3'
            phys_path: '/dev/ada3'
            whole_disk: 0
            DTL: 37
        children[1]:
            type: 'disk'
            id: 1
            guid: 14980719057705672504
            path: '/dev/ada0'
            phys_path: '/dev/ada0'
            whole_disk: 0
            DTL: 36
        children[2]:
            type: 'disk'
            id: 2
            guid: 69528833404146203
            path: '/dev/ada2'
            phys_path: '/dev/ada2'
            whole_disk: 0
            DTL: 35
        children[3]:
            type: 'disk'
            id: 3
            guid: 9766453488593238848
            path: '/dev/ada1'
            phys_path: '/dev/ada1'
            whole_disk: 0
            DTL: 34

zdb -l /dev/ada3p2
Code:
[root@freenas] ~# zdb -l /dev/ada3p2
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
Before you shut down, post the output of:
Code:
swapctl -ls


2) I CREATED the ZFS RAIDZ1 pool in the FreeNAS 8.3 GUI.
Congrats on the bug then.

According to at least two of your drives, ZFS was always using the whole disks and not the partitions. After running the above command, shut down and reorder the disks to how they last successfully appeared in the pool, which is something like:
Code:
Keep ada1 where it is.

Swap ada3 & ada2.

Swap ada4 & ada0.


Then boot up mfsBSD or the latest live FreeBSD BETA.

Code:
zpool import

zdb -l /dev/ada0
zdb -l /dev/ada0p2

zdb -l /dev/ada2
zdb -l /dev/ada2p2

zdb -l /dev/ada3
zdb -l /dev/ada3p2
 

Xylex

Dabbler
Joined
Jan 30, 2013
Messages
11
I am jumping the gun with my response.

I did a zpool import on an mfsBSD live CD and found that the pool was healthy.
My stomach churned when I found I was able to browse my pool.

I am currently copying everything off of the pool.
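Roughly like this, assuming the pool mounted at /zfs after import, with /mnt/backup as a placeholder for wherever the copy is going:
Code:
# copy everything off, preserving permissions and timestamps
tar -cf - -C /zfs . | tar -xf - -C /mnt/backup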

Is there any more information I can post that will help identify and resolve this potential bug?
What exactly is happening that shouldn't be? It seems to relate to the manner in which I created the volume?
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
I am jumping the gun with my response.
I suppose I'll jump the gun with mine then. :eek:

I did a zpool import on an mfsBSD live CD and found that the pool was healthy.
Excellent. :D Given that one of the other drives showed online earlier, hope should certainly have been alive.

I am currently copying everything off of the pool.
+1 Would have suggested this very thing.

Is there any more information I can post that will help identify and resolve this potential bug?
I have been doubtful that there is a bug. We'd need the exact FreeNAS version, build, etc., and the steps you took to create the pool in excruciating detail, and then to successfully reproduce the problem.

What exactly is happening that shouldn't be? It seems to relate to the manner in which I created the volume?
I don't suppose you ran this beforehand:
Code:
swapctl -ls
Moving all the drives around should be completely unnecessary; FreeBSD should simply taste the disks again and find out which is which. Furthermore, the two sets of labels you posted appeared intact. My suspicion is that FreeNAS enabled swap on the three partitioned drives, which held the partitions open and prevented ZFS from being able to properly open the drives itself.
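If that's what happened, something along these lines should release the swap partitions and let a plain import work without shuffling drives at all. Untested here, and the device names are just taken from your gpart output, so go by whatever swapctl actually reports (it may list .eli devices if encrypted swap is in use):
Code:
swapctl -ls           # list the active swap devices first
swapoff /dev/ada2p1   # release each one that sits on a pool member
swapoff /dev/ada3p1
swapoff /dev/ada4p1
zpool import zfs      # then retry the import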
 