Disk UNAVAIL after GPT recovery; cannot import pool


Lage

Hi!

I have a problem with my FreeNAS system. It arose when I was about to replace a mirrored pair of disks with another pair. I have searched the forum and the rest of the web, but haven't quite found anybody who has had the same problem.

Setup:
  • FreeNAS-8.3.1-RELEASE-p2-x64 (r12686+b770da6_dirty), on a USB stick.
  • 2 * 2 TB disks, mirrored
  • 2 * 500 GB disks, mirrored
  • One zpool in the system, consisting of both vdevs.
Sequence of events:
  • Physically removed one of the 500 GB drives.
  • Physically attached another drive. Started machine.
  • (gpart new drive as desired)
  • zpool replace <pool> <old_drive> <new_drive>
  • Resilver finished.
So far, so good. Pool up and running, mirrored and all. Then:
  • zpool detach <pool> <old_drive>
  • Zpool in degraded, but functioning, state (as expected).
  • Turned machine off and physically removed remaining 500GB drive.
  • Started machine.
What happens now is that the new drive is "UNAVAIL", and gpart shows the GPT as corrupted. After googling a bit, I recovered the GPT using gpart recover. I have now re-attached the last remaining 500 GB drive, thinking it may be able to help out somehow (it should still contain the exact same data as the newly attached drive).
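For reference, the command sequence looked roughly like this. This is only a sketch with assumed device names (brumund is the pool, ada2 the new drive), not a transcript of exactly what I typed:

Code:
# Partition the new drive roughly the way FreeNAS lays out data disks (swap + ZFS)
gpart create -s gpt ada2
gpart add -b 128 -s 2G -t freebsd-swap ada2
gpart add -t freebsd-zfs ada2
# Replace the (already removed) old member with the new partition
zpool replace brumund <old_drive> gptid/<new_partition_rawuuid>
# Later, when the GPT was reported corrupt after the reboot:
gpart status ada2     # partitions show CORRUPT when a GPT header/table is damaged
gpart recover ada2    # rebuilds the damaged GPT metadata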

zpool import won't let me import the pool; however, zpool import -V shows me the following:

Code:
zpool import -V
  pool: brumund
    id: 7591759659210400881
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
  see: http://www.sun.com/msg/ZFS-8000-3C
config:
 
        brumund                                        UNAVAIL  insufficient replicas
          mirror-0                                      ONLINE
            gptid/1df5b952-9149-11e1-813a-00261802e7bd  ONLINE
            gptid/263a6d98-9149-11e1-813a-00261802e7bd  ONLINE
          12093535385621290284                          UNAVAIL  cannot open

As far as I can tell, 12093535385621290284 should refer to the new drive.

If it's any help, here's the output from gpart status:
Code:
gpart status
  Name  Status  Components
ada0p1      OK  ada0
ada0p2      OK  ada0
ada1p1      OK  ada1
ada1p2      OK  ada1
ada2p1      OK  ada2
ada2p2      OK  ada2
ada3p1      OK  ada3
ada3p2      OK  ada3
 da0s1      OK  da0
 da0s2      OK  da0
 da0s3      OK  da0
 da0s4      OK  da0
da0s1a      OK  da0s1
da0s2a      OK  da0s2

and gpart list gives me:
Code:
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: 4470865a-9147-11e1-89df-00261802e7bd
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada0p2
   Mediasize: 1998251367936 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 263a6d98-9149-11e1-813a-00261802e7bd
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251367936
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029134
   start: 4194432
Consumers:
1. Name: ada0
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 3907029134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: c985c7c1-9148-11e1-813a-00261802e7bd
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada1p2
   Mediasize: 1998251367936 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 1df5b952-9149-11e1-813a-00261802e7bd
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251367936
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029134
   start: 4194432
Consumers:
1. Name: ada1
   Mediasize: 2000398934016 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   rawuuid: 9f4a4e10-ff4c-11e2-bced-00261802e7bd
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada2p2
   Mediasize: 996432412672 (928G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 5c4b488f-ff4d-11e2-bced-00261802e7bd
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 996432412672
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 1950351487
   start: 4194432
Consumers:
1. Name: ada2
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: ada3
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 976773134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   rawuuid: 0e5bdc3d-f1d1-11e1-81c0-00261802e7bd
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada3p2
   Mediasize: 497960295936 (463G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   rawuuid: 0e6c87c5-f1d1-11e1-81c0-00261802e7bd
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 497960295936
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 976773134
   start: 4194432
Consumers:
1. Name: ada3
   Mediasize: 500107862016 (465G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7821311
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: da0s1
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r1w0e1
   attrib: active
   rawtype: 165
   length: 988291584
   offset: 32256
   type: freebsd
   index: 1
   end: 1930319
   start: 63
2. Name: da0s2
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 988356096
   Mode: r0w0e0
   rawtype: 165
   length: 988291584
   offset: 988356096
   type: freebsd
   index: 2
   end: 3860639
   start: 1930383
3. Name: da0s3
   Mediasize: 1548288 (1.5M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1976647680
   Mode: r0w0e0
   rawtype: 165
   length: 1548288
   offset: 1976647680
   type: freebsd
   index: 3
   end: 3863663
   start: 3860640
4. Name: da0s4
   Mediasize: 21159936 (20M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1978195968
   Mode: r1w1e2
   rawtype: 165
   length: 21159936
   offset: 1978195968
   type: freebsd
   index: 4
   end: 3904991
   start: 3863664
Consumers:
1. Name: da0
   Mediasize: 4004511744 (3.7G)
   Sectorsize: 512
   Mode: r2w1e4

Geom name: da0s1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 1930256
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da0s1a
   Mediasize: 988283392 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 40448
   Mode: r1w0e1
   rawtype: 0
   length: 988283392
   offset: 8192
   type: !0
   index: 1
   end: 1930256
   start: 16
Consumers:
1. Name: da0s1
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r1w0e1

Geom name: da0s2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 1930256
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da0s2a
   Mediasize: 988283392 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 988364288
   Mode: r0w0e0
   rawtype: 0
   length: 988283392
   offset: 8192
   type: !0
   index: 1
   end: 1930256
   start: 16
Consumers:
1. Name: da0s2
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 988356096
   Mode: r0w0e0


Please help!
 

cyberjock

Any reason you aren't following the instructions in the FreeNAS manual for disk replacement? You wouldn't have had this problem....
 

Lage

The reason might be that I have done this before; it worked fine as far as I can remember, and I thought I knew what I was doing.

The sad part is that knowing, or not knowing, why I didn't follow the instructions doesn't change the fact that it's not working.
 

cyberjock

Well, a typo in a command can mess things up enough that everything looks fine until you start digging very deep. Your data is probably safe, just inaccessible at the moment. If your zpool isn't v5000 then you should be able to go back to 8.3.1 and your pool should automount. I just woke up, but I'm pretty sure you aren't the first person to have done stuff from the command line in 9.1 where the pool "should" mount but doesn't. I just don't have a fix, though.

This is yet another example of "if the GUI can do it... you should do it from the GUI".
 

Lage

Right!
I would like to emphasize, though, that this was indeed done on an 8.3.1 system.
I also presume that the data is safe (in some sense), but inaccessible, as you say.
What strikes me as most weird is that what worked before a reboot does not work after reboot.
 

cyberjock

What strikes me as most weird is that what worked before a reboot does not work after reboot.

I agree 100%. I can only presume some change was made (either by your actions or by something automated) that affected mounting the zpool the next time around. For example, if you delete the boot loader from the hard drive you booted from, the computer will keep working just fine... until you reboot.

I keep thinking you did this in 9.1 for some reason. If you did that from 8.3.1 I'm even more confused. /shrug
 

Lage

Unless FreeNAS has somehow updated itself without my knowledge, it can't possibly be 9.1, since I haven't even tried to touch that yet.

Let's see: Could it be a good idea to do a new install, or perhaps go back to an older backup, of the FreeNAS system that is on the USB stick, in order to re-import the zpool? Should I try exporting the pool first? I haven't quite got the hang of the whole idea of exporting pools, to be honest. Then there are all these possible flags...
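To make sure I understand the mechanics, the export/import cycle itself seems to be just this (a minimal sketch, and it obviously presumes the pool can be imported in the first place, which is exactly what isn't working here):

Code:
zpool export brumund     # cleanly release the pool from the running system
zpool import             # list pools that are visible but not imported
zpool import brumund     # import by name (the numeric pool id also works)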

For example, if you delete the boot loader from a hard drive you booted from the computer will work just fine...until you reboot.
Now that you mention it, the "new" disk in this case did have GRUB installed before, when it was in another computer. I didn't think of specifically removing that. This feels very much like a n00b question, but could that be the problem here - even after having re-partitioned the whole disk (see below)?
Code:
[root@freenas] ~# gpart show ada2
=>        34  1953525101  ada2  GPT  (931G)
          34          94        - free -  (47k)
        128    4194304    1  freebsd-swap  (2.0G)
    4194432  1946157056    2  freebsd-zfs  (928G)
  1950351488    3173647        - free -  (1.5G)

I'm going to hook a monitor up to the NAS to see how it behaves at startup, perhaps fiddle with the boot order. It should not have booted from the HDD, since FreeNAS is on a USB stick and that obviously loaded properly.
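As for the GRUB leftovers, I suppose a harmless, read-only way to check would be to dump the first sector of the disk and look for boot-code remnants; ada2 is just my guess for the right device:

Code:
# Read sector 0 only (no writes), then eyeball it for leftover boot code
dd if=/dev/ada2 bs=512 count=1 | hexdump -C | head -n 20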
 

cyberjock

You'll know if you boot from the wrong device. When FreeNAS uses a data disk it creates a bootloader that gives a message on your screen if you try to boot from it. It's something to the effect of "This is a FreeNAS data disk and does not have an OS."
 

Lage

Good point. I was thinking along the lines of the system trying to boot from a data disk or something, before changing its mind... Anyway, I verified that it boots "as usual".

Looking around the forum and elsewhere, it sure looks very much like the problem many people upgrading to 9.1 have. I just wish there was a way to attach/add disks to a zpool without importing it first, since this feels very much like a catch-22 by now. If that were possible, ada3 (the old disk still containing the missing data) might be able to provide some help, whereas it seems entirely out of the question for now. I don't know how many posts I've read this evening. Night-time now. Thanks for the help so far.
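For completeness, these are the import variants I keep seeing suggested. I'm listing them as hedged guesses rather than a known fix; with the pool reporting insufficient replicas they may all fail the same way, and not every flag may be supported on 8.3.1:

Code:
zpool import -f brumund                  # force the import even if the pool looks "in use"
zpool import -F brumund                  # try rewinding to an earlier transaction group
zpool import -o readonly=on brumund      # read-only import, to avoid writing to a fragile pool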
 

cyberjock

I'm sure I missed you before bed, but your zpool is a mirror, right?

Well, I just noticed something. Your "mirror" also has a stripe. Here's your post for zpool import -V...

Code:
zpool import -V
  pool: brumund
    id: 7591759659210400881
  state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
  see: http://www.sun.com/msg/ZFS-8000-3C
config:
 
        brumund                                        UNAVAIL  insufficient replicas
          mirror-0                                      ONLINE
            gptid/1df5b952-9149-11e1-813a-00261802e7bd  ONLINE
            gptid/263a6d98-9149-11e1-813a-00261802e7bd  ONLINE
          12093535385621290284                          UNAVAIL  cannot open


I just created a 2 disk mirror, then offlined a disk. Here's my output...


Code:
[root@freenas] ~# zpool import -V
  pool: test
    id: 16502435969112585818
  state: DEGRADED
status: One or more devices are offlined.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
config:
 
        test                                            DEGRADED
          mirror-0                                      DEGRADED
            gptid/4d57f46f-ffbe-11e2-bbdb-080027c5a455  ONLINE
            15438284375440495516                        OFFLINE
[root@freenas] ~#


Notice the formatting? Your device that was 12093535385621290284 is NOT part of a mirror. It's actually a striped vdev. That's why your zpool isn't mounting. Now, it would show up as a stripe if you had a 2-disk mirror, removed a disk, and the remaining one happened to go bad. It definitely wouldn't be the first time a disk went bad right when you tried to use it.

Remember how you had to repair the GPT? You didn't specifically mention this, but I bet the disk whose partition table you had to repair was also your new disk (and happened to be your stripe).

In essence, you removed redundancy from the vdev when you did a disk replacement. At that point you had a single point of failure. If that one disk went bad you were screwed. It looks like that is exactly what may have happened to you.

The proper way to have done the upgrade and kept redundancy was to add a new disk to the vdev, then remove a disk, then add a disk, then remove a disk. Doing it that way ensured you always had redundancy, because during a resilver you would always have had 2 full copies of your data, or 2 copies plus a resilver in progress. RAID5 was proclaimed dead in 2009 because unrecoverable error rates of disks made it possible (and in some cases virtually a certainty) that if you lost 1 disk you would not be able to rebuild the array (for ZFS, resilver the new disk) before you'd encounter an error. The link is in my signature if you want to read up on it.
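In command form, that safer sequence is roughly the following. It's a sketch with placeholder names, and it assumes a spare port so the new disk can be attached before anything is removed:

Code:
# Attach the new disk alongside an existing member, turning mirror-1 into a 3-way mirror
zpool attach brumund gptid/<existing_member> gptid/<new_disk_partition>
zpool status brumund     # wait here until the resilver completes
# Only after the resilver is done, drop the disk you are retiring
zpool detach brumund gptid/<old_disk_partition>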

Now, as for why the GPT went bad, that's a bit harder to determine. Can you post the SMART output (in CODE tags, please) for the disk you had to do the GPT repair on, and for the "new" disk you added that completed the resilver, if they aren't the same disk? Normally it's smartctl -q noserial -a /dev/adaX. This will at least tell us whether the disk is showing signs of failing. If it doesn't indicate the disk is having problems, then I think it's most likely you hit some kind of bug in FreeNAS (possibly related to doing it from the command line), or you may have fat-fingered some command and done something unintentional. I'll be honest, though: if you had done this from the command line and not had this issue, the worst thing that could have happened is that the GUI wouldn't have matched your actual configuration, and you'd have been forced to detach and then auto-import the zpool to make the GUI behave properly.

Can you post your hardware specs for the server too?
 

Lage

Wow, that's an extensive response. I will try to answer and clarify as best I can.

1. It is correct that the disk where the GPT had to be fixed/recovered is the same disk as my new one (referred to as "ada2-new" below).
2. The pool used to be a 2-way mirror plus another 2-way mirror, striped; each mirror making up a vdev, if I'm not mistaken? I notice the difference in the formatting you provided. It still doesn't explain why the disk is "UNAVAIL", though, as should be clear from my description of events below. It may be worth pointing out that I did the gpart recover after the disk first showed as UNAVAIL.
3. I did S.M.A.R.T. tests on all disks before beginning, both the 4 in the zpool and the 2 that were going to replace the old ones. All passed. Will post results from the new disk below.

I will try my best to explain how the new disk used to be part of a mirror, using a kind of "pseudo-notation" for how things were done, step by step. The whole idea was to replace both replicas in mirror-1 with new disks. Part of the problem is that the motherboard only has 4 SATA ports, or else I would obviously have connected the third disk to the mirror before removing any. In this case, I lost redundancy for a while, and that was a risk I took. (It would obviously have been a good idea to have spare SATA connectors, but that's unfortunately not the case.)
It should be clear from my description below, though, that it was not the lack of redundancy that caused the problem, or at least it doesn't look like it to me, but I'm open to a second opinion!

Oh, and I did indeed read your post on RAID5 earlier on yesterday. ;-)

So, I began with:
Code:
  pool: brumund
  state: HEALTHY
config:
 
        brumund                                        HEALTHY
          mirror-0                                      ONLINE
            <ada0>                                      ONLINE
            <ada1>                                      ONLINE
          mirror-1                                      ONLINE
            <ada2-old>                                  ONLINE
            <ada3-old>                                  ONLINE

I.e., two striped mirrors, much like a RAID 1+0, if I'm not mistaken.
I removed ada2-old, such that:
Code:
 
  pool: brumund
  state: DEGRADED
config:
 
        brumund                                        DEGRADED
          mirror-0                                      ONLINE
            <ada0>                                      ONLINE
            <ada1>                                      ONLINE
          mirror-1                                      DEGRADED
            <ada2-old>                                  UNAVAIL
            <ada3-old>                                  ONLINE

Connected the new disk at ada2, such that
Code:
  pool: brumund
  state: DEGRADED
config:
 
        brumund                                        DEGRADED
          mirror-0                                      ONLINE
            <ada0>                                      ONLINE
            <ada1>                                      ONLINE
          mirror-1                                      DEGRADED
            <ada2-new>                                  (?) [resilvering]
            <ada3-old>                                  ONLINE

Then, after resilvering finished:
Code:
  pool: brumund
  state: HEALTHY
config:
 
        brumund                                        ONLINE
          mirror-0                                      ONLINE
            <ada0>                                      ONLINE
            <ada1>                                      ONLINE
          mirror-1                                      ONLINE
            <ada2-new>                                  ONLINE
            <ada3-old>                                  ONLINE

So far, so good! I believe that's when I made the worst mistake. I offlined ada3-old before a reboot, in order to be able to attach the other new disk. Hence I had:
Code:
  pool: brumund
  state: DEGRADED
config:
 
        brumund                                        DEGRADED
          mirror-0                                      ONLINE
            <ada0>                                      ONLINE
            <ada1>                                      ONLINE
          mirror-1                                      DEGRADED
            <ada2-new>                                  ONLINE

Then I rebooted. And since then, FreeNAS won't accept ada2-new as a valid part of mirror-1. (I may be wrong here; it may be that FreeNAS no longer considered ada2-new to be part of a mirror at all, but rather a single-disk vdev.)

This is becoming a long post...

S.M.A.R.T. output for the "new" disk, which is the same as the one where I had to fix GPT, alias "ada2-new" above:
Code:
=== START OF INFORMATION SECTION ===
Model Family:    Western Digital Caviar Green (Adv. Format)
Device Model:    WDC WD10EARS-00MVWB0
Firmware Version: 51.0AB51
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:  8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Thu Aug  8 13:14:26 2013 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
 
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
 
General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (  0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (19200) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (  2) minutes.
Extended self-test routine
recommended polling time:        ( 188) minutes.
Conveyance self-test routine
recommended polling time:        (  5) minutes.
SCT capabilities:              (0x3035) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.
 
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate    0x002f  200  200  051    Pre-fail  Always      -      0
  3 Spin_Up_Time            0x0027  253  164  021    Pre-fail  Always      -      2066
  4 Start_Stop_Count        0x0032  100  100  000    Old_age  Always      -      732
  5 Reallocated_Sector_Ct  0x0033  200  200  140    Pre-fail  Always      -      0
  7 Seek_Error_Rate        0x002e  200  200  000    Old_age  Always      -      0
  9 Power_On_Hours          0x0032  097  097  000    Old_age  Always      -      2236
10 Spin_Retry_Count        0x0032  100  100  000    Old_age  Always      -      0
11 Calibration_Retry_Count 0x0032  100  100  000    Old_age  Always      -      0
12 Power_Cycle_Count      0x0032  100  100  000    Old_age  Always      -      512
192 Power-Off_Retract_Count 0x0032  200  200  000    Old_age  Always      -      80
193 Load_Cycle_Count        0x0032  178  178  000    Old_age  Always      -      67409
194 Temperature_Celsius    0x0022  121  105  000    Old_age  Always      -      29
196 Reallocated_Event_Count 0x0032  200  200  000    Old_age  Always      -      0
197 Current_Pending_Sector  0x0032  200  200  000    Old_age  Always      -      0
198 Offline_Uncorrectable  0x0030  200  200  000    Old_age  Offline      -      0
199 UDMA_CRC_Error_Count    0x0032  200  200  000    Old_age  Always      -      0
200 Multi_Zone_Error_Rate  0x0008  200  200  000    Old_age  Offline      -      0
 
SMART Error Log Version: 1
No Errors Logged
 
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error      00%      1415        -
# 2  Short offline      Completed without error      00%      1402        -
# 3  Short offline      Aborted by host              90%      1402        -
# 4  Extended offline    Aborted by host              90%      1402        -
 
SMART Selective self-test log data structure revision number 1
SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


Last, but not least, hardware specs:
  • FreeNAS-8.3.1-RELEASE-p2-x64 (r12686+b770da6_dirty)
  • AMD Athlon(tm) Dual Core Processor 4850e
  • Motherboard: ASUS M2N68-AM PLUS
  • 2 * Kingston 2048MB DDR2 PC2-6400 800MHz (KVR800D2N5/2G)
 

Lage

Then I had a thought this morning... It may be a long shot, though.

The way I see it, ada2-new and ada3-old should each contain the proper data for mirror-1. Would there be any chance that, if I created a new pool using e.g. ada0 and ada3-old, these two together would still contain my data, being half of a mirror each? Or does creating a new zpool destroy all chances of recovering the data for good?

If only there was a way to re-attach ada3 to the zpool...!
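One read-only thing I could still check, if I understand zdb correctly, is whether ada3 still carries intact ZFS labels for the pool; ada3p2 is assumed from the gpart output earlier:

Code:
zdb -l /dev/ada3p2     # dumps the ZFS labels; look for the pool name "brumund" and a matching pool_guid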
 

Lage

Update:
I couldn't come up with any way of "repairing" my old pool.

I took a last shot at restoring the FreeNAS configuration backup I had made (through the GUI) before fiddling with the pool, and hooked up only the old disk from mirror-1 ("ada3-old" above), hoping it would still be seen as a valid part of the pool. The pool was indeed there and showed the three disks (mirror-0 plus the single disk from mirror-1), but the disk was deemed invalid ("corrupted data"), so I finally gave up. Good thing I have a lot of my stuff backed up with an online service. It will just take me a week or two to re-download...
 