Help with zpool import: failed to create mountpoint - FreeNAS 8.3.2


aarjay (Dabbler, joined May 28, 2016, 19 messages)
Hi Fellow Forum Members,

I am hoping that someone will be able to guide me here.

Problem: The FreeNAS UI reported the ZFS pool status as 'UNKNOWN', so I detached the pool through the UI. An 'Auto Import' through the UI then failed, and a retry from the CLI reported a 'failed to create mountpoint' error. I am now unable to see the ZFS pool in the UI, even though zpool status reports it as ONLINE.

Hardware: HP MicroServer N36L, 8 GB ECC RAM, FreeNAS 8.3.2

Chronological events/steps
1. Noticed that one of the ZFS pools was unavailable. I suspect a power or SATA cable may have come loose, though I am not sure.
2. Shut down the FreeNAS box, re-seated all the cables and booted it back up, hoping this would fix the problem.
3. That sinking feeling: my rsync jobs backing up the failed pool to a remote location had been failing for months due to an IP address mismatch. (Life has been busy with the arrival of a little one, and I simply failed to notice the rsync error logs. Kicking myself here, really.) I read the ZFS manual and decided to detach and auto-import the volume after convincing myself that all the HDDs were online.
4. Detached the pool from the UI.
5. From the UI, auto import of the ZFS pool failed without much detail about the error.
6. From the CLI, zpool import failed, citing a FAULTED state and suggesting use of the -f switch.
7. Tried zpool import -fF from the CLI, which resulted in 'failed to create mountpoint' (commands sketched just below).
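For reference, the CLI commands in steps 6 and 7 were roughly the following (full output is in my follow-up posts):
Code:
zpool import                     # listed storageTank2 as FAULTED and suggested the -f flag
zpool import -fF storageTank2    # imported, but ended with: failed to create mountpoint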

At this stage I decided to stop and not proceed any further, as desperation might lead to mistakes. I hope the data is recoverable. Do I have any hope?

Please help
Many thanks
 

aarjay
zpool status output from when I found the issue (there are two RAIDZ1 pools; storageTank2 is the problem one):
Code:
[root@freenas] ~# zpool status storageTank1
  pool: storageTank1
state: ONLINE
  scan: scrub repaired 0 in 3h17m with 0 errors on Sun May  8 03:17:20 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank1                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/81afb80c-f7b8-11e0-8b4b-984be1087f8d  ONLINE       0     0     0
            gptid/826ced2d-f7b8-11e0-8b4b-984be1087f8d  ONLINE       0     0     0
            gptid/832cc100-f7b8-11e0-8b4b-984be1087f8d  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~# zpool status storageTank2
cannot open 'storageTank2': no such pool
 

aarjay
Output of camcontrol devlist (please note: the 3 x Hitachi drives belong to the problem pool, i.e. storageTank2):
Code:
<ST2000DL003-9VT166 CC32>          at scbus0 target 0 lun 0 (pass0,ada0)
<ST2000DL003-9VT166 CC32>          at scbus1 target 0 lun 0 (pass1,ada1)
<ST2000DL003-9VT166 CC32>          at scbus2 target 0 lun 0 (pass2,ada2)
<Hitachi HDS5C4040ALE630 MPAOA3B0>  at scbus3 target 0 lun 0 (pass3,ada3)
<Hitachi HDS5C4040ALE630 MPAOA3B0>  at scbus4 target 0 lun 0 (pass4,ada4)
<Hitachi HDS5C4040ALE630 MPAOA3B0>  at scbus5 target 0 lun 0 (pass5,ada5)
< Patriot Memory PMAP>             at scbus6 target 0 lun 0 (pass6,da0)
 

aarjay
Output of gpart show:
Code:
=>        34  3907029101  ada0  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada1  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  3907029101  ada2  GPT  (1.8T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

=>        34  7814037101  ada3  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada4  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada5  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>      63  15646657  da0  MBR  (7.5G)
        63   1930257    1  freebsd  [active]  (942M)
   1930320        63       - free -  (31k)
   1930383   1930257    2  freebsd  (942M)
   3860640      3024    3  freebsd  (1.5M)
   3863664     41328    4  freebsd  (20M)
   3904992  11741728       - free -  (5.6G)

=>      0  1930257  da0s1  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)

=>      0  1930257  da0s2  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)
 

aarjay
Output of zpool import from the command line:

Code:
[root@freenas] ~# zpool import
   pool: storageTank2
     id: 2161851981665123156
  state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

        storageTank2                                    FAULTED  corrupted data
          raidz1-0                                      ONLINE
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE
 

BigDave (FreeNAS Enthusiast, joined Oct 6, 2013, 2,479 messages)
Please give details on how your two volumes, containing 3 drives each, are connected to the machine.
 

aarjay
Output of zpool import with the -fF switches:

Code:
[root@freenas] ~# zpool import -fF storageTank2
Pool storageTank2 returned to its state as of Sun May 22 21:42:03 2016.
Discarded approximately 5 seconds of transactions.
cannot mount '/storageTank2': failed to create mountpoint
 

aarjay
Finally, output of zpool status after the zpool import -fF:
Code:
  pool: storageTank2
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 28 22:46:57 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     4
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0

errors: No known data errors
 

aarjay
Please give details on how your two volumes, containing 3 drives each, are connected to the machine.
Hi Bigdave,
Thanks for your post.

Pool #1
3 x Seagate drives are seated in the HP drive cage (non hot-swap).

Pool #2
1 x Hitachi is seated in the HP drive cage (non hot-swap); the cage has a capacity of 4 drives.
1 x Hitachi is connected directly to a SATA port on the motherboard.
1 x Hitachi is connected directly to the eSATA port using an eSATA-to-SATA cable.
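In case it helps, I believe the gptid labels in the zpool output can be mapped back to the physical disks with glabel:
Code:
glabel status | grep gptid    # shows which adaXp2 partition backs each gptid label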
 

Ericloewe (Server Wrangler, Moderator, joined Feb 15, 2014, 20,194 messages)
status: The pool metadata is corrupted.
It's toast. Unfortunately, there's not much that can be done if metadata corruption happens...

However, it seems odd that it should happen on an apparently healthy pool. Anything interesting happen lately?
 

Robert Trevellyan (Pony Wrangler, joined May 16, 2014, 3,778 messages)
Do I have any hope?
Maybe.
Discarded approximately 5 seconds of transactions.
cannot mount '/storageTank2': failed to create mountpoint
At this point, there's a chance that the corrupt metadata have been discarded. I would try exporting from the CLI and importing from the GUI.
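Roughly, from the CLI (sketching from memory, so double-check before running):
Code:
# cleanly export the pool that was force-imported on the CLI
zpool export storageTank2
Then use the GUI's auto import. I believe the CLI import failed to create /storageTank2 because the FreeNAS root filesystem is read-only; the GUI imports pools under the /mnt altroot, which you can also do by hand with zpool import -R /mnt storageTank2.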
 

Ericloewe
Finally, output of zpool status after the zpool import -fF:
Code:
  pool: storageTank2
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 28 22:46:57 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     4
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0

errors: No known data errors
What on Earth...?

At this point, there's a chance that the corrupt metadata have been discarded. I would try exporting from the CLI and importing from the GUI.
Thing is, I don't see how it would get corrupted in the first place, without a sudden event like a power failure in the middle of a metadata write. Even then, ZFS should just happily grab one of the plentiful copies and fix things up.

I think things are nasty under the hood, but I don't have the expertise to make any intelligent suggestions on how to check if everything's ok, beyond the obvious scrub and SMART tests.
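By "the obvious" I mean something along these lines (adjust device names to your system; ada3 through ada5 look like the Hitachis from your camcontrol output):
Code:
# scrub the pool and watch for new read/checksum errors
zpool scrub storageTank2
zpool status -v storageTank2

# long SMART self-tests on the suspect drives, then read the results
smartctl -t long /dev/ada3
smartctl -t long /dev/ada4
smartctl -t long /dev/ada5
smartctl -a /dev/ada3    # repeat for ada4 and ada5 once the tests complete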
 

aarjay
It's toast. Unfortunately, there's not much that can be done if metadata corruption happens...

However, it seems odd that it should happen on an apparently healthy pool. Anything interesting happen lately?
Thanks Ericloewe. Life has been busy lately, and I noticed I haven't really been receiving those daily emails from FreeNAS in my inbox for a while now, so I'm not sure when the pool went into a degraded state. I feel bad about the whole situation. I understand the odds are stacked against it, but I am hoping there is some chance I can get the pool mounted, even if read-only.
 

Robert Trevellyan
I think things are nasty under the hood
I think there's all kinds of nasty here:
1 x Hitachi is seated in the HP drive cage (non hot-swap); the cage has a capacity of 4 drives.
1 x Hitachi is connected directly to a SATA port on the motherboard.
1 x Hitachi is connected directly to the eSATA port using an eSATA-to-SATA cable.
I also think the OP has little to lose at this point.

@aarjay, if by some chance you are able to mount the pool and get it backed up, it's time to rebuild your system the right way.
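If it does import, I'd copy everything off before touching anything else. Assuming your ZFS version accepts read-only imports, a rough sketch (the destination path is just a placeholder):
Code:
zpool export storageTank2
zpool import -o readonly=on -R /mnt storageTank2
rsync -a /mnt/storageTank2/ /path/to/backup/destination/
If readonly=on isn't accepted on 8.3.2, import normally and copy the data off immediately.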
 

aarjay
What on Earth...?


without a sudden event like a power failure in the middle of a metadata write.

^ This is what I suspect could have happened. I maxed out the capacity of the HP MicroServer to 6 disks, and two of those disks are powered over a Molex connection.
 

Ericloewe
I think there's all kind of nasty here:
Yeah, but it doesn't overlap with the issue. The other vdev, which would worry me more, is fine (apparently!), and ZFS is perfectly capable of gracefully handling something like an excessively long SATA cable (the eSATA one) without just going "The pool metadata is corrupted."
 

Ericloewe
^ This is what I suspect could have happened. I maxed out the capacity of the HP MicroServer to 6 disks, and two of those disks are powered over a Molex connection.
Now we're getting somewhere. Can we get pictures of your setup, to have a clearer idea of what we're dealing with?
 

BigDave
First off, I would remove all three drives of the functioning volume from this machine ASAP, in case this has been caused by a hardware failure. Once tank1 is safe, connect the tank2 volume to the machine without using the eSATA and motherboard SATA ports. My hope is that you get lucky, but I'm not holding out much hope. Good luck!
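One thing I'd add: before pulling the tank1 drives, export that pool cleanly so it can be re-imported later, something like:
Code:
zpool export storageTank1
(Detaching the volume from the GUI should amount to the same thing, as far as I know.)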
 

aarjay
@aarjay, if by some chance you are able to mount the pool and get it backed up, it's time to rebuild your system the right way.
Robert, I agree. In hindsight, I shouldn't have pushed for a setup like this. My focus now is on whether I can somehow get the pool back up so I can recover the data. I plan to build a new system with more thought given to its robustness.
 

Robert Trevellyan