Help with zpool import: failed to create mountpoint - FreeNAS 8.3.2

Status
Not open for further replies.

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

aarjay

Dabbler
Joined
May 28, 2016
Messages
19
First off, I would remove all three drives (of the functioning volume) from this machine ASAP, in case this has been caused by a hardware failure. Once tank1 is safe, connect the tank2 volume to the machine without using the eSATA and mobo SATA ports. My hope is you get lucky, but I'm not holding out much hope. Good luck!
Hi BigDave,
Question: if I remove the 3x Seagate (part of the good pool, and I also have an rsync backup of them) and place the 3x Hitachi in there, will the change in /dev/ada numbering have any impact? Or does FreeNAS use some other reference ID?
 

aarjay

Dabbler
Joined
May 28, 2016
Messages
19
It's all on the dubious-sounding controller.
Hi Eric, that disk cage is the default offering in the HP MicroServer; I'm not sure what it's called - the onboard SATA controller? I would have thought there are no issues there.
 

aarjay

Dabbler
Joined
May 28, 2016
Messages
19
ZFS uses its own labels, so the device number doesn't matter.
OK, so this is what I'll try:
1. Shut down the HP
2. Disconnect all drives
3. Seat the 3x Hitachi in the HP built-in controller
4. Reboot

Curious to know: what will the status of tank1 be once all of its drives are disconnected?

5. Detach tank2 through the UI
6. Auto-import through the UI (this failed last time)

Is this correct, or have I missed anything?

Edit: Thanks for your prompt responses, guys. It's past 2 AM here; I have now safely shut down the machine and will move the drives first thing tomorrow morning. I'd appreciate any further feedback on this thread, hoping that with a bit of luck and support from this forum I can get the tank2 pool back up and running and recover the data.
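For anyone following along, the point about ZFS labels can be checked from the FreeNAS shell before and after the move; a minimal sketch, assuming the pools were created through the GUI so the members carry gptid labels:

Code:
# Each gptid/... label maps to a partition on a specific disk (e.g. ada0p2)
glabel status | grep gptid

# With the volume not yet imported, this lists pools ZFS can see, by label
zpool import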
 
Last edited:

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
Curious to know: what will the status of tank1 be once all of its drives are disconnected?
You could go ahead and use the GUI to detach the tank1 volume (please reference the manual for this),
but it is not absolutely necessary.

Pay close attention to ESD best practices when handling the drives; you don't need more issues at this point.
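For the command-line inclined, the rough CLI equivalent of a GUI detach is a plain export (sketch only; in FreeNAS the GUI detach is the supported route, since it also updates the system's own configuration):

Code:
# Export (detach) the pool so its disks can be safely pulled
zpool export tank1

# Confirm it no longer appears in the active pool list
zpool list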
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Were you able to get your data back?
 

aarjay

Dabbler
Joined
May 28, 2016
Messages
19
Were you able to get your data back?
Apologies to all. I meant to keep the thread updated. The machine is in a shutdown state and I plan to work through Robert's and BigDave's advice over the weekend (Saturday morning my time).
Edit: Hi @cyberjock, considering I don't have a full rsync backup of tank2, I was too worried at that stage to try anything in a rush, so I decided to shut down the machine and come back with a plan based on advice from this forum.
 
Last edited:

aarjay

Dabbler
Joined
May 28, 2016
Messages
19
Hi @BigDave, @cyberjock, @Robert Trevellyan. Some positive news and an update.

As advised, I carried out the following tasks:
- Detached tank1 through the UI. I noticed an export of tank2 was not required, as the zpool import command already listed tank2
- Shut down the machine, safely removed all drives from the system, and placed the 3x Hitachi into the HP controller
- Upon successful boot, I confirmed all disks were recognised by the OS using dmesg, the FreeNAS UI showed all disks as online, and zpool import returned tank2 in the list (commands sketched below)
- I then used 'Auto Import' from the UI and was able to successfully import tank2!
- At this stage, I created the AFP/CIFS shares to copy my data back to a remote location
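The checks from the third step, roughly as run from the FreeNAS shell (a sketch; the adaX device names are assumptions, and the pool shows up under its full name, storageTank2):

Code:
# Confirm the kernel detected the drives on the built-in controller
dmesg | grep '^ada'

# List pools available for import; the pool should appear here before Auto Import
zpool import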

However, I also noticed the following warning in the UI:
WARNING: The volume storageTank2 (ZFS) status is UNKNOWN: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'.

Please note, I have yet to run a scrub after the import, as I am currently waiting on all rsync jobs to complete. I suspect one of the drives may be failing or have data issues. I'll leave the server running overnight and come back tomorrow morning to investigate further. What do you guys suggest I do - a scrub followed by zpool clear?


Code:
[root@freenas] /mnt/storageTank2/Backup# zpool status -v
  pool: storageTank2
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 28 22:46:57 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0   153
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0    24

errors: No known data errors
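For reference, the commands the warning points at would look roughly like this (pool name taken from the output above; whether to run them now or after the backups finish is the open question):

Code:
# Start a scrub and then watch its progress
zpool scrub storageTank2
zpool status -v storageTank2

# Once the data is safe and the scrub is clean, reset the error counters
zpool clear storageTank2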
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
What do you guys suggest I do
I would leave well alone until you've recovered as much data as possible. If you start scrubbing now, it will slow down your data transfer. Then by all means try scrubbing, clearing, and SMART tests.

The ultimate goal will be to identify and replace any failing disks, then rebuild (or replace) the system more along recommended lines.
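A rough sketch of that order of operations, keeping everything read-only while the copies run (the replace line is only a placeholder, and in FreeNAS the GUI is the supported way to actually replace a disk):

Code:
# While the rsync jobs run: read-only status check, nothing destructive
zpool status -v storageTank2

# Later, only if a disk proves to be failing (gptid and new device are placeholders):
# zpool replace storageTank2 gptid/<failing-member> /dev/adaX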
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
As advised, I carried out the following tasks:
- Detached tank1 through the UI. I noticed an export of tank2 was not required, as the zpool import command already listed tank2
- Shut down the machine, safely removed all drives from the system, and placed the 3x Hitachi into the HP controller
- Upon successful boot, I confirmed all disks were recognised by the OS using dmesg, the FreeNAS UI showed all disks as online, and zpool import returned tank2 in the list
- I then used 'Auto Import' from the UI and was able to successfully import tank2!
- At this stage, I created the AFP/CIFS shares to copy my data back to a remote location
That's awesome! Glad to hear this worked for you. Kudos for keeping your cool.
 
Last edited:

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Some positive news and an update.
Glad to hear things worked for you.

Since the problem pool used the ODD and eSATA connections, are you running one of the hacked BIOSes on the server? If not, those connections use IDE emulation. To get AHCI, you need an alternative BIOS.
 

aarjay

Dabbler
Joined
May 28, 2016
Messages
19
That's awesome! Glad to hear this worked for you. Kudos for keeping your cool.
And thanks for your advice in this thread, @BigDave and @Robert Trevellyan.

The ultimate goal will be to identify and replace any failing disks, then rebuild (or replace)
Hi Robert, I plan to run a scrub after the backup jobs complete successfully. I am keen to diagnose the issue, though it may take a few more days due to time constraints at my end. I will keep you guys posted.

Glad to hear things worked for you.
Since the problem pool used the ODD and eSATA connections, are you running one of the hacked BIOSes on the server? If not, those connections use IDE emulation. To get AHCI, you need an alternative BIOS.
Thanks @gpsguy. My N36L is almost 5 years old now. From memory, I think I updated the BIOS to enable AHCI for drives #4 & #5. Still, it's a good reminder; I'll go into the BIOS and confirm once I have the chance to shut down the server again.
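If it helps, the controller mode can usually be confirmed from the running system without a reboot (a sketch; the driver names are what FreeBSD typically reports, not taken from this thread):

Code:
# AHCI ports attach as ahci/ahcichN; legacy IDE emulation attaches via the ata driver
dmesg | grep -iE 'ahci|^ata[0-9]'

# List detected disks and the bus each one sits on
camcontrol devlist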
 
Last edited:

aarjay

Dabbler
Joined
May 28, 2016
Messages
19
A further update: I managed to back up the data to remote disks and then ran the scrub.

Last status after the zpool import:
Code:
[root@freenas] /etc# zpool status
  pool: storageTank2
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 28 22:46:57 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0 15.7K
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0   242


Latest status after the scrub:
Code:
[root@freenas] /etc# zpool scrub storageTank2
[root@freenas] /etc# zpool status -v
  pool: storageTank2
state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
  scan: scrub repaired 14.0G in 3h29m with 0 errors on Fri Jun 10 23:09:43 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0  246K
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0 1.46K

errors: No known data errors


Finally, I decided to apply zpool clear:
Code:
[root@freenas] /etc# zpool clear storageTank2
[root@freenas] /etc# zpool status -v
  pool: storageTank2
state: ONLINE
  scan: scrub repaired 14.0G in 3h29m with 0 errors on Fri Jun 10 23:09:43 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        storageTank2                                    ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/16035408-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/16c352dd-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0
            gptid/17859654-f10b-11e2-a097-984be1087f8d  ONLINE       0     0     0

errors: No known data errors


Question: I am inclined to run a thorough SMART test on the disks if it is of any worth. What are the instructions? Thanks again.
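Since SMART tests run against /dev/adaX devices while zpool status reports gptid labels, a quick way to map one to the other (a sketch; the gptid fragment below is the member that showed the most checksum errors above):

Code:
# Shows which partition (e.g. ada1p2) carries this gptid label
glabel status | grep 16c352dd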
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
For a long test, use the syntax below, where X is a number. Note, sometimes the device names are just daX, but I believe yours are adaX. It will take several hours to run.

Code:
smartctl -t long /dev/adaX
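Once the test completes (it runs in the background on the drive itself), the outcome can be read back from the self-test log (same adaX placeholder as above):

Code:
# Shows the result of recent self-tests, including the LBA of the first error, if any
smartctl -l selftest /dev/adaX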
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I am inclined to run a thorough SMART test on the disks if it is of any worth.
You should schedule regular short and extended SMART tests as part of proper care and feeding of any FreeNAS system. Combined with SMART status checks and working email notifications, they are likely to give you early warning of disk failure.
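For the status checks mentioned here, something along these lines gives a quick read on each disk (a sketch; attribute names vary slightly by vendor):

Code:
# Overall SMART verdict (PASSED/FAILED)
smartctl -H /dev/adaX

# Vendor attributes; reallocated/pending sectors and CRC error counts are the usual
# ones to watch when deciding between a failing disk and a bad connection
smartctl -A /dev/adaX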
 