How long should an import check take?


technopop

Dabbler
Joined
Sep 14, 2015
Messages
32
On Monday, Feb 1st, I got a notification that da0 (the USB flash boot drive) had experienced some corruption and was showing an error. I waited until Tuesday (yesterday) to do something about it.

I searched and found that others who had this problem solved it simply by backing up the config, installing a fresh copy of FreeNAS, and uploading the backup config.
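
(For anyone who finds this later: the backup/restore is done in the GUI under System ‣ General ‣ Save Config / Upload Config. I believe the same database can also be copied by hand from the shell; the source path below is the 9.3 location as I understand it, and the destination is just an example:)

Code:
# Copy the FreeNAS configuration database somewhere safe
# (/data/freenas-v1.db is the 9.3 location; destination is an example)
cp /data/freenas-v1.db /mnt/homeserver/freenas-v1.db.bak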

I did that, and on reboot it seemed to hang on the ZFS import. There was no timer incrementing, as there was in another post I saw.

I waited around 12 hours, hit the reset button, booted back into a fresh install, and am working with it now. The GUI import is hanging at step 2, and doing anything in other screens results in a "database is locked" message.

From the CLI, zpool import showed that the pool is there. When importing from the CLI, I gave it half an hour before hitting reset. After the reboot, the pool showed up in zpool status with a 34-hour scrub from Jan 31st showing 0 errors and 0 repairs.

I couldn't see the volume in the GUI, so I did a zpool export from the CLI and then tried to import it in the GUI.
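
For reference, the CLI sequence was roughly this (pool name matches the zpool status output below):

Code:
# List importable pools without actually importing anything
zpool import
# Import the pool by name (the step I gave half an hour)
zpool import homeserver
# Export again so the GUI importer could try to claim it
zpool export homeserver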

It is currently still on step 2. I don't see any drive activity in the Reporting window. top shows the system as idle; a zpool process is listed but with a blank status.

While step 2 is running in the GUI, zpool status shows the pool with state ONLINE.

A friend tells me this is a sanity check in progress, but he couldn't say how long it would take.

There is about 7TB of data in this pool.

Do I just wait this out?

FreeNAS 9.3 stable
Supermicro X9SCM-F
Core i3-3240
Kingston DataTraveler 8GB USB flash boot drive
6 x 4TB WD Red (RAID-Z2) using onboard SATA connectors
16GB ECC RAM
 

dlavigne

Guest
Post the full output of zpool status using Insert -> Code.
 

technopop

Dabbler
Joined
Sep 14, 2015
Messages
32
The step 2 window has been running in a browser tab since yesterday, from just before my original post.

Here is the output now.

There are no volumes listed under the storage tab.

Code:
[root@freenas] ~# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

    NAME                                          STATE     READ WRITE CKSUM
    freenas-boot                                  ONLINE       0     0     0
      gptid/6b20c365-c9f8-11e5-8bb0-002590d2d201  ONLINE       0     0     0

errors: No known data errors

  pool: homeserver
 state: ONLINE
  scan: scrub repaired 0 in 34h40m with 0 errors on Sun Jan 31 08:40:46 2016
config:

    NAME                                            STATE     READ WRITE CKSUM
    homeserver                                      ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/688aade6-7e92-11e4-8542-002590d2d201  ONLINE       0     0     0
        gptid/2984bcf3-7fe2-11e4-8542-002590d2d201  ONLINE       0     0     0
        gptid/745e2abc-7e2c-11e4-a52e-002590d2d201  ONLINE       0     0     0
        gptid/f75983af-7fe2-11e4-8542-002590d2d201  ONLINE       0     0     0
        gptid/ee1a9c1d-7e92-11e4-8542-002590d2d201  ONLINE       0     0     0
        gptid/10f62186-7e2e-11e4-a52e-002590d2d201  ONLINE       0     0     0

errors: No known data errors
[root@freenas] ~#
 

dlavigne

Guest
Does the volume show in Storage ‣ Volumes ‣ View Volumes?
 

technopop

Dabbler
Joined
Sep 14, 2015
Messages
32
In my mind this shouldn't be a factor, but it is one thing that's different. Would the version of FreeNAS matter?

The zpool was last mounted and used in FreeNAS-9.3-STABLE-201601181840.

The version that I'm trying to mount it in is FreeNAS-9.3-STABLE-201602020212.
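
(If anyone wants to double-check a build from the shell, this is what I used; as far as I know the path is the same on both builds:)

Code:
# Print the installed FreeNAS build string
cat /etc/version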
 

technopop

Dabbler
Joined
Sep 14, 2015
Messages
32
One more detail... maybe important.

The array started life as 6 x 2TB WD Green drives. They started to fail, and I replaced them one by one with 4TB Reds.

Once all the drives were replaced and resilvered, I ran the commands to allow the volume to expand and take up the new space (sketched below).
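
For context, the expansion commands I mean were along these lines (the gptid below is a placeholder, not one of my actual devices):

Code:
# Let the pool grow into the larger disks automatically
zpool set autoexpand=on homeserver
# Expand a replaced member into its new capacity (repeat per disk)
zpool online -e homeserver gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx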

However, the datasets within did not seem to expand, even after I edited the dataset properties and raised the quotas.

I could see this in Windows Explorer, Apple Finder, and the Storage tab: the various datasets/AFP shares/CIFS shares were still limited to the old maximum I had set, instead of the new quota.
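
For what it's worth, this is roughly how I was checking and raising the quotas from the shell (the dataset name is just an example):

Code:
# Check the current quota and space accounting for a dataset
zfs get quota,used,available homeserver/media
# Raise the quota, or use quota=none to remove the cap entirely
zfs set quota=6T homeserver/media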
 

technopop

Dabbler
Joined
Sep 14, 2015
Messages
32
I've fallen back to mounting the volume read-only and copying the files off the system; I'll rebuild and restore once that's done. I'll also be installing a second flash drive to mirror the boot device.
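
In case it helps anyone else, the read-only import looked something like this:

Code:
# Import the pool read-only, mounted under /mnt, so nothing writes to it
zpool import -o readonly=on -R /mnt homeserver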

I've also added an update to the following thread for getting into recovery mode in 9.3.

https://forums.freenas.org/index.php?threads/zfs-has-failed-you.11951/#post-262642

I've got three more FreeNAS systems deployed with a customer. Fingers crossed that I don't run into this again.
 