8.0.3 Zpool issue


joeschmuck

Old Man
Moderator
This posting is mostly for gcooper.

While attempting to auto-import my pool, which was actually listed, the import failed, stating that a middleware problem had occurred and that I should check my pool status.

Since I didn't collect any data at the time of the issue, I'll rely on my memory as best I can. "farm" is my pool's name and it has several datasets.

1) zpool status -v reported that my pool was ONLINE and there were no problems.
2) zpool list found no pools.
3) zfs import farm told me the pool was recently used by another system and there were no problems.
4) zfs import -f farm told me it couldn't mount the many datasets I had. It also farmed me out to a web address, but I didn't follow that.
5) After some internet searching I ended up having to type "zfs export farm".
6) The pool was no longer listed in Auto-Import.
7) I typed zfs export again, which resulted in farm being listed in Auto-Import, and I was actually able to import it fine (see the sketch just below).
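
For reference, the import/export operations actually live under zpool(8) rather than zfs(8), so the sequence above probably amounted to something like this (a sketch from memory, not a verbatim transcript):
Code:
zpool status -v       # reported the pool ONLINE with no errors
zpool list            # showed no pools
zpool import farm     # warned the pool was last in use by another system
zpool import -f farm  # forced the import; the datasets failed to mount
zpool export farm     # exported the pool so it showed up in Auto-Import again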

Here is what my pool looks like right now, practically the same as before:
Code:
[root@freenas] ~# zpool status -v
  pool: farm
 state: ONLINE
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        farm                                            ONLINE       0     0     0
          raidz1                                        ONLINE       0     0     0
            gptid/4004da37-0d40-11e1-9d47-50e549b78964  ONLINE       0     0     0
            gptid/40310ae1-0d40-11e1-9d47-50e549b78964  ONLINE       0     0     0
            gptid/405d2497-0d40-11e1-9d47-50e549b78964  ONLINE       0     0     0
            gptid/408a1ff1-0d40-11e1-9d47-50e549b78964  ONLINE       0     0     0

errors: No known data errors

-----------------------------------------------------------------

[root@freenas] ~# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
farm          1.22T  3.96T  24.0M  /mnt/farm
farm/Madyson   163K  3.96T   163K  /mnt/farm/Madyson
farm/Mark      163K  3.96T   163K  /mnt/farm/Mark
farm/Rebecca   163K  3.96T   163K  /mnt/farm/Rebecca
farm/backups   735G  3.96T   735G  /mnt/farm/backups
farm/ftp       163K  10.0G   163K  /mnt/farm/ftp
farm/main      123G  3.96T   123G  /mnt/farm/main
farm/movies    372G  3.96T   372G  /mnt/farm/movies
farm/music     163K  3.96T   163K  /mnt/farm/music
farm/photos   22.5G  3.96T  22.5G  /mnt/farm/photos
[root@freenas] ~#

-----------------------------------------------------------------

[root@freenas] ~# df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs2a    927M    529M    324M    62%    /
devfs                  1.0K    1.0K      0B   100%    /dev
/dev/md0               4.6M    1.8M    2.3M    44%    /etc
/dev/md1               824K    2.0K    756K     0%    /mnt
/dev/md2               149M     13M    124M    10%    /var
/dev/ufs/FreeNASs4      20M    651K     18M     3%    /data
farm                   4.0T     24M    4.0T     0%    /mnt/farm
farm/Madyson           4.0T    163K    4.0T     0%    /mnt/farm/Madyson
farm/Mark              4.0T    163K    4.0T     0%    /mnt/farm/Mark
farm/Rebecca           4.0T    163K    4.0T     0%    /mnt/farm/Rebecca
farm/backups           4.7T    735G    4.0T    15%    /mnt/farm/backups
farm/ftp                10G    163K     10G     0%    /mnt/farm/ftp
farm/main              4.1T    123G    4.0T     3%    /mnt/farm/main
farm/movies            4.3T    372G    4.0T     8%    /mnt/farm/movies
farm/music             4.0T    163K    4.0T     0%    /mnt/farm/music
farm/photos            4.0T     23G    4.0T     1%    /mnt/farm/photos
[root@freenas] ~#



Not sure what happened, but during this entire process I had also tried 8.0.2-RELEASE, just in case my build was corrupt, and still could not mount the pool. This was the first time I'd tried 8.0.2 or .3 on my real NAS. Since I rotate my flash drives, I always have the previous working version, and I was able to install it and still access my pool without issue.

If there is something I can do for testing purposes without destroying my pool, just let me know the exact commands you want me to run. I'm truly a novice when it comes to Linux/Unix, but I'm learning; I just don't need to experience too much pain if I can help it. I do have all my data backed up elsewhere, so recovery would only take 30+ hours of copying the data back if I screw something up.

-Mark
 

gcooper

Guest
While attempting to auto-import my pool, which was actually listed, the import failed, stating that a middleware problem had occurred and that I should check my pool status.

Are tracebacks enabled in the GUI?

1) zpool status -v reported that my pool was ONLINE and there were no problems.
2) zpool list found no pools.

?!?!?! I'd really like to see this.
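
If it happens again, grabbing output like the following right at that moment would help show whether the kernel and the cache file disagree (just a suggestion; zdb -C dumps the configuration cached in zpool.cache, and zdb -l reads the on-disk label of a pool member):
Code:
zpool status -v
zpool list
zdb -C
zdb -l /dev/gptid/4004da37-0d40-11e1-9d47-50e549b78964   # one of the members, per your zpool status output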

3) zfs import farm told me the pool was recently used by another system and there were no problems.

This won't work.

Code:
zpool import -R /mnt farm


should, however.
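
The -R /mnt part imports the pool under an alternate root, so the datasets mount beneath /mnt rather than at their recorded mountpoints, and (unless a cache file is given explicitly) it keeps the import out of the default zpool.cache. A quick sanity check afterwards would be something like:
Code:
zfs list -r farm    # the mountpoints should all show up under /mnt/farm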

4) zfs import -f farm told me it couldn't mount the many datasets I had. It also farmed me out to a web address, but I didn't follow that.
5) After some internet searching I ended up having to type "zfs export farm".
6) The pool was no longer listed in Auto-Import.
7) I typed zfs export again, which resulted in farm being listed in Auto-Import, and I was actually able to import it fine.

I'd really like to see screenshots or blurbs that demonstrate the whole process. My bet is that you actually imported it on another machine by accident, or that your hostid (highly unlikely, because it's generated from the host's UUID) or zpool.cache file (more likely) isn't staying consistent between boots.
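
If you want to check that yourself, something along these lines would do it (a rough sketch; the path assumes the stock FreeBSD location for zpool.cache, which FreeNAS may keep elsewhere):
Code:
hostid                         # should print the same value on every boot
ls -l /boot/zfs/zpool.cache    # check that the cache file exists and its timestamp survives reboots
zdb -C farm                    # the cached configuration for the pool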

Not sure what happened, but during this entire process I had also tried 8.0.2-RELEASE, just in case my build was corrupt, and still could not mount the pool. This was the first time I'd tried 8.0.2 or .3 on my real NAS. Since I rotate my flash drives, I always have the previous working version, and I was able to install it and still access my pool without issue.

Hmmm... interesting. Do you share the flash drives between different machines?

If there is something I can do for testing purposes without destroying my pool, just let me know the exact commands you want me to run. I'm truly a novice when it comes to Linux/Unix, but I'm learning; I just don't need to experience too much pain if I can help it. I do have all my data backed up elsewhere, so recovery would only take 30+ hours of copying the data back if I screw something up.

Err... let's not go down that painful path :).
 

joeschmuck

Old Man
Moderator
If I run into this problem again I will completely document it, but let's hope I never see it again.
 

gcooper

Guest
Hmmm... ok, I guess we can put this on pause for now, but I'd really like to get more info about this issue later. My gut reaction is that something went south with the hostid/zpool.cache.

BTW, you don't need to reimport your zpool if you're upgrading FreeNAS.
 

joeschmuck

Old Man
Moderator
When I upgraded, it was from an old trunk build from about six months ago. I still have that version on my main USB flash drive and could revert back to it. I had to do a complete reinstall due to the database conflicts. Do you think it could have had something to do with the fact that I have no swap space? It's set to "0". As I said, if I run into it again I will document the crap out of it. If you have a list of standard commands I should run, I'll do that; maybe I'll try to recreate the failure tomorrow. I have everything on another NAS, it's just a slow process restoring all the files.
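
In the meantime, here's the kind of thing I'm planning to capture if it does reappear (just my guess at the standard FreeBSD/ZFS diagnostics, so correct me if there's a better list):
Code:
zpool status -v
zpool list
zpool import           # lists pools available for import without importing anything
zdb -C                 # cached pool configuration
glabel status          # maps the gptid labels back to physical disks
gpart show             # partition layout of each disk
dmesg | tail -n 50     # recent kernel messages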
 