ZFS volume error after upgrading from 8.0.1 release to 8.0.2


JeremyEm
Cadet | Joined: Sep 24, 2011 | Messages: 6
I have a new FreeNAS box that I just built and have been running in a test environment. Last night I upgraded from the 8.0.1 release to 8.0.2, and afterwards I got an alert and my ZFS volume is hosed.

I have a 5-drive, 7.1 TB volume that had about 500 GB of data on it (all replicated elsewhere for now). The attached screenshot shows my error (sorry for the tiny size of the image).

[Attachment: Screen shot 2011-10-18 at 6.50.10 AM.jpg]

I'm new to FreeNAS and have very little BSD experience, so I'm not sure what to do to resolve this.
 

JeremyEm
Cadet | Joined: Sep 24, 2011 | Messages: 6
[root@FIREBALL2] ~# zpool status
no pools available
[root@FIREBALL2] ~# zpool import
  pool: Test
    id: 16946581391153097148
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        Test          UNAVAIL  insufficient replicas
          raidz2      UNAVAIL  insufficient replicas
            ada0p2    UNAVAIL  cannot open
            ada1p2    UNAVAIL  cannot open
            ada2p2    UNAVAIL  cannot open
            ada3p2    UNAVAIL  cannot open
            ada4p2    ONLINE

  pool: Test
    id: 14556436343218882027
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        Test          ONLINE
          raidz1      ONLINE
            ada0p2    ONLINE
            ada1p2    ONLINE
            ada2p2    ONLINE
            ada3p2    ONLINE
            ada5p2    ONLINE
[root@FIREBALL2] ~#
 

JeremyEm
Cadet | Joined: Sep 24, 2011 | Messages: 6
So I can run "zpool import 14556436343218882027" and it imports the pool and clears the alert message, but the volume still shows the message

"Test /mnt/Test None (Error) Error getting available space Error getting total space"

When I ran the import command, it reported back

"cannot mount '/Test': failed to create mountpoint"

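A possible explanation for the "failed to create mountpoint" error: FreeNAS keeps the root filesystem read-only and expects volumes to be mounted under /mnt, but a plain "zpool import" tries to use the pool's stored mountpoint, which here is /Test. Assuming that's the cause, a sketch of a workaround is to import with an alternate root so the dataset lands under /mnt (this is a standard ZFS option, not anything FreeNAS-specific, and the paths below just follow the usual FreeNAS /mnt layout):

zpool import -R /mnt 14556436343218882027

or, after the pool is already imported, to point the dataset at the expected path:

zfs set mountpoint=/mnt/Test Test
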
If I do "zpool status" I now get

===========================================
[root@FIREBALL2] ~# zpool status
  pool: Test
 state: ONLINE
 scrub: scrub in progress for 0h18m, 64.64% done, 0h10m to go
config:

        NAME          STATE     READ WRITE CKSUM
        Test          ONLINE       0     0     0
          raidz1      ONLINE       0     0     0
            ada0p2    ONLINE       0     0     0
            ada1p2    ONLINE       0     0     0
            ada2p2    ONLINE       0     0     0
            ada3p2    ONLINE       0     0     0
            ada5p2    ONLINE       0     0     0

errors: No known data errors
===========================================

It's scrubbing because I told it to, so I could see whether other commands worked against the volume.
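
For reference, kicking off and checking a scrub is just the standard pair of commands:

zpool scrub Test
zpool status Test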

I tried rebooting once after doing this, and after the reboot it came up with no volumes and the same problem that started after the upgrade.

I don't want to run the "zpool destroy" command because the system sees both the raidz1 and the raidz2 pool as "Test." I have no idea where the raidz2 came from. I have made several test volumes, but all of them were raidz1.
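
Since both pools show up as "Test", the numeric ID is the safe way to tell them apart. zpool import accepts the ID instead of the name, and it can also rename a pool on import, which makes it harder to touch the wrong one later. A sketch only (the ID below is the one for the healthy raidz1 pool from the output above; the new name is just an example):

zpool import -R /mnt 14556436343218882027          # import the good pool by ID
zpool import -R /mnt 14556436343218882027 Test2    # or import it under a different name

The stale raidz2 entry can't be imported at all (insufficient replicas), so getting rid of it comes down to clearing the old labels off whichever disks still carry them, which is what the dd wipe in the next post ends up doing.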
 

JeremyEm
Cadet | Joined: Sep 24, 2011 | Messages: 6
I gave up worrying about it. It looks like it was some old data left on the drives from previous testing. I'm still not sure why it thought it was raidz2.

I ran "dd if=/dev/zero of=/dev/ada0 bs=1M count=10" (changing ada0 to match each drive) to zero out the beginning of each drive, and then verified there were no stale volumes. I then built a new volume (the old one was just a test volume with junk data on it).
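
One caveat with this approach, and a likely reason stale pools can survive a wipe of the start of a disk: ZFS writes four copies of its vdev label per device, two at the front and two at the end, so zeroing only the first 10 MB can leave the trailing labels intact. A sketch of wiping both ends, assuming 512-byte sectors and /bin/sh syntax (adjust ada0 per drive; diskinfo's fourth field is the size in sectors):

SECTORS=$(diskinfo /dev/ada0 | awk '{print $4}')
dd if=/dev/zero of=/dev/ada0 bs=1M count=10                          # wipe the first 10 MB
dd if=/dev/zero of=/dev/ada0 bs=1M oseek=$(( SECTORS / 2048 - 10 ))  # wipe roughly the last 10 MB

Newer ZFS versions also have "zpool labelclear" for exactly this, but I'm not sure it's available in FreeNAS 8.0.2.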

I'm dumping a few hundred gigs of data to it, and tomorrow I will reboot it to make sure it all comes up OK and intact. For a little while this will be a test environment anyway, with data being replicated from my home production NAS.
 