ZFS Panic - cannot import

Status
Not open for further replies.

jnruiz

Cadet
Joined
Jun 23, 2012
Messages
7
Hi, so I have this FreeNAS 8 box running inside VMware ESXi 5 with 5 virtual drives shared. When I tried to open some of the shares from a Windows 7 machine, the virtual FreeNAS suddenly rebooted. It happened several more times before I figured out the Win7 CIFS browser was the trigger for the spontaneous reboots, but by then the damage was already done: the FreeNAS VM couldn't boot normally. While debugging it I found that one of the virtual ZFS hard drives had become corrupted. I detached the corrupted virtual HD and am now trying to fix it in a "test" FreeNAS (another VM). The problem is I cannot auto-import it (the message Error: [MiddlewareError: The volume "jruiz" failed to import, for further details check pool status] shows up), so I started reading this forum, protosd's blogs, and even the Sun/Oracle ZFS manuals to get the "zpool import -f" command to work.
I have the pool name and even the pool id, but when I try to import it with the "-f" option I get this message before the virtual machine reboots itself:

Fatal trap 18: integer divide fault while in kernel mode
cpuid = 0; apic id = 00
instruction pointer = ***** (some hex here)
stack pointer = *****
frame pointer = *****
code segment = base 0x0, limit 0xfffff, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags = interrupt enabled, resume, IOPL = 0
current process = 2451 (txg_thread_enter)
trap number = 18
panic: integer divide fault
cpuid = 0
Uptime: 17m 55s
Cannot dump. Device not defined or unavailable

camcontrol devlist shows both drives (da0 with 8 GB for the installation and da1 with 500 GB for files).

gpart shows three partitions for da1: free (47 K), freebsd-swap (2.0G) and freebsd-zfs (498G).
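For reference, these are roughly the commands I've been running on the test VM (the pool id would come from the output of a plain "zpool import"):

```shell
# List the drives the test VM can see
camcontrol devlist

# Inspect the partition layout of the data disk
gpart show da1

# Show pools available for import (prints pool name, id, and state)
zpool import

# Force the import; this is the command that panics the VM
zpool import -f jruiz
```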

I already "cloned" this drive so I could test everything, but right now I'm out of ideas. I'm no genius at ZFS or hard-drive partitioning schemes; I just know the basics.

Which commands could I try to fix this? Is it a bug of the FreeNAS/VMware combo? Am I toast (or are my files)?

Thanks for your help
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi jnruiz,

How much RAM do you have allocated to the VM? It's possible that while you have enough RAM to run ZFS, you don't have enough allocated to import the improperly shut-down pool.

-Will
 

jnruiz

Cadet
Joined
Jun 23, 2012
Messages
7
Tried

Hi survive, thanks for replying. It was running in an 8 GB VM; the test machine was 4 GB. I just upgraded to 16 GB (should that be enough?) and got the same results. The physical server (ESXi host) has 96 GB of real RAM, and I powered off several other VMs to be sure physical RAM was available, with the same results. It's FreeNAS 8.0.4 64-bit I'm running (I didn't mention that before). Any other suggestions? I'm open to any advice. Thanks again.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi jnruiz,

If you can make copies of the vdisks to test with, I would try booting one of these images:

http://mfsbsd.vx.sk/

and see if you have better luck importing your pool on the test vdisk.

-Will
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
What exactly was your pool configuration, and what layer is ZFS running on? I'm confused about the five drives when only da1 holds the pool.
 

jnruiz

Cadet
Joined
Jun 23, 2012
Messages
7
Import successful with Martin Matuska image

survive, I just downloaded the version 9 image (64-bit), and the import process was successful (at least it didn't crash the VM). It's now showing "scrub in progress" in the scan status (zpool status -v); should I just wait for the scrub to finish, and then what?

PaleoN, it's a single virtual drive only (no RAID; no need, because it's a virtual drive already on a physical RAID from the SAN). Five disks were working in the original FreeNAS installation; I took this one out to another FreeNAS VM to fix it, and the other four (independent from the broken one) are working flawlessly.

Thanks for your help, really appreciate it.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi jnruiz,

If it's working on something *don't touch it*!

Let it finish the scrub.

-Will
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
PaleoN, it's a single virtual drive only (no RAID; no need, because it's a virtual drive already on a physical RAID from the SAN). Five disks were working in the original FreeNAS installation; I took this one out to another FreeNAS VM to fix it, and the other four (independent from the broken one) are working flawlessly.
And yet if you had a mirror or a 3-virtual-drive raidz1 zpool, you likely wouldn't need to recover the pool at all, unless 2 virtual drives were corrupted at the same time. If you have the space and are so inclined, you can turn any or all of the single-virtual-drive zpools into mirrored zpools.
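Roughly, the conversion is a single attach per pool; a sketch, where the pool name and device names are placeholders (check zpool status and gpart show for the real vdev and use the new disk's freebsd-zfs partition):

```shell
# Attach a second, equal-or-larger virtual disk to an existing
# single-disk pool, turning it into a two-way mirror.
# "da1p2" is the current vdev, "da2p2" the new disk's ZFS partition.
zpool attach tank da1p2 da2p2

# ZFS resilvers the new disk in the background; watch progress with:
zpool status tank
```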
 

jnruiz

Cadet
Joined
Jun 23, 2012
Messages
7
No luck

Bad luck: at some point during the scrub (I left it at 25%) the VM with the Martin Matuska FreeBSD image rebooted itself. When I came back I logged in and tried again with "zpool import -f jruiz", and it crashed and rebooted again, not before showing a message like the one from FreeNAS (Fatal trap 18, etc.). I am downloading FreeNAS 7 and Martin's FreeBSD 8 image to try the same procedure. Am I running out of options? :(

-----------------------
P.S. (15 minutes later): same results with Martin's 8.3 version, and FreeNAS 7 says the pool is formatted with a newer version and won't let me import it.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi jnruiz,

Just curious... have you memtested this system? If you can, grab an .iso from memtest.org and let it run overnight. It could be that your base system is dodgy in some way and your efforts at importing the pool are aggravating some underlying problem.

-Will
 

jnruiz

Cadet
Joined
Jun 23, 2012
Messages
7
survive, no, I haven't memtested. It's running on VMware ESXi 5 alongside several other VMs (Windows Server, Win7, WinXP, SUSE, CentOS, Ubuntu), so I didn't think that would be the problem. Either way, I just vMotioned the VM to a different server (same architecture, with 90 GB RAM) and got the same results. At least I had some relief when I tried to scrub the pool (before anything else): it showed me some filenames from the data stored on the drive. The problem is it just reboots again (while scrubbing) when it reaches about 16% (I cannot tell exactly when, but it did it the last three times), whether from FreeNAS or from Martin's release. I'm becoming an expert on ZFS commands, but not so much on troubleshooting, ha ha. Any ideas on how to get this thing to work?
 

jnruiz

Cadet
Joined
Jun 23, 2012
Messages
7
Hi, with the Martin FreeBSD image I managed to import the ZFS volume as read-only (zpool import -f -o readonly=on poolname); I can see my files :) But how can I share them through CIFS or NFS from this small FreeBSD image? I tried to import the pool as read-only with FreeNAS, but it won't let me (there's no readonly option in the GUI). Any light on this? Thanks as always.
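If the rescue image ships the stock FreeBSD NFS bits, I guess a minimal one-off export would look something like this (the mountpoint and network are examples, not my actual setup):

```shell
# Import the damaged pool read-only so nothing can write to it
zpool import -f -o readonly=on jruiz

# Minimal NFS export on stock FreeBSD; adjust path and network
echo '/mnt/jruiz -ro -network 192.168.1.0 -mask 255.255.255.0' >> /etc/exports

# Start the NFS services for this session only (not enabled in rc.conf)
/etc/rc.d/rpcbind onestart
/etc/rc.d/mountd onestart
/etc/rc.d/nfsd onestart
```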
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875

jnruiz

Cadet
Joined
Jun 23, 2012
Messages
7
Happy Ending

Thanks survive, I think that's a job for next weekend (Saturday morning: NFS on FreeBSD, ha ha). Right now I'm doing it through the FTP daemon already included: I just typed "/etc/rc.d/ftpd onestart" and it's up and running. I built a new "virtual drive", replicated to a physical drive (just in case), and the files are now being transferred off the broken one with FileZilla; that way I guess I'll also find the files that were stopping the scrub job. I hope this is my last post in this thread. Thanks to everyone. :D
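Condensed, the recovery sequence that worked for me was (all on the rescue VM; pool name jruiz):

```shell
# Import the damaged pool read-only
zpool import -f -o readonly=on jruiz

# Start the stock FreeBSD FTP daemon for a one-off session
# (onestart runs it without enabling it in rc.conf)
/etc/rc.d/ftpd onestart

# Then connect with any FTP client (e.g. FileZilla) and copy
# everything under the pool's mountpoint to safe storage.
```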
 