ZFS Kernel Panic.

Status
Not open for further replies.

praecorloth

Contributor
Joined
Jun 2, 2011
Messages
159
Hey guys.

I've got a FreeNAS box running 8.0.4 64-bit with a single ~1TB RAIDZ2 pool. The system kernel panicked and would continue to panic on every reboot (panic details in a moment). The problem was that the existing FreeNAS install kept trying to mount the borked volume at boot. A fresh install of FreeNAS would boot properly, go to auto-import the pool, and kernel panic.

We then did an import from the command line.

Code:
# zpool import -f HubZ2
panic: solaris assert: ss == NULL, file: /build/home/jpaetzel/8.2.0/FreeBSD/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/space_map.c, line: 109
cpuid = 0
Uptime: 6m43s
Physical memory: 4038 MB
Dumping 278 MB:


And that's where it ends. We've been playing with this a bit and can't figure anything out. We found a blog post by someone who ran into a similar issue, describing how they used zdb to fix it. We tried that, though I can't find the page right now to say exactly what we ran, and it didn't work anyway. We're pretty stuck here. Any thoughts? I'm attaching a pic of the screen from the kernel panic, just in case.

Here is the zdb command we tried.
Code:
zdb -e -bcsvL HubZ2


That took some time to run, but we're still at the same point, unfortunately.
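
For anyone who finds this thread later, here's roughly what those flags do, paraphrasing the zdb man page of that era (double-check against your version, since zdb's options shift between releases):

Code:
# -e  operate on an exported pool instead of one in the cache file
# -b  traverse all blocks and gather block statistics
# -c  verify the checksums of metadata blocks along the way
# -s  report I/O statistics while it runs
# -v  verbose output
# -L  disable leak tracing, i.e. don't load the space maps
zdb -e -bcsvL HubZ2


Worth noting: -L is probably why this run got further than the import did, since it skips loading the space maps that are asserting.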
 

Attachments

  • IMG_20120702_195225.jpg

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
Two things come to my mind...

It might be panicking because there isn't enough memory to allocate pages, or because of a bug in ZFS.
So you could try adding RAM temporarily, or use a newer ZFS version (FreeBSD/PC-BSD 9 with v28).
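
If it is memory pressure, a cheap thing to try before swapping hardware is pinning down the ZFS memory tunables in /boot/loader.conf. A rough sketch for a 4GB box (the values are illustrative, not a recommendation for your workload):

Code:
# /boot/loader.conf -- illustrative sizes for ~4GB of RAM
vm.kmem_size="3G"         # kernel virtual memory ZFS can draw from
vm.kmem_size_max="3G"     # hard ceiling for kmem
vfs.zfs.arc_max="1536M"   # cap the ARC so it can't starve the kernel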
 

praecorloth

Contributor
Joined
Jun 2, 2011
Messages
159
Oh, my thread got moved. Alright, so here's what we know at this point. We're giving up on trying to recover and are restoring from backup, which makes me sadface. We did end up trying the latest FreeBSD, which also kernel panicked. As for memory, we've got at least 4GB in there at the moment. It's a 1TB pool, but with only about 80GB of stuff on it.

What really makes me sad is that this thread got moved over to the bugs section. Unfortunately this happened on a production box, so we couldn't play around with it for too long. If it were to happen on a testing box, I'd say leave that box down so the devs can have something to look at. :)

Another thing I'll mention in case anyone comes looking in the future: this FreeNAS box was acting as an iSCSI target for ESXi 5 update 1. There has been a LOT of crap floating around regarding ESXi 5 and software iSCSI targets. Given the other headaches we've had with ESXi, plus all of the FreeNAS/OpenFiler/OMV iSCSI + ESXi 5 threads out there, I'm fairly confident this is a communication issue related to ESXi 5's initiator. I can't wait for our transition to ProxMox.
 

praecorloth

Contributor
Joined
Jun 2, 2011
Messages
159
Ooohhhh new information. And apparently we have the machine to play with for a little while longer now, too.

Code:
# zdb -e -c HubZ2


Returns

Code:
Traversing all blocks to verify metadata checksums and nothing leaked...


We then hit

Code:
Assertion failed: (ss->ss_end >= end (0x3564ae0000 >= 0x3564afe000))  ... [same space_map.c path as in the kernel panic above] ... space_map.c, line 174
Abort


This smacks to me of a space map with its start and end points screwed up. Unfortunately, I don't know how to fix that in ZFS. We tried to import the zpool on the latest FreeBSD with ZFS v28, but it failed, saying it couldn't find the file 'HubZ2'. Adding more memory isn't an option at this point either, as the system is at its max. We may try loading the drives into another machine that can take more memory, but that's a bit of an endeavor, so we'd like to hold off on it if at all possible. A few standard things we haven't exhausted yet are sketched below.
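
For future searchers, here are some standard recovery tricks that may be worth trying before pulling the drives. None of these are guaranteed to get past this particular assertion, and the tunable names are FreeBSD's, so adjust for your platform:

Code:
# If a plain import can't find the pool by name, point it at the
# device directory and/or import by the numeric ID it prints:
zpool import -d /dev
zpool import -d /dev 1234567890123456789   # ID here is made up, use yours

# On a v28 system, import read-only so ZFS never has to touch the
# damaged space maps:
zpool import -o readonly=on -f HubZ2

# Loader tunable that downgrades some fatal ZFS assertions to
# warnings (set at the loader prompt or in /boot/loader.conf):
vfs.zfs.recover=1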
 