RAID-Z2 volume corrupted on FreeNAS-8.3.1

Status
Not open for further replies.

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
Hi everyone,

I really need your help because I have a serious issue. I just had a kernel panic on my FreeNAS 8.3.1 x64 box, so I had to reboot our (big) storage server. Once the server rebooted, it was impossible to mount the ZFS volume, and the root dataset appears with the following error:

racine /mnt/racine 0 (Error) Error while attempting to get the free available space Error while attempting to get the total available space UNKNOWN

Scrub attempts fail and the following traceback appears:


Environment:

Software Version: FreeNAS-8.3.1-RELEASE-p2-x64 (r12686+b770da6_dirty)
Request Method: GET
Request URL: http://192.168.50.42/storage/scrub/1/


Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
105. response = middleware_method(request, callback, callback_args, callback_kwargs)
File "/usr/local/www/freenasUI/../freenasUI/freeadmin/middleware.py" in process_view
166. return login_required(view_func)(request, *view_args, **view_kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
20. return view_func(request, *args, **kwargs)
File "/usr/local/www/freenasUI/../freenasUI/storage/views.py" in zpool_scrub
810. pool = notifier().zpool_parse(volume.vol_name)
File "/usr/local/www/freenasUI/../freenasUI/middleware/notifier.py" in zpool_parse
3758. parse = zfs.parse_status(name, doc, res)
File "/usr/local/www/freenasUI/../freenasUI/middleware/zfs.py" in parse_status
661. status = data.split('config:')[1]

Exception Type: IndexError at /storage/scrub/1/
Exception Value: list index out of range


All 8 hard drives are seen by FreeNAS and their status is "OK", but the RAID-Z2 volume seems corrupted.

Our server: FreeNAS 8.3.1 x64, Core i5, eight 3 TB WD Caviar Red hard drives in a RAID-Z2 configuration.

Thanks for any help you can give. We really need the important data on this NAS.
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Can you also post the server hardware specs?
What's the output of "zpool import" issued from the CLI?
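A minimal sketch of what to run, assuming the pool name "racine" from the error message above:

```shell
# List pools that are available for import, without importing anything
zpool import

# If the pool is listed, a read-only import avoids further writes
# while its state is still unclear (pool name taken from the error above)
zpool import -o readonly=on -R /mnt racine
```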
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
thanks for the reply!
The server specs are:
FreeNAS version: FreeNAS-8.3.1-RELEASE-p2-x64 (r12686+b770da6_dirty)
Platform: Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz
Memory: 16216 MB


The zpool import command "works" but gives no output. While the kernel is booting, GEOM prints: "GEOM: mfisyspd1: corrupt or invalid gpt detected" and "gpt rejected--may not be recoverable". We get this same message for all eight hard drives.
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
Uhm, sorry, I can't help you any further then. Some other threads cover GPT recovery; maybe some more experienced users can help you out.

The output of these commands would probably help:
Code:
camcontrol devlist
gpart show
glabel status
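For example, the three outputs can be collected into a single file that is easy to paste here:

```shell
# Gather the requested diagnostics into one file for posting
{
  echo "== camcontrol devlist =="; camcontrol devlist
  echo "== gpart show =="; gpart show
  echo "== glabel status =="; glabel status
} > /tmp/diag.txt 2>&1
```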
 

ikue1966

Cadet
Joined
Sep 24, 2013
Messages
6
Hi all - the same here:
Environment:
Software Version: FreeNAS-8.2.0-RELEASE-p1-x64 (r11950)
Request Method: GET
Request URL: http://192.168.2.150/storage/volume/zfs-edit/1/
Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
105. response = middleware_method(request, callback, callback_args, callback_kwargs)
File "/usr/local/www/freenasUI/freeadmin/middleware.py" in process_view
166. return login_required(view_func)(request, *view_args, **view_kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
23. return view_func(request, *args, **kwargs)
File "/usr/local/www/freenasUI/storage/views.py" in zfsvolume_edit
513. volume_form = forms.ZFSVolume_EditForm(mp=mp)
File "/usr/local/www/freenasUI/storage/forms.py" in __init__
820. self.fields['volume_compression'].initial = data['compression']
Exception Type: KeyError at /storage/volume/zfs-edit/1/
Exception Value: 'compression'
My assumption is that the OS on the memory stick I boot from is corrupt.
Any further suggestions?
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
ikue, your issue seems completely different to me.
Reinstall FreeNAS on a new USB stick, restore your config, and try booting again. Also upgrade to a newer version (either 8.3.1-p2 or 9.1.1).
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
hello,
the problem seems to be limited to the main GPT header and table.
When I open the different hard drives with the Parted Magic live CD, the partition editor says the main GPT is corrupt but the GPT backup seems to be intact.
The tool shows me three partitions: one of 1 MB, one of 127 MB, and the last one of 2.7 TB (it automatically uses the GPT backup).
Can somebody help me fix the GPT problem?
I tried to rebuild the main GPT header and table with gdisk (GPT fdisk by Rod Smith), but I don't really understand how it works and don't want to corrupt data.

thanks for your help!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
You shouldn't have 3 partitions on your drives... you should have either one (if you disabled swap) or two (if you left swap enabled, which is the default).

And warri gave a few commands, but you haven't provided their output...
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
hello, thanks for the reply.
My FreeNAS (8.3.1) is installed on a flash drive and the 8 hard drives only contain the data in RAID-Z2; perhaps that explains the 3 partitions?
Do you have any idea how to fix my problem? Can I use FreeNAS to restore the GPT from its backup?
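If the backup GPT at the end of each disk is intact (as Parted Magic suggested), FreeBSD's own gpart can rebuild the primary table from it. A minimal sketch, run from the FreeNAS console; the device name is one example from this thread:

```shell
# Show what gpart currently sees for one member disk; a damaged
# primary table is flagged as CORRUPT in the output
gpart show mfisyspd6

# Rebuild the primary GPT from the intact backup copy at the end of the disk
gpart recover mfisyspd6
```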
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
hello,
this command works for one of my 8 hard drives, but the GPTs on all the others seem to be unrecoverable.
All the HDDs are identical (Western Digital Caviar Red 3 TB), so is it possible to copy the good GPT header and table and apply them to the other hard drives?
Is there any difference between the partition layouts of different hard drives in the same RAID-Z2?
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
If I'm not wrong, the MBR of my other 7 HDDs is corrupted, which is why gpart can't recover them.
Do you have an idea how to fix the MBR?

thank you for the help you can give me!
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
gpart show only shows me the disk mfisyspd6; it indicated that the table was corrupt, but gpart recover mfisyspd6 worked and the status of this disk is now "OK".
gpart show and gpart list don't show the other hard drives, but they are present in the GUI.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Can you please paste here the output of the following command? We have already asked three times; it's hard to help you without having any data:
gpart show
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
[root@nas2 ~]# gpart show
=> 63 15224769 da0 MBR (7.3G)
63 1930257 1 freebsd [active] (942M)
1930320 63 - free - (31k)
1930383 1930257 2 freebsd (942M)
3860640 3024 3 freebsd (1.5M)
3863664 41328 4 freebsd (20M)
3904992 11319840 - free - (5.4G)

=> 0 1930257 da0s1 BSD (942M)
0 16 - free - (8.0k)
16 1930241 1 !0 (942M)
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
The complete gpart show output with all 8 HDDs inserted:

[root@nas2 ~]# gpart show
=> 34 5860533101 mfisyspd6 GPT (2.7T)
34 2048 1 ms-ldm-metadata (1.0M)
2082 260096 2 ms-reserved (127M)
262178 5860270957 3 ms-ldm-data (2.7T)

=> 63 15224769 da0 MBR (7.3G)
63 1930257 1 freebsd [active] (942M)
1930320 63 - free - (31k)
1930383 1930257 2 freebsd (942M)
3860640 3024 3 freebsd (1.5M)
3863664 41328 4 freebsd (20M)
3904992 11319840 - free - (5.4G)

=> 0 1930257 da0s1 BSD (942M)
0 16 - free - (8.0k)
16 1930241 1 !0 (942M)
 

littlezeus11

Dabbler
Joined
Sep 20, 2013
Messages
17
Here is the output of gpart status:

[root@nas2 ~]# gpart status
Name Status Components
mfisyspd6p1 OK mfisyspd6
mfisyspd6p2 OK mfisyspd6
mfisyspd6p3 OK mfisyspd6
da0s1 OK da0
da0s2 OK da0
da0s3 OK da0
da0s4 OK da0
da0s1a OK da0s1
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Hmm, you can use gpart to clone the partition table from one drive to another. However, I'm not sure if ZFS will handle it correctly and I'm currently traveling so I can't test it.
For example, to clone the partition table from ada0 to ada1, ada2, ada3 you can run:
gpart backup ada0 | gpart restore -F ada1 ada2 ada3
However, I think new gptids will be generated. I believe zpool import can still figure it out, but I'm not 100% sure.
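Spelled out with device names from this thread (examples only; substitute the real pool members), and keeping a copy of the known-good table on file first:

```shell
# Save the known-good table from the recovered drive to a file
gpart backup mfisyspd6 > /root/gpt-table.txt

# Apply it to a damaged member (-F overwrites the broken metadata);
# mfisyspd0 here is an example device name
gpart restore -F mfisyspd0 < /root/gpt-table.txt

# Afterwards, check whether the pool is visible again
zpool import
```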
 