Migration - Rescuing data after a crashed system [Urgent help needed]

Status
Not open for further replies.

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
Good day, comrades. You folks are my last hope.

Two days ago I got a "Fatal Trap 12" error; the NAS server apparently rebooted itself and did not come back online.
I spent a whole day trying to get rid of that "Fatal Trap 12", with lots of goooooogling, but did not find anything that could be done. So the last hope is to transfer the whole bunch of disks, together with the RAID controller, to a new server. Both machines are nearly identical hardware-wise, so there should not be any surprises there.

Here are the hardware specs:

- RAID controller: Adaptec 9260-4i - seems to be OK
- HDDs: 4 x HGST 2TB = 8TB raw, giving 3.6TiB usable as hardware RAID6 - seems to be OK
- System disk: regular 2.5" 120GB with FreeNAS installed on it (seems to be not OK :( "Fatal Trap 12")
- Baseboard: Supermicro X7DB8, mostly the same form factor
- RAM: 4 x 8GB = 32GB (seems OK, but it's not ECC, which scares me for some reason)
As was written earlier on the forums, a memtest check is OK.

Right now the pool is mounted in read-only mode on the rebuilt server.
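I imported it from the CLI with the readonly flag, roughly like this (going from memory):
======
zpool import -o readonly=on Raid6
======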
======
/Raid6# zfs list
NAME USED AVAIL REFER MOUNTPOINT
Raid6 2.82T 814G 104K /Raid6
Raid6/BACKUPS 57.9G 814G 57.9G /Raid6/BACKUPS
Raid6/CrashPlanBackUp 96K 814G 96K /Raid6/CrashPlanBackUp
Raid6/ISOLIB 98.4G 814G 98.4G /Raid6/ISOLIB
Raid6/Public 610G 814G 610G /Raid6/Public
Raid6/USRDATA 33.7G 814G 33.7G /Raid6/USRDATA
Raid6/jails 1.41G 814G 112K /Raid6/jails
Raid6/jails/.warden-template-VirtualBox-4.3.12 675M 814G 675M /Raid6/jails/.warden-template-VirtualBox-4.3.12
Raid6/jails/.warden-template-pluginjail 452M 814G 452M /Raid6/jails/.warden-template-pluginjail
Raid6/jails/crashplan_1 322M 814G 719M /Raid6/jails/crashplan_1
Raid6/nfsvhd 499G 814G 499G /Raid6/nfsvhd
Raid6/xscsi 1.55T 1.60T 757G - the "Holy Grail" to be saved and transferred to the other FreeNAS :(
freenas-boot 2.62G 25.2G 288K none
freenas-boot/.system 2.82M 25.2G 368K legacy
freenas-boot/.system/configs-5ece5c906a8f4df886779fae5cade8a5 400K 25.2G 400K legacy
freenas-boot/.system/cores 1.09M 25.2G 1.09M legacy
freenas-boot/.system/rrd-5ece5c906a8f4df886779fae5cade8a5 288K 25.2G 288K legacy
freenas-boot/.system/samba4 432K 25.2G 432K legacy
freenas-boot/.system/syslog-5ece5c906a8f4df886779fae5cade8a5 288K 25.2G 288K legacy
freenas-boot/ROOT 2.58G 25.2G 288K none
freenas-boot/ROOT/FreeNAS-8863f903d550e9d8a1e9f8c73ae9b4f0 1.71G 25.2G 878M /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201604041648 887M 25.2G 881M /
freenas-boot/ROOT/Initial-Install 256K 25.2G 873M legacy
freenas-boot/ROOT/default 232K 25.2G 873M legacy
freenas-boot/grub 26.7M 25.2G 8.68M legacy
======
* Raid6/xscsi 1.55T 1.60T 757G - it's not a file, it's a zvol extent :(

Here is the major question: how do I import/transfer that "xscsi" zvol extent to the second, healthy FreeNAS server, so I can connect it back to the XenServer hypervisor and get my virtual machines' data back? This is very critical to me; about 5 years of team work could be lost... Please help me avoid that tragedy.

P.S. Sorry for my English, it's not my native language.
 

Attachments

  • IMG_20160406_111244.jpg (268.1 KB)
  • IMG_20160407_210909.jpg (254 KB)

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Well, if it's mounted, then you can copy the data? Or did I miss something?

But first wrong thing you've done: no backup of 5 years of work. There's no excuse at all...

Second wrong thing you've maybe done: did you use hardware RAID?
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
Thanks for the reply. About backups - it's a long story; we had just started the migration process. What a lucky day... the previous one was wiped, freshly reinstalled, and prepared for migrating the data back. As I said, "I'm sooo lucky :oops:"...
Some more info: on the freshly installed donor I was unable to import the zpool correctly through the WebGUI; the NAS crash-reboots every time I try to run the import task. So, as I said earlier, I was able to mount it only in read-only mode.
The other question: is it possible to mount it, or do something else, to make Raid6/xscsi available through the iSCSI protocol? Because this is the first time I've been in a situation like this...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Wow... well, you do have a mess on your hands to say the least. Can you provide a debug file from your system?

I will say, given that the spacemap is damaged, it's possible that whatever data was in use at the moment of the crash will be unrecoverable. For example, if your VMs were running via iSCSI when the crash occurred, you may not be able to get all of the data off of the zvol. :/
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
I have an idea, but I do not know how to do it...
Emmm - a debug file from which of them - the "dead one" (the 2.5" HDD could still be alive) or the Frankenstein? This is my first experience with ZFS and FreeNAS.
Right now the zpool is in read-only mode on the "Frankenstein" (see the earlier description).

I need some instructions on how to do that... :oops:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Debug is obtained by going to System -> Advanced -> Save Debug
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
I've also got an old backup of the config: csl2101-FreeNAS-9.3-STABLE-201601181840-20160130001430.db - could it be helpful in this situation? I tried to load it on a fresh installation of FreeNAS and got the same "Fatal Trap 12" as on the "first dead" box.
Here is a debug from the "Frankenstein".
For some luck I did a full dd image of the 2.5" system HDD that the first installation lived on...
 

Attachments

  • debug-freenas-20160407150330..tgz (235.4 KB)

rs225

Guru
Joined
Jun 28, 2014
Messages
878
It sounds like what you might want to do is use dd to copy the zvol from the read-only pool to a zvol on another pool, or even to an actual physical disk drive.

Comments?
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
Comments?
Do you mean copy the whole 3.6TB to another NAS?
I thought I'd try to somehow mount "Raid6/xscsi" and then copy my VMs out of it to a safer place, but I do not know how to do that... If you have any ideas and instructions for that "challenge", it could save me from a lot of punches and headaches...
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
One more thought: is it possible to partially load a backup config?

As for results so far: "zpool import Raid6" causes a system reboot, same as trying it from the WebGUI... a very sad day...
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I thought I'd try to somehow mount "Raid6/xscsi"

But you said it's already mounted?!

Please answer with yes or no: is the pool currently mounted (even in read-only mode)?
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
I meant that the zpool is mounted/imported:
====
~# zpool status
pool: Raid6
state: ONLINE

scan: scrub repaired 0 in 11h31m with 0 errors on Sun Feb 28 00:33:09 2016
config:

NAME STATE READ WRITE CKSUM
Raid6 ONLINE 0 0 0
gptid/c6bdcc39-81b4-11e5-9c69-00304832be56 ONLINE 0 0 0

errors: No known data errors

pool: freenas-boot
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: none requested
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
da0p2 ONLINE 0 0 0

errors: No known data errors
====

and

====
~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
Raid6 2.82T 814G 104K /Raid6
Raid6/BACKUPS 57.9G 814G 57.9G /Raid6/BACKUPS
Raid6/CrashPlanBackUp 96K 814G 96K /Raid6/CrashPlanBackUp
Raid6/ISOLIB 98.4G 814G 98.4G /Raid6/ISOLIB
Raid6/Public 610G 814G 610G /Raid6/Public
Raid6/USRDATA 33.7G 814G 33.7G /Raid6/USRDATA
Raid6/jails 1.41G 814G 112K /Raid6/jails
Raid6/jails/.warden-template-VirtualBox-4.3.12 675M 814G 675M /Raid6/jails/.warden-template-VirtualBox-4.3.12
Raid6/jails/.warden-template-pluginjail 452M 814G 452M /Raid6/jails/.warden-template-pluginjail
Raid6/jails/crashplan_1 322M 814G 719M /Raid6/jails/crashplan_1
Raid6/nfsvhd 499G 814G 499G /Raid6/nfsvhd
Raid6/xscsi 1.55T 1.60T 757G -
freenas-boot 2.62G 25.2G 288K none
freenas-boot/.system 4.45M 25.2G 1.90M legacy
freenas-boot/.system/configs-5ece5c906a8f4df886779fae5cade8a5 496K 25.2G 496K legacy
freenas-boot/.system/cores 1.09M 25.2G 1.09M legacy
freenas-boot/.system/rrd-5ece5c906a8f4df886779fae5cade8a5 288K 25.2G 288K legacy
freenas-boot/.system/samba4 432K 25.2G 432K legacy
freenas-boot/.system/syslog-5ece5c906a8f4df886779fae5cade8a5 288K 25.2G 288K legacy
freenas-boot/ROOT 2.58G 25.2G 288K none
freenas-boot/ROOT/FreeNAS-8863f903d550e9d8a1e9f8c73ae9b4f0 1.71G 25.2G 879M /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201604041648 887M 25.2G 881M /
freenas-boot/ROOT/Initial-Install 256K 25.2G 873M legacy
freenas-boot/ROOT/default 232K 25.2G 873M legacy
freenas-boot/grub 26.7M 25.2G 8.68M legacy
====

And so right now the "Frankenstein" FreeNAS is configured mostly like the previous, dead one...
The main task is to try to mount it and publish it as an iSCSI target...
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
If you want to image the zvol via iSCSI, that might work. But if you want to mount it remotely with iSCSI and then copy the files inside of it, you might have a problem. Will the client system allow access in read-only mode? Maybe.

The Raid6 pool seems to be using hardware RAID (as a single-drive vdev), which is a mistake. ZFS should do raidz2 itself, never sit on top of hardware RAID. That requires the controller to present the drives in JBOD mode, or as single-drive volumes in the worst case. Somebody else might be able to explain how.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You are running your zpool on a hardware RAID. This is almost certainly the whole reason you are in the situation you are in. Write hole ftw. To add to it, you cannot monitor SMART. Every one of your disks may be in perfect health, or dying/dead and you have no way of knowing the difference.

What I'd do is ignore the UI and try to do some ZFS replication from the CLI to another box, or do a dd copy to another box via a CIFS/NFS share *from* the other machine. There's no telling if either of those will succeed. Since your zpool is obviously damaged, there's no telling how far you are going to get with touching data in the zpool.
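A rough sketch of the replication idea - and note this only works if a snapshot of the zvol already exists, because a read-only pool won't let you create a new one. The hostname and destination pool here are just placeholders:

====
# list any surviving snapshots of the zvol
zfs list -t snapshot -r Raid6/xscsi
# stream one to the healthy box (hostname and destination pool are examples)
zfs send Raid6/xscsi@somesnap | ssh root@healthybox zfs recv tank/xscsi-rescue
====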

If those 2 ideas don't work.. I think it's a lost cause unless you are going to start spending large sums of money (you know, like $10k+) to get your data back.
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
Here is some good news... I've mounted most of the mount points and rescued some data: the NFS share (not sure about consistency, some VHDs might be broken) plus some SMB/CIFS (some DB backup files were saved)....
FACEPALM to me... damn "/":
zfs mount Raid6/Public - that's correct; "zfs mount /Raid6/Public /mnt" - that is not correct....
zfs mount Raid6/BACKUPS etc.
But the main quest is still ahead - mounting the "iSCSI zvol":
:(
Raid6/xscsi 1.55T 1.60T 757G -

=====
[root@] /Raid6/Public# zfs mount Raid6/xscsi
cannot open 'Raid6/xscsi': operation not applicable to datasets of this type
====
It seems I'm doing something wrong...

I got some ideas from my Russian colleagues...
Try to fetch the iSCSI config from the old machine, drop it into the "Frankenstein" as /etc/ctl.conf, and restart ctld with "service ctld onerestart"... or something like that... not sure...
Any ideas on how to tweak that thing?
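Something like this minimal /etc/ctl.conf is what I have in mind - the IQN is just made up, and I'm not sure the rest is right either:

====
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}

target iqn.2016-04.local.frankenstein:xscsi {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/Raid6/xscsi
    }
}
====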

I'm going to take some rest... that was plenty of stress for me...
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
How big is xscsi? If it is less than 3TB, then get an extra 3TB hard drive. Connect it to the system. Suppose it is /dev/ada2. (Make sure!)

dd if=/dev/zvol/Raid6/xscsi of=/dev/ada2 bs=64k conv=sparse

When done, disconnect /dev/ada2 and connect it somewhere useful to recover the contents.
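Before running the dd, one way to make sure ada2 really is the spare disk, and that it's big enough for the zvol:

====
diskinfo -v /dev/ada2           # media size and sector size of the target disk
zfs get volsize Raid6/xscsi     # size of the source zvol
====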
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You don't get it... you cannot "mount" zvols the way you mount datasets. That's normal, because a zvol is block storage. You can access a zvol with tools like dd. That's why I said you should dd the zvol.

Look at my output on my FreeNAS Mini:

[root@mini] ~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
freenas-boot 7.75G 6.67G 31K none
freenas-boot/ROOT 7.30G 6.67G 31K none
freenas-boot/ROOT/9.10-STABLE-201603252134 7.30G 6.67G 1019M /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511040813 100K 6.67G 1.00G /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648 86K 6.67G 1.00G /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201512121950 214K 6.67G 1.01G /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201601181840 162K 6.67G 1.01G /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201602031011 327K 6.67G 1.01G /
freenas-boot/grub 427M 6.67G 10.3M legacy

tank 2.49T 1.02T 185G /mnt/tank
tank/VMs-NFS 140G 1.02T 140G /mnt/tank/VMs-NFS
tank/iscsi 2.17T 2.51T 621G -

Totally normal. Now stop trying to tell ZFS to do the wrong thing, use dd and get your data back. ;)
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
Good day, comrades.
Sorry for the long delay... but I'm still alive :) ...
Some news about my problem: I've got the zpool imported, but in read-only mode... and I initiated an iSCSI connection from the hypervisor to it, but something goes wrong and I can't copy any VM from it.
So here are some more questions:
- How do I properly copy that stuff to another server (FreeNAS) without damaging the data on the new one?
- Is there an option to copy/convert my iSCSI extent to a file and transfer it somewhere?
"Kowalski - need options!" (c)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Good day, comrades.
Sorry for the long delay... but I'm still alive :) ...
Some news about my problem: I've got the zpool imported, but in read-only mode... and I initiated an iSCSI connection from the hypervisor to it, but something goes wrong and I can't copy any VM from it.
So here are some more questions:
- How do I properly copy that stuff to another server (FreeNAS) without damaging the data on the new one?
- Is there an option to copy/convert my iSCSI extent to a file and transfer it somewhere?
"Kowalski - need options!" (c)

Yep.. I was 2 weeks ahead of you...

What I'd do is ignore the UI and try to do some ZFS replication from the CLI to another box, or do a dd copy to another box via a CIFS/NFS share *from* the other machine. There's no telling if either of those will succeed. Since your zpool is obviously damaged, there's no telling how far you are going to get with touching data in the zpool.

;)
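For the copy/convert-to-a-file option, something roughly like this from the CLI might work - the destination path and filename are just examples:

====
# conv=noerror keeps dd going past bad reads on the damaged pool
dd if=/dev/zvol/Raid6/xscsi of=/mnt/tank/xscsi.img bs=1m conv=sparse,noerror
====

Same caveat as before: there's no telling if it will actually get through.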
 

rewwer3

Dabbler
Joined
Apr 7, 2016
Messages
16
Good day, comrades...
Got some progress...
My Frankenstein has grown a bit: two 2TiB HDDs were added for recovery purposes, connected to the only two available SATA ports...
I tried to dd my Raid6/xscsi... something went wrong... a size mismatch etc., and as a result it did not work...
One more question:
How do I properly clone or copy Raid6/xscsi to the new zpool I've just created from the two spare HDDs? (A sketch of what I'm about to try follows after the listing below.)

zfs list
NAME USED AVAIL REFER MOUNTPOINT
Raid6 2.82T 814G 104K /mnt/Raid6
Raid6/BACKUPS 57.9G 814G 57.9G /mnt/Raid6/BACKUPS
Raid6/CrashPlanBackUp 96K 814G 96K /mnt/Raid6/CrashPlanBackUp
Raid6/ISOLIB 98.4G 814G 98.4G /mnt/Raid6/ISOLIB
Raid6/Public 610G 814G 610G /mnt/Raid6/Public
Raid6/USRDATA 33.7G 814G 33.7G /mnt/Raid6/USRDATA
Raid6/jails 1.41G 814G 112K /mnt/Raid6/jails
Raid6/jails/.warden-template-VirtualBox-4.3.12 675M 814G 675M /mnt/Raid6/jails/.warden-template-VirtualBox-4.3.12
Raid6/jails/.warden-template-pluginjail 452M 814G 452M /mnt/Raid6/jails/.warden-template-pluginjail
Raid6/jails/crashplan_1 322M 814G 719M /mnt/Raid6/jails/crashplan_1
Raid6/nfsvhd 499G 814G 499G /mnt/Raid6/nfsvhd
Raid6/xscsi 1.55T 1.60T 757G - "Original to copy from - in readonly mode - zpool import -o readonly=on -R /mnt Raid6"
Raid6a 2.03T 1.48T 96K /mnt/Raid6a "New pool"
Raid6a/.system 736K 1.48T 104K legacy
Raid6a/.system/configs-4c6211b89a3046dfbe28e409c4a02537 96K 1.48T 96K legacy
Raid6a/.system/cores 96K 1.48T 96K legacy
Raid6a/.system/rrd-4c6211b89a3046dfbe28e409c4a02537 96K 1.48T 96K legacy
Raid6a/.system/samba4 200K 1.48T 200K legacy
Raid6a/.system/syslog-4c6211b89a3046dfbe28e409c4a02537 144K 1.48T 144K legacy
Raid6a/xscsia 2.03T 3.51T 64K - "Destination to copy to"

- It seems this is my last chance to recover that damn LVM volume - Citrix XenServer stores its VMs in it...
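From what I've gathered in this thread, here is roughly what I plan to try next - please correct me if it's wrong:

====
# the destination zvol's volsize must be >= the source's; check first
zfs get volsize Raid6/xscsi Raid6a/xscsia
# raw block copy; conv=noerror keeps going past read errors on the damaged pool
dd if=/dev/zvol/Raid6/xscsi of=/dev/zvol/Raid6a/xscsia bs=1m conv=sparse,noerror
====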
 