Lost all personal data in my ZFS?

Status
Not open for further replies.

Mr Prince

Explorer
Joined
Oct 27, 2014
Messages
60
Hi,

I have FreeNAS 9.3 running with 2x 2TB WD REDs.
Yesterday it stopped working with a strange error which, according to another thread, is due to USB stick corruption. So I installed FreeNAS (both 9.3 and 9.10) on a new USB stick. It boots correctly, but after "IMPORT VOLUME" (of the old ZFS HDD) only my jails folders are there, and they are EMPTY; all my other personal folders are missing. The volume seems to be almost empty (but not corrupted)!!!!

Why did this happen? How can I check the files inside my volume? HELP ME PLEASE!!

Thanks.
 

Attachments

  • Immagine.png (134.9 KB)

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
Please post the output of zpool import in [CODE] tags.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Seems to have imported fine, but there is no data.

I guess you don't have a back up?
 

Mr Prince

Explorer
Joined
Oct 27, 2014
Messages
60
Please post the output of zpool import in [CODE] tags.

CODE BEFORE IMPORT
Code:
[root@freenas ~]# zpool import
   pool: NAS_Degio
     id: 5924914207795353833
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        NAS_Degio                                     ONLINE
          gptid/84558c5d-1428-11e5-996c-00016cfbeff3  ONLINE


AFTER the automatic IMPORT VOLUME from the WebGUI, the shell says there are no more volumes to import.


Seems to have imported fine, but there is no data. I guess you don't have a back up?
My backup is 2 months old, and I have some very important files stored in these last weeks :(
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
Hmm, you said you have 2 hard drives, but zpool import shows that pool only has 1 drive.
What is the output of these commands?
Code:
camcontrol devlist
zpool status
glabel list
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Hope it wasn't a two disk stripe.
 

Mr Prince

Explorer
Joined
Oct 27, 2014
Messages
60
First of all, for this recovery experiment I'm working on a more powerful virtual machine so I can manage different versions of FreeNAS.

About the HDDs, that was my mistake: this pool has only 1 WD RED HDD; the other pool (used for work) has the 2 WD REDs in RAID1.

But I don't think RAID is involved in this problem. Could it be a ZFS pool upgrade problem?

Meanwhile, here is the output of the suggested commands.

Code:
[root@freenas ~]# camcontrol devlist																								
<Virtual HD 1.1.1>				 at scbus0 target 0 lun 0 (ada0,pass0)															
<Msft Virtual Disk 1.0>			at scbus2 target 0 lun 0 (da0,pass1)   


Code:
[root@freenas ~]# zpool status																									 
  pool: NAS_Degio																												   
state: ONLINE																													 
status: Some supported features are not enabled on the pool. The pool can														   
		still be used, but some features are unavailable.																		   
action: Enable all features using 'zpool upgrade'. Once this is done,															   
		the pool may no longer be accessible by software that does not support													 
		the features. See zpool-features(7) for details.																			
  scan: scrub repaired 0 in 1h35m with 0 errors on Sat Oct 22 16:35:48 2016														 
config:																															 
																																	
		NAME										  STATE	 READ WRITE CKSUM													
		NAS_Degio									 ONLINE	   0	 0	 0													
		  gptid/84558c5d-1428-11e5-996c-00016cfbeff3  ONLINE	   0	 0	 0
																														 
errors: No known data errors																										
																																	
  pool: freenas-boot																												
state: ONLINE																													 
  scan: none requested																											 
config:																															 
																																	
		NAME		STATE	 READ WRITE CKSUM																					 
		freenas-boot  ONLINE	   0	 0	 0																					
		  da0p2	 ONLINE	   0	 0	 0																					 
																																	
errors: No known data errors


Code:
[root@freenas ~]# glabel list																									   
Geom name: ada0p2																												   
Providers:																														 
1. Name: gptid/84558c5d-1428-11e5-996c-00016cfbeff3																				 
   Mediasize: 1998251364352 (1.8T)																								 
   Sectorsize: 512																												 
   Stripesize: 4096																												 
   Stripeoffset: 0																												 
   Mode: r1w1e1																													 
   secoffset: 0																													 
   offset: 0																														
   seclength: 3902834696																											
   length: 1998251364352																											
   index: 0																														 
Consumers:																														 
1. Name: ada0p2																													 
   Mediasize: 1998251364352 (1.8T)																								 
   Sectorsize: 512																												 
   Stripesize: 4096																												 
   Stripeoffset: 0																												 
   Mode: r1w1e2																													 
																																	
Geom name: da0p1																													
Providers:																														 
1. Name: gptid/cfa0f214-a366-11e6-bf2b-00155d01080f																				 
   Mediasize: 524288 (512K)																										 
   Sectorsize: 512																												 
   Stripesize: 4096																												 
   Stripeoffset: 0																												 
   Mode: r0w0e0																													 
   secoffset: 0																													 
   offset: 0																														
   seclength: 1024																												 
   length: 524288																												   
   index: 0																														 
Consumers:																														 
1. Name: da0p1																													 
   Mediasize: 524288 (512K)																										 
   Sectorsize: 512																												 
   Stripesize: 4096																												 
   Stripeoffset: 0																												 
   Mode: r0w0e0	
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
I would check that all datasets are mounted. If they aren't, you might just see an empty directory.
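For example (a sketch; the pool name `NAS_Degio` is taken from your earlier output):

```shell
# Show each dataset with its mount state and mountpoint;
# "mounted" should read "yes" for anything you expect to browse under /mnt
zfs list -o name,mounted,mountpoint -r NAS_Degio

# Mount any filesystem dataset that isn't mounted
# (the legacy .system datasets are handled by the OS)
zfs mount -a
```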
 

Mr Prince

Explorer
Joined
Oct 27, 2014
Messages
60
I would check that all datasets are mounted. If they aren't, you might just see an empty directory.
Sorry, can you explain what that means? How can I check that all datasets are correctly mounted? Thanks!!
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
zfs list
 

Mr Prince

Explorer
Joined
Oct 27, 2014
Messages
60
This is what happens, and it's the same as what I see under STORAGE > VOLUME in the WebGUI.

Code:
[root@freenas ~]# zfs list																										
NAME														 USED  AVAIL  REFER  MOUNTPOINT										
NAS_Degio												   1.48G  1.75T   724K  /mnt/NAS_Degio									
NAS_Degio/.system										   3.79M  1.75T   880K  legacy											
NAS_Degio/.system/configs-a75e42d926e4418a8e21357ce29c963c   144K  1.75T   144K  legacy											
NAS_Degio/.system/cores									  776K  1.75T   776K  legacy											
NAS_Degio/.system/rrd-0fcc2e5d27cf4533bbf15d7176de289c	   144K  1.75T   144K  legacy											
NAS_Degio/.system/rrd-a75e42d926e4418a8e21357ce29c963c	   144K  1.75T   144K  legacy											
NAS_Degio/.system/rrd-e3a01ff88d4540aa946ce3669c642f8d	   144K  1.75T   144K  legacy											
NAS_Degio/.system/rrd-f6da24756e2f4dee86c3a9c9fb75829f	   144K  1.75T   144K  legacy											
NAS_Degio/.system/samba4									 400K  1.75T   400K  legacy											
NAS_Degio/.system/syslog-0fcc2e5d27cf4533bbf15d7176de289c	144K  1.75T   144K  legacy											
NAS_Degio/.system/syslog-a75e42d926e4418a8e21357ce29c963c	528K  1.75T   528K  legacy											
NAS_Degio/.system/syslog-e3a01ff88d4540aa946ce3669c642f8d	284K  1.75T   284K  legacy											
NAS_Degio/.system/syslog-f6da24756e2f4dee86c3a9c9fb75829f	144K  1.75T   144K  legacy											
NAS_Degio/Z_DOWNLOAD										 168K  1.75T   168K  /mnt/NAS_Degio/Z_DOWNLOAD						
NAS_Degio/Z_FILM											 160K  1.75T   160K  /mnt/NAS_Degio/Z_FILM							
NAS_Degio/Z_OWN_ALE										  160K  1.75T   160K  /mnt/NAS_Degio/Z_OWN_ALE						  
NAS_Degio/Z_SERIETV										  168K  1.75T   168K  /mnt/NAS_Degio/Z_SERIETV						  
NAS_Degio/Z_TORRENT										  144K  1.75T   144K  /mnt/NAS_Degio/Z_TORRENT						  
NAS_Degio/jails											 1.41G  1.75T   200K  /mnt/NAS_Degio/jails							  
NAS_Degio/jails/.warden-template-pluginjail				  721M  1.75T  3.06M  /mnt/NAS_Degio/jails/.warden-template-pluginjail  
NAS_Degio/jails/.warden-template-pluginjail-9.2-x64          721M  1.75T  3.06M  /mnt/NAS_Degio/jails/.warden-template-pluginjail-9.2-x64
NAS_Degio/jails/.warden-template-pluginjail-open-x86         200K  1.75T   200K  /mnt/NAS_Degio/jails/.warden-template-pluginjail-open-x86
NAS_Degio/jails/owncloud_1								   792K  1.75T  3.46M  /mnt/NAS_Degio/jails/owncloud_1					
NAS_Degio/jails/plexmediaserver_1						   3.12M  1.75T  5.82M  /mnt/NAS_Degio/jails/plexmediaserver_1			
NAS_Degio/jails/sonarr_1									 680K  1.75T  3.37M  /mnt/NAS_Degio/jails/sonarr_1					
NAS_Degio/jails/transmission_1							   512K  1.75T  3.21M  /mnt/NAS_Degio/jails/transmission_1				
NAS_Degio/jails_2											144K  1.75T   144K  /mnt/NAS_Degio/jails_2							
NAS_Degio/syslog											 144K  1.75T   144K  /mnt/NAS_Degio/syslog							
freenas-boot												1.04G  6.65G   144K  none											  
freenas-boot/ROOT										   1.03G  6.65G   144K  none											  
freenas-boot/ROOT/default								   1.03G  6.65G  1.03G  legacy											
freenas-boot/grub										   8.50M  6.65G  8.50M  legacy											
[root@freenas ~]#


In addition, I had more personal folders (NAS_Degio/myfolder/ etc.) that don't appear in this list.

All the folders seem to be empty. How can this be possible?!?!
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
It doesn't look like you have improperly mounted datasets. What does zpool history return?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
In addition, I had more personal folders (NAS_Degio/myfolder/ etc.) that don't appear in this list.

All the folders seem to be empty. How can this be possible?!?!
It's possible because this isn't a list of folders, it's a listing of datasets. You likely have your files and folders in the root NAS_Degio dataset, which isn't a great idea.

What's the output of ls /mnt/NAS_Degio?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You likely have your files and folders in the root NAS_Degio dataset,
Folders, perhaps. Files, not too many--the pool only has 1.48G used, and 1.41G of that is in jails. There's nothing there. No idea what happened to it, but it isn't there now.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
It's possible because this isn't a list of folders, it's a listing of datasets. You likely have your files and folders in the root NAS_Degio dataset, which isn't a great idea.

Yes, maybe his problem is that he needs to unmount all the other datasets. His data was accidentally stored in the top-level dataset the whole time.
 

Mr Prince

Explorer
Joined
Oct 27, 2014
Messages
60
It doesn't look like you have improperly mounted datasets. What does zpool history return?

The zpool history is too long to copy from the shell. How can I copy all those lines from the Shell tab?
These are the last lines I can copy:

Code:
2016-11-05.08:31:13 zpool export -f NAS_Degio																					   
2016-11-05.08:31:56 zpool import -f -R /mnt 5924914207795353833																	 
2016-11-05.08:32:00 zfs inherit -r mountpoint NAS_Degio																			 
2016-11-05.08:32:00 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
2016-11-05.08:32:00 zfs set aclmode=passthrough NAS_Degio																		   
2016-11-05.08:32:02 zfs set aclinherit=passthrough NAS_Degio																		
2016-11-05.08:32:06 zfs set mountpoint=legacy NAS_Degio/.system																	 
2016-11-05.08:32:24 zpool export -f NAS_Degio																					   
2016-11-05.10:36:58 zpool import -f -R /mnt 5924914207795353833																	 
2016-11-05.10:37:08 zfs inherit -r mountpoint NAS_Degio																			 
2016-11-05.10:37:08 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
2016-11-05.10:37:09 zfs set aclmode=passthrough NAS_Degio																		   
2016-11-05.10:37:14 zfs set aclinherit=passthrough NAS_Degio																		
2016-11-05.10:37:29 zfs set mountpoint=legacy NAS_Degio/.system																	 
2016-11-05.10:37:30 zfs create -o mountpoint=legacy NAS_Degio/.system/syslog-e3a01ff88d4540aa946ce3669c642f8d					   
2016-11-05.10:37:35 zfs create -o mountpoint=legacy NAS_Degio/.system/rrd-e3a01ff88d4540aa946ce3669c642f8d						 
2016-11-05.11:46:38 zpool import -f -R /mnt 5924914207795353833																	 
2016-11-05.11:46:43 zfs inherit -r mountpoint NAS_Degio																			 
2016-11-05.11:46:43 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
2016-11-05.11:46:43 zfs set aclmode=passthrough NAS_Degio																		   
2016-11-05.11:46:45 zfs set aclinherit=passthrough NAS_Degio																		
2016-11-05.11:46:48 zfs set mountpoint=legacy NAS_Degio/.system																	 
2016-11-05.11:48:30 zpool export -f NAS_Degio																					   
2016-11-05.11:49:13 zpool import NAS_Degio																						 
2016-11-05.11:53:33 zpool export NAS_Degio																						 
2016-11-05.11:57:19 zpool import -f -R /mnt 5924914207795353833																	 
2016-11-05.11:57:24 zfs inherit -r mountpoint NAS_Degio																			 
2016-11-05.11:57:24 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
2016-11-05.11:57:24 zfs set aclmode=passthrough NAS_Degio																		   
2016-11-05.11:57:26 zfs set aclinherit=passthrough NAS_Degio																		
2016-11-05.11:57:31 zfs set mountpoint=legacy NAS_Degio/.system																	 
2016-11-06.06:23:40 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 5924914207795353833					
2016-11-06.06:23:40 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
2016-11-08.14:42:36 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 5924914207795353833					
2016-11-08.14:42:36 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
2016-11-08.14:47:45 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 5924914207795353833					
2016-11-08.14:47:45 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
2016-11-09.13:49:09 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 5924914207795353833					
2016-11-09.13:49:09 zpool set cachefile=/data/zfs/zpool.cache NAS_Degio															 
																																	
History for 'freenas-boot':																										 
2016-11-05.09:35:05 zpool create -f -o cachefile=/tmp/zpool.cache -o version=28 -O mountpoint=none -O atime=off -O canmount=off freenas-boot ada0p2
2016-11-05.09:35:08 zfs create -o canmount=off freenas-boot/ROOT																	
2016-11-05.09:35:08 zfs create -o mountpoint=legacy freenas-boot/ROOT/default													   
2016-11-05.09:35:13 zfs create -o mountpoint=legacy freenas-boot/grub															   
2016-11-05.09:36:41 zpool set bootfs=freenas-boot/ROOT/default freenas-boot														 
2016-11-05.09:36:41 zpool set cachefile=/boot/zfs/rpool.cache freenas-boot	 

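(As an aside, the full history could be written to a file instead of copied line by line from the Shell tab — a sketch, using the pool name from above:)

```shell
# Dump the complete pool history to a file, then page through it
# or pull it off the box over SMB/SCP
zpool history NAS_Degio > /tmp/NAS_Degio_history.txt
less /tmp/NAS_Degio_history.txt
```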

It's possible because this isn't a list of folders, it's a listing of datasets. You likely have your files and folders in the root NAS_Degio dataset, which isn't a great idea.

what's the output of ls /mnt/NAS_Degio?

Sorry, but where should I store my personal folders and data? This is the output; no trace of "myfolder":
Code:
																															  
[root@freenas ~]# ls /mnt/NAS_Degio																								 
.AppleDB		Z_FILM		  Z_SERIETV			 jails_2															 
Z_DOWNLOAD	  Z_OWN_ALE	   Z_TORRENT	   jails		   syslog


Yes, maybe his problem is he needs to unmount all the other datasets. His data was accidentally stored in the top-level the whole time.

Yes, my personal data was stored in /mnt/NAS_Degio/myfolder1/, /mnt/NAS_Degio/myfolder2/, /mnt/NAS_Degio/myfolder3/, etc.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Yes, maybe his problem is he needs to unmount all the other datasets. His data was accidentally stored in the top-level the whole time.
That would not be consistent with what zfs list shows. Again, that shows a total of 1.48 GB used, 1.41 GB of which is in the jails dataset. There's simply nothing else (of any size) there. If his data were in directories that were overshadowed by datasets, the space would still show as used in the pool. I can't say what's happened to it, but it isn't in that pool any more.
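A quick way to confirm that from the shell (a sketch using the pool name from earlier in the thread; these commands only read accounting, they change nothing):

```shell
# Full space accounting per dataset, including space held by snapshots
# and child datasets
zfs list -o space -r NAS_Degio

# Check whether any snapshots still reference the missing files
zfs list -t snapshot -r NAS_Degio
```

If the data had merely been hidden behind a dataset mounted over it, the USED column for the pool would still account for it; a pool-wide total of 1.48 GB means the blocks are genuinely gone (or on a different disk).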
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
That would not be consistent with what zfs list shows.
You're right; but that doesn't leave any 'good' explanation.

Could the USB failure or subsequent new OS do it?

edit: I notice the drive device names indicate a virtual machine. Wrong drives? Snapshots at the hypervisor level?
 

Mr Prince

Explorer
Joined
Oct 27, 2014
Messages
60
Click "system" -> "advanced" -> "save debug" and post the debug tarball here.

In Debug.tgz there are lots of folders/files; which should I post? If you prefer, HERE I uploaded the whole debug zip file.

You're right; but that doesn't leave any 'good' explanation.
Could the USB failure or subsequent new OS do it?
edit: I notice the drive device names indicate a virtual machine. Wrong drives? Snapshots at the hypervisor level?

I also can't find a GOOD explanation!! I'm worried that it will happen again in the future!!
P.S. About the virtual machine: as already said, I'm only now using a virtual machine on my main PC to run all these tests, for convenience. But the SAME problem was first noticed on my NAS after the USB failure -> bought a new USB stick -> reinstalled FreeNAS on it.
 