SOLVED Dataset empty after reboot

Status
Not open for further replies.

TToaster

Cadet
Joined
Mar 3, 2016
Messages
4
Hi,

I added a new 4x4TB RAIDZ volume (I know RAIDZ2 is better, and I will change it in a few weeks) to my setup and began filling it with data from another FreeNAS box with rsync. Everything seemed to work fine until I shut the server down to physically move it.

After the restart, one of my datasets, which previously held about 3TB of data, is empty. The strange thing is that even though the data seems to be gone, the space has not become available again. So I am wondering if I messed up somewhere and the data is just not being displayed anymore.

Here are some pics/copies:
pool.JPG

It shows 37% used, but the datasets only add up to about 13% (missing the 3TB that were in the series dataset before).

Code:
[root@Nas] /mnt# zfs list
NAME                                                           USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                                  2.05G  5.64G    31K  none
freenas-boot/ROOT                                             2.01G  5.64G    25K  none
freenas-boot/ROOT/9.10-STABLE-201604181743                    11.2M  5.64G   505M  /
freenas-boot/ROOT/9.10-STABLE-201604261518                    11.1M  5.64G   511M  /
freenas-boot/ROOT/9.10-STABLE-201605021851                    1.96G  5.64G   532M  /
freenas-boot/ROOT/FreeNAS-5f91faf7204d20c5a639d34396e74b2b    10.8M  5.64G   541M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201604150515             9.14M  5.64G   572M  /
freenas-boot/ROOT/Initial-Install                                1K  5.64G   512M  legacy
freenas-boot/ROOT/Pre-FreeNAS-9.3-STABLE-201602031011-791933     1K  5.64G   550M  legacy
freenas-boot/ROOT/default                                     9.14M  5.64G   559M  legacy
freenas-boot/grub                                             38.8M  5.64G  6.33M  legacy
pool1                                                         3.98T  6.23T  3.00T  /mnt/pool1
pool1/movies                                                   858G  6.23T   858G  /mnt/pool1/movies
pool1/pool1_datastore                                          145G  6.23T   145G  -
pool1/series                                                   174K  6.23T   174K  /mnt/pool1/series
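
For reference, pool-level and per-dataset usage can be cross-checked roughly like this (a sketch only, assuming the pool name pool1 from above):

Code:
[root@Nas] ~# zpool list pool1                  # raw pool capacity and allocation
[root@Nas] ~# zfs list -o space -r pool1        # per-dataset usage, split into data, snapshots and children
[root@Nas] ~# zfs list -t snapshot -r pool1     # snapshots can also hold on to space that looks "gone"

If the pool-level number is much larger than what the datasets add up to, the space is usually hiding in snapshots or in the root dataset itself.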


So I am wondering whether I should just destroy the whole volume and start over, or whether the data could still be around somewhere and I can just restore it.

Thanks for your help!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
After the restart, one of my datasets that before was filled with about 3TB of data is empty
Which dataset did you copy into? My guess is you used a folder under the root dataset, i.e. pool1/"something other than movies, pool1_datastore, and series".
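
A quick way to check (a minimal sketch; paths assume the layout shown above) is to compare the datasets ZFS knows about with the directories actually present under /mnt/pool1:

Code:
[root@Nas] ~# zfs list -r pool1        # datasets ZFS manages
[root@Nas] ~# ls -l /mnt/pool1         # everything visible under the root dataset

Any directory in the ls output that does not correspond to a dataset is a plain folder living inside pool1 itself.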
 

TToaster

Cadet
Joined
Mar 3, 2016
Messages
4
When I started copying the data, the transfer was rather slow with rsync, so I tried zfs send to see if it was faster. Since it was not, I copied everything with rsync.

Since I was able to access/modify the data from my CIFS share (/mnt/pool1/series), I thought everything went fine.

Here are basically all the commands I used since I created the volume:

On "Backup Server":
Code:
[root@HPNAS] ~# zfs snapshot Raidz/test@s1
[root@HPNAS] ~# zfs send Raidz/test@s1 | ssh 192.168.178.15 zfs receive -F pool1/test
[root@HPNAS] ~# zfs destroy Raidz/test@s1


Then, as root on the "Problem Server", I tested whether the transfer would be faster if I mounted a share from the backup box and ran rsync locally:
Code:
[root@Nas] ~# mount 192.168.178.32:/mnt/Raidz/series /mnt/series-mnt
<<rsync stuff as user>>
[root@Nas] ~# umount /mnt/series-mnt

Since the speed stayed the same no matter which method I used, I restarted the rsync transfer, after which everything seemed complete when I checked it via the CIFS share:
Code:
adminmartin@Nas:~ % rsync -avuP --no-perms --no-owner --no-group --progress adminmartin@192.168.178.32:/mnt/Raidz/series/ /mnt/pool1/series/


I would have just written it off as a screw-up and started copying/sorting everything again, but since something is still taking up 3TB of space, I still have hope that the data is somewhere on the server; I just have no idea where.

Code:
adminmartin@Nas:/mnt % du -sh /mnt/pool1
858G    /mnt/pool1
adminmartin@Nas:/mnt/pool1 % ls -l
total 111
drwxr-xr-x  4 adminmartin  adminmartin  4 May  5 04:10 ./
drwxr-xr-x  4 root  wheel  192 May  5 23:36 ../
drwxrwxr-x+ 14 adminmartin  adminmartin  192 Apr 27 15:59 movies/
drwxrwxr-x+  2 adminmartin  adminmartin  3 May  5 00:54 series/
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
If you do 'zfs unmount pool1/series' followed by 'cd /mnt/pool1/series', what happens? Is there anything in that directory?
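
Spelled out, that test looks roughly like this (a sketch; run it as root and remount afterwards):

Code:
[root@Nas] ~# zfs unmount pool1/series        # unmount the (empty) dataset
[root@Nas] ~# ls -l /mnt/pool1/series         # anything listed now lives in the parent dataset, hidden by the mount
[root@Nas] ~# zfs mount pool1/series          # put the mount back when done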
 

TToaster

Cadet
Joined
Mar 3, 2016
Messages
4
If you do 'zfs unmount pool1/series' followed by 'cd /mnt/pool1/series', what happens? Is there anything in that directory?

Code:
[root@Nas] /mnt/pool1# du -sh series/
3.0T    series/


:) Thanks a lot!! That saves me a lot of work.

Do I have to change something to prevent this from happening again?
Also, the overview in "Storage -> Volumes" still shows series as only 170KB, so what is the best way to fix this? Make a new dataset, move the files, and delete series?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Rather than mucking with ZFS mount points, just move the currently visible folder /mnt/pool1/series to /mnt/pool1/pool1-series. Then reboot and, using the CLI, move the contents of /mnt/pool1/pool1-series into /mnt/pool1/series.
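
In concrete commands, that could look something like this (a rough sketch; double-check the paths before running anything, and note that the final mv crosses dataset boundaries, so the data is actually copied and then removed rather than just renamed):

Code:
[root@Nas] ~# mv /mnt/pool1/series /mnt/pool1/pool1-series       # rename the plain folder that currently holds the data
[root@Nas] ~# reboot                                             # the series dataset remounts at /mnt/pool1/series
# after the reboot:
[root@Nas] ~# mv /mnt/pool1/pool1-series/* /mnt/pool1/series/    # move the files into the mounted dataset (watch out for dotfiles)
[root@Nas] ~# rmdir /mnt/pool1/pool1-series                      # clean up the now-empty folder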
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You have a subdirectory called series, and you have a dataset called series. For some reason, your data was put into the subdirectory, not the dataset. Once the dataset is mounted over that path, the contents of the subdirectory can't be seen. The best way to address it is what @depasseg suggests.
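
If you want to verify which of the two you are looking at, something along these lines should tell you (a sketch; both are standard ZFS/FreeBSD commands):

Code:
[root@Nas] ~# zfs get mounted,mountpoint pool1/series    # is the dataset mounted, and where should it be?
[root@Nas] ~# df -h /mnt/pool1/series                    # which filesystem does this path belong to right now?

If df reports pool1 rather than pool1/series, anything written to that path lands in the parent dataset and will be hidden again as soon as the child dataset is mounted over it.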
 

TToaster

Cadet
Joined
Mar 3, 2016
Messages
4
OK, thanks guys.

I moved the filled folder and, after the reboot, am now moving the files back into the dataset.
Looking at the web interface, the dataset is filling up as expected.
 