SOLVED: ZFS snapshot cloned and mounted wrong.

Status
Not open for further replies.

risho

Dabbler
Joined
May 21, 2016
Messages
18
So I started having this problem after the last update... It may be related to the update, or it may have been due to my own incompetence... I'm not sure.

Before the update I had my ZFS pool mounted at /mnt/stuff. After the update, most of my data was gone. I went into the snapshot tab and cloned a snapshot, since that was the only real option I could find, and luckily all of my data was still there. (Snapshots are amazing; this would have been an absolute disaster otherwise.) The problem is that the clone is now mounted as /mnt/stuff/clone_of_stuff... which breaks absolutely everything on my system that uses a file path: Plex, BitTorrent, Syncthing, etc., since the path changed.
I don't know what I need to do to get clone_of_stuff mounted as /mnt/stuff, and also make my system start taking snapshots of it instead of the broken original.
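For what it's worth, a first diagnostic step here would be to ask ZFS where everything is mounted and which snapshot the clone came from. This is a hedged sketch; the dataset names are the ones mentioned in this post, so adjust them to whatever `zfs list` actually shows on your system:

```shell
# Show every dataset in the pool and its mountpoint.
zfs list -o name,mountpoint -r stuff

# Show which snapshot the clone was created from (its "origin" property).
zfs get origin stuff/clone_of_stuff
```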

Hopefully that all makes sense...
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The simplest way to recover from a bad update is to choose the previous boot environment at startup. There have been many issues with the 9.10.1-U1 update, and I would suggest you roll back.

As to what you were trying to do, there are ways to do it, but at this point, I would roll back and see what it looks like then. Because what you explained isn't normal.
 

risho

Dabbler
Joined
May 21, 2016
Messages
18
Thanks for the tip, but I actually tried booting into a previous boot environment, and the problem persisted. /mnt/stuff is still broken and /mnt/stuff/clone_of_stuff is still the correct one, but mismounted. If I knew how to mount clone_of_stuff at /mnt/stuff so that it appeared correctly in the web interface, and could make the system automatically snapshot the clone instead of the broken volume, I would be fine... other than VirtualBox being broken, but that's a different issue which seems like it will be sorted in a future update.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Would you mind opening a bug? This shouldn't be happening. Before making any other changes, could you save a debug (using the GUI)?

Then provide the output of the following two commands in CODE tags.

zpool status

zfs list
 

risho

Dabbler
Joined
May 21, 2016
Messages
18
I wouldn't mind opening a bug, though I don't think I am competent enough to evaluate whether this problem was due to the update or just incidental to me doing something to mess it up on my own. I wouldn't want to contaminate your bug list with something that may be my own fault. I uploaded the debug if anyone cares to look at it.

Code:
zpool status
  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Mon Sep  5 03:46:24 2016
config:

   NAME  STATE  READ WRITE CKSUM
   freenas-boot  ONLINE  0  0  0
     gptid/1d41a34c-1e94-11e6-91ef-0cc47aaa636e  ONLINE  0  0  0

errors: No known data errors

  pool: stuff
state: ONLINE
  scan: scrub repaired 0 in 45h22m with 0 errors on Mon Sep 19 21:22:37 2016
config:

   NAME  STATE  READ WRITE CKSUM
   stuff  ONLINE  0  0  0
     raidz2-0  ONLINE  0  0  0
       gptid/cc5824f6-1e96-11e6-9767-0cc47aaa636e  ONLINE  0  0  0
       gptid/cd03b0a0-1e96-11e6-9767-0cc47aaa636e  ONLINE  0  0  0
       gptid/cf9a714f-1e96-11e6-9767-0cc47aaa636e  ONLINE  0  0  0
       gptid/b258aebd-73c1-11e6-8fd5-0cc47aaa636e  ONLINE  0  0  0
       gptid/978a2214-27c9-11e6-b5a9-0cc47aaa636e  ONLINE  0  0  0
       gptid/d97deba0-1e96-11e6-9767-0cc47aaa636e  ONLINE  0  0  0
       gptid/dc998ebe-1e96-11e6-9767-0cc47aaa636e  ONLINE  0  0  0

errors: No known data errors

Code:
zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
freenas-boot  1.94G  26.9G  31K  none
freenas-boot/ROOT  1.91G  26.9G  25K  none
freenas-boot/ROOT/9.10-STABLE-201605240427  45K  26.9G  491M  /
freenas-boot/ROOT/9.10-STABLE-201606072003  39K  26.9G  611M  /
freenas-boot/ROOT/9.10-STABLE-201606270534  39K  26.9G  612M  /
freenas-boot/ROOT/9.10.1  200K  26.9G  635M  /
freenas-boot/ROOT/9.10.1-U1  1.91G  26.9G  637M  /
freenas-boot/ROOT/Initial-Install  1K  26.9G  480M  legacy
freenas-boot/ROOT/Pre-9.10-STABLE-201605021851-115613  1K  26.9G  482M  legacy
freenas-boot/ROOT/default  36K  26.9G  489M  legacy
freenas-boot/grub  31.6M  26.9G  6.34M  legacy
stuff  9.41T  2.85T  608G  /mnt/stuff
stuff/.system  19.3M  2.85T  208K  legacy
stuff/.system/configs-d8bf7623f2464a4f944b29c3b2f43a27  15.0M  2.85T  15.0M  legacy
stuff/.system/cores  1.22M  2.85T  1.22M  legacy
stuff/.system/rrd-d8bf7623f2464a4f944b29c3b2f43a27  192K  2.85T  192K  legacy
stuff/.system/samba4  815K  2.85T  815K  legacy
stuff/.system/syslog-d8bf7623f2464a4f944b29c3b2f43a27  1.81M  2.85T  1.81M  legacy
stuff/auto-20160925.1000-4w-clone  11.5M  2.85T  9.00T  /mnt/stuff/auto-20160925.1000-4w-clone
stuff/jails  18.2G  2.85T  400K  /mnt/stuff/jails
stuff/jails/.warden-template-VirtualBox-4.3.12  813M  2.85T  813M  /mnt/stuff/jails/.warden-template-VirtualBox-4.3.12
stuff/jails/.warden-template-pluginjail  598M  2.85T  585M  /mnt/stuff/jails/.warden-template-pluginjail
stuff/jails/.warden-template-standard  2.75G  2.85T  2.63G  /mnt/stuff/jails/.warden-template-standard
stuff/jails/Virtualbox  93.5M  2.85T  875M  /mnt/stuff/jails/Virtualbox
stuff/jails/deluge  3.72G  2.85T  5.07G  /mnt/stuff/jails/deluge
stuff/jails/emby_1  1.85G  2.85T  2.41G  /mnt/stuff/jails/emby_1
stuff/jails/owncloud_1  1.03G  2.85T  1.59G  /mnt/stuff/jails/owncloud_1
stuff/jails/plexmediaserver_1  1.77G  2.85T  2.30G  /mnt/stuff/jails/plexmediaserver_1
stuff/jails/plexmediaserver_2  5.25G  2.85T  5.78G  /mnt/stuff/jails/plexmediaserver_2
stuff/jails/syncthing_1  277M  2.85T  821M  /mnt/stuff/jails/syncthing_1
stuff/jails/transmission_1  115M  2.85T  687M  /mnt/stuff/jails/transmission_1
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
So the "stuff" dataset has a mountpoint of /mnt/stuff, which looks correct. What is the output of ls /mnt/stuff? And of zfs mount?
 

risho

Dabbler
Joined
May 21, 2016
Messages
18
Code:
ls
./  ../  auto-20160925.1000-4w-clone/ jails/  mounts/


Ah, sorry, going back and reading my original post now, I misspoke a bit. It's not totally empty: the top-level directories are still there, and some sub-level directories as well, though most of the data inside them is missing. I know the distinction between "all lost" and "most of it lost" is important; I should have written it better...

Also, all of my jails (aside from VirtualBox) still function correctly and do not appear to be altered in any way.

Code:
freenas-boot/ROOT/9.10.1-U1  /
freenas-boot/grub  /boot/grub
stuff  /mnt/stuff
stuff/auto-20160925.1000-4w-clone  /mnt/stuff/auto-20160925.1000-4w-clone
stuff/jails  /mnt/stuff/jails
stuff/jails/.warden-template-VirtualBox-4.3.12  /mnt/stuff/jails/.warden-template-VirtualBox-4.3.12
stuff/jails/.warden-template-pluginjail  /mnt/stuff/jails/.warden-template-pluginjail
stuff/jails/.warden-template-standard  /mnt/stuff/jails/.warden-template-standard
stuff/jails/Virtualbox  /mnt/stuff/jails/Virtualbox
stuff/jails/deluge  /mnt/stuff/jails/deluge
stuff/jails/emby_1  /mnt/stuff/jails/emby_1
stuff/jails/owncloud_1  /mnt/stuff/jails/owncloud_1
stuff/jails/plexmediaserver_1  /mnt/stuff/jails/plexmediaserver_1
stuff/jails/plexmediaserver_2  /mnt/stuff/jails/plexmediaserver_2
stuff/jails/syncthing_1  /mnt/stuff/jails/syncthing_1
stuff/jails/transmission_1  /mnt/stuff/jails/transmission_1
stuff/.system  /var/db/system
stuff/.system/cores  /var/db/system/cores
stuff/.system/samba4  /var/db/system/samba4
stuff/.system/syslog-d8bf7623f2464a4f944b29c3b2f43a27  /var/db/system/syslog-d8bf7623f2464a4f944b29c3b2f43a27
stuff/.system/rrd-d8bf7623f2464a4f944b29c3b2f43a27  /var/db/system/rrd-d8bf7623f2464a4f944b29c3b2f43a27
stuff/.system/configs-d8bf7623f2464a4f944b29c3b2f43a27  /var/db/system/configs-d8bf7623f2464a4f944b29c3b2f43a27


I still don't know what caused the corruption, though. I don't recall running any commands that would have broken anything. It's certainly possible that in my attempt at troubleshooting VirtualBox I messed something up, though I don't recall anything, and I think I would have noticed. Either way, I think that if I were to just remount the clone as "stuff" and make it autosnapshot that, it would fix everything.

Code:
/mnt/stuff/auto-20160925.1000-4w-clone# ls
./  ../  jails/  mounts/


since that has everything seemingly the way it should be.
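If the goal is to keep the clone and make it the "real" dataset, the usual ZFS move is `zfs promote`, which reverses the clone/origin dependency so the clone no longer depends on its source snapshot. A hedged sketch (the dataset name is taken from the zfs list output above; the /mnt/stuff2 mountpoint is purely hypothetical):

```shell
# Make the clone independent of the snapshot it was created from.
zfs promote stuff/auto-20160925.1000-4w-clone

# Optionally remount it at a different path (example path only).
zfs set mountpoint=/mnt/stuff2 stuff/auto-20160925.1000-4w-clone
```

One caveat: the damaged dataset here is the pool's root dataset ("stuff" itself), and a promoted clone can't simply be renamed over the root dataset, which is why rolling the root dataset back to the snapshot may end up being the simpler fix.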
 

risho

Dabbler
Joined
May 21, 2016
Messages
18
I could just move all of the files from /mnt/stuff/clone_of_stuff to /mnt/stuff and then delete the clone, but that just sounds like asking for trouble, especially since I am so unfamiliar with how ZFS works. I don't want to cause myself any more problems than I already have. Is there any reason not to do this?

edit: upon reflection, it seems to me that moving large amounts of data around like that would probably immediately fill up my disk, since the snapshots would keep referencing the data in its old location.

edit2: So I deleted the clone and rolled back to a previous snapshot (this was the thing I couldn't figure out how to do; I did it in the CLI, since I couldn't seem to find a way to do it in the web UI) after reading about it on the Oracle docs page. I booted into a previous boot environment, and hopefully now everything will be working correctly -fingers crossed-.
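For anyone landing on this thread later, the CLI steps described in edit2 would look roughly like the following. This is a sketch, not the exact commands run: the snapshot name is inferred from the clone's name earlier in the thread, and `zfs rollback -r` destroys any snapshots newer than the target, so check `zfs list -t snapshot` before running anything.

```shell
# The clone must go first: a snapshot cannot be rolled back to
# while a clone still depends on it.
zfs destroy stuff/auto-20160925.1000-4w-clone

# Revert the dataset to the state captured by the snapshot.
# (Snapshot name is an assumption; confirm with `zfs list -t snapshot`.)
zfs rollback -r stuff@auto-20160925.1000-4w
```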
 