Why is my jail 91 GB?

Status
Not open for further replies.

Cytomax

Explorer
Joined
Nov 29, 2015
Messages
67
I am running an Emby server in a jail.
I noticed that under Volume → Dataset → Jails → emby_1,
emby_1 was taking up 90 GB

and

my backup of emby_1 is up to 450 GB because of two weeks' worth of snapshots.

I tried browsing the emby_1 folder by adding it as an SMB share, but I didn't see anything that would add up to 91 GB.

I tried du on the command line and it shows 1.6 TB, but that is because of the storage with the media attached to the jail.

I tried
du -hs * | sort -h

and all it shows is roughly 1.6 TB of media, which is expected because that is the storage added to the jail; the rest of the files together add up to maybe 5 GB.

Why does it show emby_1 as taking up 90 GB?
 

Cytomax

Explorer
Joined
Nov 29, 2015
Messages
67
Sorry for getting back to you so late; life has been crazy busy. Here is the output:

Code:
% zfs list
NAME													  USED  AVAIL  REFER  MOUNTPOINT
BACKUPVOLUME											 2.58T  2.69T	88K  /mnt/BACKUPVOLUME
BACKUPVOLUME/jails										457G  2.69T   116K  /mnt/BACKUPVOLUME/jails
BACKUPVOLUME/jails/.warden-template-pluginjail-11.0-x64   539M  2.69T   539M  /mnt/BACKUPVOLUME/jails/.warden-template-pluginjail-11.0-x64
BACKUPVOLUME/jails/.warden-template-standard-11.0-x64	1.78G  2.69T  1.78G  /mnt/BACKUPVOLUME/jails/.warden-template-standard-11.0-x64
BACKUPVOLUME/jails/emby_1								 455G  2.69T  5.90G  /mnt/BACKUPVOLUME/jails/emby_1
BACKUPVOLUME/unixset									 2.99G  2.69T  2.99G  /mnt/BACKUPVOLUME/unixset
BACKUPVOLUME/windowsset								  2.13T  2.69T  2.12T  /mnt/BACKUPVOLUME/windowsset
freenas-boot											 4.70G  9.35G	64K  none
freenas-boot/ROOT										4.67G  9.35G	29K  none
freenas-boot/ROOT/11.0-U4								 148K  9.35G   727M  /
freenas-boot/ROOT/11.1-RELEASE							278K  9.35G   825M  /
freenas-boot/ROOT/11.1-U1								 297K  9.35G   825M  /
freenas-boot/ROOT/11.1-U2								 377K  9.35G   833M  /
freenas-boot/ROOT/11.1-U4								4.67G  9.35G   836M  /
freenas-boot/ROOT/Initial-Install						   1K  9.35G   736M  legacy
freenas-boot/ROOT/default								 138K  9.35G   736M  legacy
freenas-boot/grub										6.85M  9.35G  6.85M  legacy
volume												   2.22T  3.04T	96K  /mnt/volume
volume/.system										   1.20G  3.04T   604K  legacy
volume/.system/configs-5ece5c906a8f4df886779fae5cade8a5  50.6M  3.04T  50.6M  legacy
volume/.system/configs-a7c4a4a3d45a4720a1e6b8ad799731fd  26.2M  3.04T  25.3M  legacy
volume/.system/cores									 2.04M  3.04T  1.51M  legacy
volume/.system/rrd-5ece5c906a8f4df886779fae5cade8a5		96K  3.04T	96K  legacy
volume/.system/rrd-a7c4a4a3d45a4720a1e6b8ad799731fd	   673M  3.04T  20.2M  legacy
volume/.system/samba4									6.26M  3.04T   704K  legacy
volume/.system/syslog-5ece5c906a8f4df886779fae5cade8a5   11.5M  3.04T  11.5M  legacy
volume/.system/syslog-a7c4a4a3d45a4720a1e6b8ad799731fd   16.7M  3.04T  5.57M  legacy
volume/jails											 85.8G  3.04T   116K  /mnt/volume/jails
volume/jails/.warden-template-pluginjail-11.0-x64		 539M  3.04T   539M  /mnt/volume/jails/.warden-template-pluginjail-11.0-x64
volume/jails/.warden-template-standard-11.0-x64		  1.78G  3.04T  1.78G  /mnt/volume/jails/.warden-template-standard-11.0-x64
volume/jails/emby_1									  83.5G  3.04T  5.90G  /mnt/volume/jails/emby_1
volume/unixset										   2.99G  3.04T  2.99G  /mnt/volume/unixset
volume/windowsset										2.14T  3.04T  2.13T  /mnt/volume/windowsset



I don't know why emby_1 is taking 83 GB... all my media is stored elsewhere and it is in the terabytes.
I am going to delete all my snapshots and see what happens.

Thanks in advance
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Snapshots and metadata. That's what's taking all your space. My Plex jail is 61.7 GB and I don't have any snapshots.
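You can confirm that with `zfs list -o space`, which breaks the USED column into components; USEDSNAP is the space held only by snapshots. A minimal sketch (the numbers below are hypothetical, patterned on the output earlier in the thread):

```shell
# Hypothetical `zfs list -o space` row for the jail dataset; on the real
# box you would run:  zfs list -o space -r volume/jails
# USEDSNAP = space that only snapshots are still pinning.
zfs_space='NAME                 AVAIL  USED   USEDSNAP  USEDDS
volume/jails/emby_1  3.04T  83.5G  77.6G     5.90G'

# Pull out the USEDSNAP column for the dataset.
echo "$zfs_space" | awk '$1 == "volume/jails/emby_1" {print $4}'
# → 77.6G
```

If USEDSNAP accounts for most of USED, deleting old snapshots is what gets the space back.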
 

Cytomax

Explorer
Joined
Nov 29, 2015
Messages
67
VICTORY!
I just deleted all my snapshots... it looks like that was the problem.


Code:
% zfs list
NAME													  USED  AVAIL  REFER  MOUNTPOINT
BACKUPVOLUME											 2.14T  3.13T	88K  /mnt/BACKUPVOLUME
BACKUPVOLUME/jails									   8.17G  3.13T   116K  /mnt/BACKUPVOLUME/jails
BACKUPVOLUME/jails/.warden-template-pluginjail-11.0-x64   539M  3.13T   539M  /mnt/BACKUPVOLUME/jails/.warden-template-pluginjail-11.0-x64
BACKUPVOLUME/jails/.warden-template-standard-11.0-x64	1.78G  3.13T  1.78G  /mnt/BACKUPVOLUME/jails/.warden-template-standard-11.0-x64
BACKUPVOLUME/jails/emby_1								5.86G  3.13T  5.86G  /mnt/BACKUPVOLUME/jails/emby_1
BACKUPVOLUME/unixset									 2.99G  3.13T  2.99G  /mnt/BACKUPVOLUME/unixset
BACKUPVOLUME/windowsset								  2.12T  3.13T  2.12T  /mnt/BACKUPVOLUME/windowsset
freenas-boot											 4.70G  9.35G	64K  none
freenas-boot/ROOT										4.67G  9.35G	29K  none
freenas-boot/ROOT/11.0-U4								 148K  9.35G   727M  /
freenas-boot/ROOT/11.1-RELEASE							278K  9.35G   825M  /
freenas-boot/ROOT/11.1-U1								 297K  9.35G   825M  /
freenas-boot/ROOT/11.1-U2								 377K  9.35G   833M  /
freenas-boot/ROOT/11.1-U4								4.67G  9.35G   836M  /
freenas-boot/ROOT/Initial-Install						   1K  9.35G   736M  legacy
freenas-boot/ROOT/default								 138K  9.35G   736M  legacy
freenas-boot/grub										6.85M  9.35G  6.85M  legacy
volume												   2.14T  3.13T	96K  /mnt/volume
volume/.system										   1.20G  3.13T   604K  legacy
volume/.system/configs-5ece5c906a8f4df886779fae5cade8a5  50.6M  3.13T  50.6M  legacy
volume/.system/configs-a7c4a4a3d45a4720a1e6b8ad799731fd  26.2M  3.13T  25.3M  legacy
volume/.system/cores									 2.45M  3.13T  1.40M  legacy
volume/.system/rrd-5ece5c906a8f4df886779fae5cade8a5		96K  3.13T	96K  legacy
volume/.system/rrd-a7c4a4a3d45a4720a1e6b8ad799731fd	   674M  3.13T  20.6M  legacy
volume/.system/samba4									6.27M  3.13T   672K  legacy
volume/.system/syslog-5ece5c906a8f4df886779fae5cade8a5   11.5M  3.13T  11.5M  legacy
volume/.system/syslog-a7c4a4a3d45a4720a1e6b8ad799731fd   17.1M  3.13T  5.83M  legacy
volume/jails											 7.65G  3.13T   116K  /mnt/volume/jails
volume/jails/.warden-template-pluginjail-11.0-x64		 539M  3.13T   539M  /mnt/volume/jails/.warden-template-pluginjail-11.0-x64
volume/jails/.warden-template-standard-11.0-x64		  1.78G  3.13T  1.78G  /mnt/volume/jails/.warden-template-standard-11.0-x64
volume/jails/emby_1									  5.34G  3.13T  5.86G  /mnt/volume/jails/emby_1
volume/unixset										   2.99G  3.13T  2.99G  /mnt/volume/unixset
volume/windowsset										2.13T  3.13T  2.13T  /mnt/volume/windowsset

 

toliman

Cadet
Joined
Feb 9, 2014
Messages
7
Yeah, I found that one out the hard way myself. Plex swelled to 382 GB after a month; I hadn't noticed that an automatic snapshot task had been misconfigured and was creating a snapshot every 2 hours instead of every 2nd day.

There ended up being hundreds of snapshots in a week. If it recurs, and you want to keep the snapshots auto-generating but occasionally prune them back when you want space, SSH into the machine and use

Code:
sudo zfs list -H -t snapshot -o name | grep FOLDERNAME_GOES_HERE | grep auto | sed 's| |\\ |g' | sudo xargs -n 1 zfs destroy -vr

Essentially it:
grabs the list of snapshot names from ZFS (-H drops the header line) |
filters to those matching the folder name |
filters to only those with "auto" in the name |
escapes any spaces in the names so xargs treats each name as one argument |
sudo xargs deletes the snapshots one at a time and prints any problems. You can also test the delete by changing -vr to -npvr, which does a dry run and prints more detailed information.
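If you want to see exactly what the pipeline above would remove before running it for real, here is one way to rehearse it: a sketch with made-up snapshot names, and `echo` substituted for the actual `zfs destroy`:

```shell
# Simulated output of `zfs list -H -t snapshot -o name`
# (hypothetical snapshot names patterned on FreeNAS auto-snapshots).
snapshots='volume/jails/emby_1@auto-20180301.0000-2w
volume/jails/emby_1@manual-keep
volume/windowsset@auto-20180301.0000-2w'

# Same filter chain, but echoing the destroy commands instead of
# running them, so you can review the list first.
echo "$snapshots" | grep 'emby_1' | grep auto \
    | xargs -n 1 echo zfs destroy -vr
# → zfs destroy -vr volume/jails/emby_1@auto-20180301.0000-2w
```

Note that the manual snapshot and the other dataset's snapshot both survive the filters.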

There are also a few other cleanup tasks you can script, such as deleting everything up to today, e.g.
Code:
#!/bin/bash
d=$(date "+%Y%m%d")
sudo zfs list -H -t snapshot -o name | grep '^tank' | grep auto | grep -v "$d" | sed 's| |\\ |g' | sudo xargs -n 1 zfs destroy -dvr

Similar process as before:
grep '^tank' filters for lines starting with tank/...
and grep -v inverts the match, so it skips anything carrying today's date.
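The date filter can be rehearsed the same way. In this sketch the snapshot names are made up, a fixed string stands in for `$(date "+%Y%m%d")`, and `echo` replaces the real destroy:

```shell
# Simulated snapshot list (hypothetical names).
snapshots='tank/jails/emby_1@auto-20180310.0200-2w
tank/jails/emby_1@auto-20180311.0200-2w'

d='20180311'   # stand-in for today's $(date "+%Y%m%d")

# grep -v "$d" drops anything stamped with today's date; echo makes
# this a dry run.
echo "$snapshots" | grep '^tank' | grep auto | grep -v "$d" \
    | xargs -n 1 echo zfs destroy -dvr
# → zfs destroy -dvr tank/jails/emby_1@auto-20180310.0200-2w
```

Only the older snapshot makes it through; today's is kept.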

The way snapshots work, this won't clear all of the space, but it will purge a lot of iterations and keep just today's. If you want to test, remove the sudo xargs stage until you have a list of snapshots you're happy with, and then do the dry run (-n) before deleting. Be careful with the stronger flags: -d defers destruction until any holds or clones on the snapshot are released instead of failing, -f forces unmounting, and -R recursively destroys all dependents, including clones outside the target dataset.

-v and -r are the easier options for 'cleaning house' every so often.
 