Lost track of 2TiB... Snapshot data usage?


Keven

Contributor
Joined
Aug 10, 2016
Messages
114
Hi,


After noticing that one of my shares was using 6.1 TiB on my server, I checked what was using all that space. I right-clicked on the root folder to check how much space it uses: 4.49 TiB.

I am taking snapshots of that share, so maybe it's that. I looked at all the snapshots from that dataset, and when I add up the used space of those snapshots I get 28 GiB.

So 4.49 TiB + 28 GiB ≈ 4.52 TiB, which is nowhere near 6.1 TiB.

Am I calculating the space used by the snapshots wrong?

Here is how the dataset/share are set up:

Dataset "Bibliotheque" = 6.1 TiB (60%)
SMB share of "Bibliotheque" is called "Corrussante"

[Attachment: freenas dataset.png]
[Attachment: freenas smb share.png]
[Attachment: freenas Corrussante.png]

Snapshots are filtered for "bibliotheque" so irrelevant snapshots from other datasets won't show up:
[Attachment: freenas Snapshot Filter by Dataset.png]
 

Keven

Contributor
Joined
Aug 10, 2016
Messages
114
Nobody has an idea??
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Post the output of zfs list -r -o name,used,usedbysnapshots,usedbydataset,usedbychildren Vol1, in [CODE][/CODE] tags, please.
 

Keven

Contributor
Joined
Aug 10, 2016
Messages
114
Looks like it was the snapshots, @Ericloewe. But if I want to free up some space, how do I find the proper snapshots to delete? I am still only seeing the two 12-15 GiB snapshots, and not the hundreds-of-GiB snapshots that, I guess, I'm supposed to see. Otherwise, how would it add up to 1.57 TiB used by snapshots?
Code:
[root@freenas ~]# zfs list -r -o name,used,usedbysnapshots,usedbydataset,usedbychildren Vol1
NAME                                                    USED  USEDSNAP  USEDDS  USEDCHILD
Vol1                                                   6.31T         0    151K      6.31T
Vol1/.system                                           48.9M         0   1.88M      47.0M
Vol1/.system/configs-7f4d67ae16c94917b949456bb9f364ad  39.4M         0   39.4M          0
Vol1/.system/cores                                     1.57M         0   1.57M          0
Vol1/.system/rrd-7f4d67ae16c94917b949456bb9f364ad       140K         0    140K          0
Vol1/.system/samba4                                     622K         0    622K          0
Vol1/.system/syslog-7f4d67ae16c94917b949456bb9f364ad   5.32M         0   5.32M          0
Vol1/Backup                                             418K         0    140K       279K
Vol1/Backup/Keven                                       140K         0    140K          0
Vol1/Backup/VBL-PC                                      140K         0    140K          0
Vol1/Bibliotheque                                      6.06T     1.57T   4.49T          0
Vol1/Jail                                              10.3G      349K    215K      10.3G
Vol1/Jail/.warden-template-pluginjail                   518M      128K    518M          0
Vol1/Jail/.warden-template-standard                    2.04G      116K   2.04G          0
Vol1/Jail/crashplan_1                                   406M         0    406M          0
Vol1/Jail/plexmediaserver_1                            7.27G         0   7.27G          0
Vol1/Jail/transmission_1                                121M         0    121M          0
Vol1/Test                                               429M         0    429M          0
Vol1/VBL                                                238G     72.1M    237G          0
[root@freenas ~]#
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You can list the snapshots by adding the option -t snapshot to the command, after the -r.
You might also want to narrow it down to Vol1/Bibliotheque, since that's what you're interested in.
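In other words, something like this:
Code:
zfs list -r -t snapshot -o name,used,usedbysnapshots,usedbydataset,usedbychildren Vol1/Bibliotheque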
 

Keven

Contributor
Joined
Aug 10, 2016
Messages
114
Still no trace of the 1.57 TiB, @Ericloewe.
Code:
[root@freenas ~]# zfs list -r -o name,used,usedbysnapshots,usedbydataset,usedbychildren Vol1/Bibliotheque
NAME               USED  USEDSNAP  USEDDS  USEDCHILD
Vol1/Bibliotheque  6.06T    1.57T   4.49T          0

[root@freenas ~]# zfs list -r -t snapshot -o name,used,usedbysnapshots,usedbydataset,usedbychildren Vol1/Bibliotheque
NAME                                        USED  USEDSNAP  USEDDS  USEDCHILD
Vol1/Bibliotheque@auto-20160912.1738-100y   116K         -       -          -
Vol1/Bibliotheque@auto-20170102.1743-100y  12.4G         -       -          -
Vol1/Bibliotheque@auto-20170130.1743-100y  2.58M         -       -          -
Vol1/Bibliotheque@auto-20170227.1743-100y  15.6G         -       -          -
Vol1/Bibliotheque@auto-20170327.1743-100y  2.27M         -       -          -
Vol1/Bibliotheque@auto-20170424.1743-100y  2.25M         -       -          -
Vol1/Bibliotheque@auto-20170522.1743-100y  2.12M         -       -          -
Vol1/Bibliotheque@auto-20170619.1743-100y  10.3M         -       -          -
Vol1/Bibliotheque@auto-20170717.1743-100y  10.4M         -       -          -
Vol1/Bibliotheque@auto-20170814.1743-100y  10.6M         -       -          -
Vol1/Bibliotheque@auto-20170911.1743-100y  10.7M         -       -          -
Vol1/Bibliotheque@auto-20171009.1743-100y  10.6M         -       -          -
Vol1/Bibliotheque@auto-20171023.1755-8w    14.8M         -       -          -
Vol1/Bibliotheque@auto-20171030.1755-8w    2.25M         -       -          -
Vol1/Bibliotheque@auto-20171106.1743-100y      0         -       -          -
Vol1/Bibliotheque@auto-20171106.1755-8w        0         -       -          -
Vol1/Bibliotheque@auto-20171113.1755-8w    7.38M         -       -          -
Vol1/Bibliotheque@auto-20171120.1755-8w    3.16M         -       -          -
Vol1/Bibliotheque@auto-20171127.1755-8w    3.14M         -       -          -
Vol1/Bibliotheque@auto-20171204.1743-100y      0         -       -          -
Vol1/Bibliotheque@auto-20171204.1755-8w        0         -       -          -
Vol1/Bibliotheque@auto-20171210.1757-1w     988K         -       -          -
Vol1/Bibliotheque@auto-20171211.1755-8w        0         -       -          -
Vol1/Bibliotheque@auto-20171211.1757-1w        0         -       -          -
Vol1/Bibliotheque@auto-20171212.1757-1w     633M         -       -          -
Vol1/Bibliotheque@auto-20171213.1757-1w     140K         -       -          -
Vol1/Bibliotheque@auto-20171214.1757-1w     151K         -       -          -
Vol1/Bibliotheque@auto-20171215.1757-1w     163K         -       -          -
Vol1/Bibliotheque@auto-20171216.1757-1w     639K         -       -          -
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Are you sure more of that output isn't missing? The USED values in that list only add up to about 29 GiB, nowhere near 1.57 TiB.
 

Keven

Contributor
Joined
Aug 10, 2016
Messages
114

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
What does zpool status say? I don't like the look of this at all.
 

Keven

Contributor
Joined
Aug 10, 2016
Messages
114
Code:
[root@freenas ~]# zpool status
  pool: Vol1
 state: ONLINE
  scan: scrub repaired 0 in 7h17m with 0 errors on Sun Dec  3 07:17:47 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        Vol1                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/66423eb3-793b-11e6-8ded-0cc47a855da0  ONLINE       0     0     0
            gptid/67032817-793b-11e6-8ded-0cc47a855da0  ONLINE       0     0     0
            gptid/67bace11-793b-11e6-8ded-0cc47a855da0  ONLINE       0     0     0
            gptid/687b73a5-793b-11e6-8ded-0cc47a855da0  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h12m with 0 errors on Thu Nov 16 03:57:41 2017
config:

        NAME                                          STATE     READ WRITE CKSUM
        freenas-boot                                  ONLINE       0     0     0
          gptid/0f80fb1e-78f3-11e6-9309-0cc47a855da0  ONLINE       0     0     0

errors: No known data errors
[root@freenas ~]#
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
No errors, huh... Well, I'd say file a bug report and post the issue number here.
 

Keven

Contributor
Joined
Aug 10, 2016
Messages
114
Joined
Jul 3, 2015
Messages
926
I've seen a similar thing a few months ago. In the end it transpired to be a snapshot (or two) taking up the space, but neither the UI nor the CLI explained that very well. After I released snapshots one at a time, starting with the oldest, I suddenly reclaimed the space. The 'USED' field in the UI is very misleading and, in my experience, doesn't mean what you think it means. The 'REFER' column is much more helpful: in my case the dataset was about 5 TiB in size, but some snapshots referred to 6 TiB or even more, which meant they were holding onto deleted data. Interestingly, my output of zfs list -o did show that I was using a lot of space in snapshots, but it didn't show me which snapshots were holding the biggest amount of data.
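For example, something along these lines should list every snapshot with its REFER value, biggest first, so the ones pinning deleted data stand out (-S sorts descending by the given property):
Code:
zfs list -r -t snapshot -o name,used,referenced -S referenced Vol1/Bibliotheque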
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
After I released snapshots one at a time, starting with the oldest, I suddenly reclaimed the space. The 'USED' field in the UI is very misleading and, in my experience, doesn't mean what you think it means. The 'REFER' column is much more helpful: in my case the dataset was about 5 TiB in size, but some snapshots referred to 6 TiB or even more, which meant they were holding onto deleted data.
Right, USED just tells you how much space is unique to that snapshot. If two snapshots share the same 1 TB of data, it won't be reported, even if that data isn't part of the current state of the dataset. You can use 'zfs diff' to search for large file deletions, or just start deleting snapshots.
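For example (the snapshot name here is just the oldest one from your listing, picked for illustration):
Code:
# lines starting with '-' are files that were deleted since that snapshot
zfs diff Vol1/Bibliotheque@auto-20160912.1738-100y Vol1/Bibliotheque

# dry-run the destroy first: -n makes no changes, -v prints how much
# space would actually be reclaimed
zfs destroy -nv Vol1/Bibliotheque@auto-20160912.1738-100y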
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You're right, I forgot about that detail. The REFER column should give an overview of where the stuff ended up: if the dataset itself refers to 4.49 TiB but an old snapshot refers to well over that, the difference is deleted data that snapshot is still pinning.
 