Snapshots and used space

encbox

Dabbler
Joined
Mar 27, 2017
Messages
25
I needed to free up some space, because the volume had hit 80%. So I deleted a lot of large files which I didn't need any more, like large downloads etc. Afterwards I noticed that little to no space had been freed by this.
So I thought about snapshots. Listing the snapshots in the GUI didn't show any large ones, maybe ten at about 5G each, but I had deleted something like 400G of files.
That felt a little strange, so I turned to the shell. It looked like the same result as in the GUI (output shortened to the relevant snapshot):

Code:
zfs list -ro space -t all
NAME                                   AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
(...)
daten/austausch@auto-20180512.1700-2m      -     0         -       -              -          -
(...)

So this snapshot doesn't use any space, right? I decided to destroy some snapshots:

Code:
zfs list -t snapshot -o name | grep ^daten/austausch@auto | tail -n +16 | xargs -n 1 zfs destroy -vr
(...)
will destroy daten/austausch@auto-20180512.1700-2m
will reclaim 408G
(...)

How can this snapshot silently use 408G? Obviously I don't understand snapshots. Could anyone explain that to me?
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
The space used by snapshots is often confusing at first sight. The written ZFS property is useful for getting more insight. This property is not displayed when using the -o space option of the zfs command; it tracks how much data was written to the dataset between the previous snapshot and this one.
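If you only care about one or two snapshots, the property can also be queried directly with zfs get. A minimal sketch, using names from the listing below:
Code:
# space written between the previous snapshot and this one
zfs get written volume0/cbackup@auto-20180505.1200-2w
# space written to the dataset since the named snapshot
zfs get written@auto-20180505.1200-2w volume0/cbackup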

In the following I'm showing a fuller example.
Code:
~ # zfs list -r -o space,refer,written -t all volume0/cbackup | head -20
NAME                                    AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  REFER  WRITTEN
volume0/cbackup                         1.27T  537G      132G    405G              0          0   405G    7.12G
volume0/cbackup@auto-20180505.1200-2w       -  512K         -       -              -          -   389G     389G
volume0/cbackup@auto-20180505.1800-2w       -  512K         -       -              -          -   389G     512K
volume0/cbackup@auto-20180506.0600-2w       -  504K         -       -              -          -   389G     520K
volume0/cbackup@auto-20180506.1200-2w       -     0         -       -              -          -   387G    11.1G
volume0/cbackup@auto-20180506.1800-2w       -     0         -       -              -          -   387G        0
volume0/cbackup@auto-20180507.0600-2w       -     0         -       -              -          -   387G        0
volume0/cbackup@auto-20180507.1200-2w       -     0         -       -              -          -   388G    6.21G
volume0/cbackup@auto-20180507.1800-2w       -     0         -       -              -          -   388G        0
volume0/cbackup@auto-20180508.0600-2w       -     0         -       -              -          -   388G        0
volume0/cbackup@auto-20180508.1200-2w       -     0         -       -              -          -   384G    15.2G
volume0/cbackup@auto-20180508.1800-2w       -     0         -       -              -          -   384G        0
volume0/cbackup@auto-20180509.0600-2w       -     0         -       -              -          -   384G        0
volume0/cbackup@auto-20180509.1200-2w       -     0         -       -              -          -   388G    13.6G
volume0/cbackup@auto-20180509.1800-2w       -     0         -       -              -          -   388G        0
volume0/cbackup@auto-20180510.0600-2w       -     0         -       -              -          -   388G        0
volume0/cbackup@auto-20180510.1200-2w       -     0         -       -              -          -   387G    12.3G
volume0/cbackup@auto-20180510.1800-2w       -     0         -       -              -          -   387G        0
volume0/cbackup@auto-20180511.0600-2w       -     0         -       -              -          -   387G        0

In this example three snapshots are created per day, while on most days data is written to the dataset in question only once, usually something on the order of 10G.
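For anyone who wants to reproduce this view on their own pool, a listing restricted to the interesting properties and sorted by creation time should do; substitute your own dataset for volume0/cbackup:
Code:
# list snapshots with their unique space (USED) and per-interval writes (WRITTEN)
zfs list -r -t snapshot -o name,used,written -s creation volume0/cbackup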

When trying to find out how much space would be reclaimed by deleting the oldest three snapshots, three individual zfs destroy -n -v calls just show the values of the USED property of these snapshots (which is not what I wanted to know).
Code:
~ # zfs destroy -n -v volume0/cbackup@auto-20180505.1200-2w
would destroy volume0/cbackup@auto-20180505.1200-2w
would reclaim 512K
~ # zfs destroy -n -v volume0/cbackup@auto-20180505.1800-2w
would destroy volume0/cbackup@auto-20180505.1800-2w
would reclaim 512K
~ # zfs destroy -n -v volume0/cbackup@auto-20180506.0600-2w
would destroy volume0/cbackup@auto-20180506.0600-2w
would reclaim 504K

If, on the contrary, I use a single zfs destroy -n -v call with these three snapshots as a range, I get the expected information about the space that would be reclaimed. The value shown matches my expectation because it falls within the range of data typically written to that dataset per day.
Code:
~ # zfs destroy -n -v volume0/cbackup@auto-20180505.1200-2w%auto-20180506.0600-2w
would destroy volume0/cbackup@auto-20180505.1200-2w
would destroy volume0/cbackup@auto-20180505.1800-2w
would destroy volume0/cbackup@auto-20180506.0600-2w
would reclaim 13.2G

Note that the stated amount of data that would be reclaimed by deleting these three snapshots is neither the sum of the values in the USED column nor the sum of the values in the WRITTEN column of the zfs list command shown above. The reason is that a snapshot's USED only counts blocks referenced by that snapshot alone; blocks shared between neighbouring snapshots are not charged to any single one of them and only become reclaimable when all snapshots referencing them are destroyed together. So yes, space used by snapshots can be a confusing issue.
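For anyone who wants to see the shared-blocks effect in isolation, here is a small experiment sketch on a scratch dataset (all names are made up; FreeNAS mounts pools under /mnt, adjust as needed):
Code:
# create a scratch dataset and a 1G test file
zfs create volume0/snaptest
dd if=/dev/urandom of=/mnt/volume0/snaptest/blob bs=1m count=1024
zfs snapshot volume0/snaptest@a
zfs snapshot volume0/snaptest@b
rm /mnt/volume0/snaptest/blob           # now only @a and @b reference the 1G file
zfs list -r -t snapshot -o name,used volume0/snaptest
# USED of @a and @b stays near zero, because the 1G is shared between them
zfs destroy -n -v volume0/snaptest@a%b  # dry run over the whole range
# would reclaim ~1G: shared blocks are freed only when both snapshots go
zfs destroy -r volume0/snaptest         # clean up the experiment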
 

encbox

Dabbler
Joined
Mar 27, 2017
Messages
25
Thank you for this detailed explanation, which made me a little bit less confused. ;) So essentially that means that if I use the GUI and look at the column "used" under the tab "snapshots", it in no way tells me how much space would be reclaimed by destroying the snapshot?
So I conclude that, practically, the "keep snapshot" value under "periodic snapshot task" is best set to a relatively short period, and that otherwise snapshots should be left alone.
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
So essentially that means that if I use the GUI and look at the column "used" under the tab "snapshots", it in no way tells me how much space would be reclaimed by destroying the snapshot?

My take would be that there is no way other than issuing a properly formed zfs destroy -n -v command at the console to see how much space would be reclaimed by destroying a particular snapshot (or range of snapshots).
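Applied to the pool from the first post, that could look like the following sketch; the % range syntax with an empty name on the left means "starting at the oldest snapshot" (the end snapshot is the one named in the first post):
Code:
# dry run only: -n -v prints what would be reclaimed, nothing is destroyed
zfs destroy -n -v daten/austausch@%auto-20180512.1700-2m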

So I conclude that, practically, the "keep snapshot" value under "periodic snapshot task" is best set to a relatively short period, and that otherwise snapshots should be left alone.

Ideally, the capacity of a pool should be chosen so that it serves the snapshot lifetime needs without hassle.
 