
TrueNAS: aapltm snapshots pile up


Borja Marcos

Member
Joined
Nov 24, 2014
Messages
110
Hello,

I have a server updated from FreeNAS 11.3 to TrueNAS 12.0.

I set up a Time Machine multiuser share (yes, one dataset per user) and I see aapltm snapshots being created but none of them is being deleted. Some of them have been around for almost a month and some users have more than 80 snapshots.

I don't see any log message mentioning problems with them. And the TM share has no additional auxiliary parameters. I just used the TM preset.

Any ideas?
 

Patrick M. Hausen

Dedicated Sage
Joined
Nov 25, 2013
Messages
3,008
Isn't the Time Machine policy to only start deleting old data when space gets tight? Did you set a per user/share quota? Are some users reaching that already?
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
4,289
Isn't the Time Machine policy to only start deleting old data when space gets tight?
If you read the GUI, it says hourly snapshots kept for 24 hours, daily for a month, weekly for all previous months.

And indeed deletion of oldest starts when space is tight.
 

spitfire

Member
Joined
May 25, 2012
Messages
41
Isn't the Time Machine policy to only start deleting old data when space gets tight? Did you set a per user/share quota? Are some users reaching that already?
If you read the GUI, it says hourly snapshots kept for 24 hours, daily for a month, weekly for all previous months.

And indeed deletion of oldest starts when space is tight.
I think you've confused Time Machine's backup-set retention policy with what the OP is asking about: how long TrueNAS retains the ZFS snapshots of the dataset holding the backups.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,376
Snapshot maintenance in this case is handled by the user's smbd process during the SMB2 tree connect to the user's personal Time Machine share. If the user stops connecting to the share, snapshots are left as-is until the next tree connect. We try to keep a minimum of 24 snapshots (configurable via an auxiliary parameter) for at least 7 days before pruning them down. If the number is growing without bounds, or space management is becoming a problem, then I can revisit and fix the logic in question.
 

sretalla

Wizened Sage
Joined
Jan 1, 2016
Messages
4,289
I think you've confused Time Machine's backup-set retention policy with what the OP is asking about: how long TrueNAS retains the ZFS snapshots of the dataset holding the backups.
I see aapltm snapshots being created but none of them is being deleted. Some of them have been around for almost a month and some users have more than 80 snapshots.
I was just pointing out how many snapshots the OP could expect to see for each user since it was clear to me that they were already looking at the ZFS snapshots created by time machine.
 

Borja Marcos

Member
Joined
Nov 24, 2014
Messages
110
I have a system with roughly 60 users. Some of them have more than 100 aapltm snapshots, which really seems like too many. Others (like my own dataset) keep more reasonable numbers, in the 30s.

I can't find it; is there documentation on how to adjust this a bit?

Thank you!

An example:

Code:
# zfs list -t snapshot -d 1 -o name,creation main/homes/user | fgrep aapl
main/homes/user@aapltm-1609747623          Mon Jan  4  9:07 2021
main/homes/user@aapltm-1609751463          Mon Jan  4 10:11 2021
main/homes/user@aapltm-1609754659          Mon Jan  4 11:04 2021
main/homes/user@aapltm-1609758556          Mon Jan  4 12:09 2021
main/homes/user@aapltm-1609761781          Mon Jan  4 13:03 2021
main/homes/user@aapltm-1610005414          Thu Jan  7  8:43 2021
main/homes/user@aapltm-1610009276          Thu Jan  7  9:47 2021
main/homes/user@aapltm-1610016912          Thu Jan  7 11:55 2021
main/homes/user@aapltm-1610020735          Thu Jan  7 12:58 2021
main/homes/user@aapltm-1610089674          Fri Jan  8  8:07 2021
main/homes/user@aapltm-1610093180          Fri Jan  8  9:06 2021
main/homes/user@aapltm-1610103875          Fri Jan  8 12:04 2021
main/homes/user@aapltm-1610350143          Mon Jan 11  8:29 2021
main/homes/user@aapltm-1610357990          Mon Jan 11 10:39 2021
main/homes/user@aapltm-1610362667          Mon Jan 11 11:57 2021
main/homes/user@aapltm-1610439461          Tue Jan 12  9:17 2021
main/homes/user@aapltm-1610447620          Tue Jan 12 11:33 2021
main/homes/user@aapltm-1610448787          Tue Jan 12 11:53 2021
main/homes/user@aapltm-1610456666          Tue Jan 12 14:04 2021
main/homes/user@aapltm-1610693693          Fri Jan 15  7:54 2021
main/homes/user@aapltm-1610700381          Fri Jan 15  9:46 2021
main/homes/user@aapltm-1610715253          Fri Jan 15 13:54 2021
main/homes/user@aapltm-1610971052          Mon Jan 18 12:57 2021
main/homes/user@aapltm-1611043017          Tue Jan 19  8:56 2021
main/homes/user@aapltm-1611044779          Tue Jan 19  9:26 2021
main/homes/user@aapltm-1611059612          Tue Jan 19 13:33 2021
main/homes/user@aapltm-1611215838          Thu Jan 21  8:57 2021
main/homes/user@aapltm-1611217544          Thu Jan 21  9:25 2021
main/homes/user@aapltm-1611220920          Thu Jan 21 10:22 2021
main/homes/user@aapltm-1611225485          Thu Jan 21 11:38 2021
main/homes/user@aapltm-1611228793          Thu Jan 21 12:33 2021
main/homes/user@aapltm-1611229712          Thu Jan 21 12:48 2021
main/homes/user@aapltm-1611239194          Thu Jan 21 15:26 2021
main/homes/user@aapltm-1611305193          Fri Jan 22  9:46 2021
main/homes/user@aapltm-1611306638          Fri Jan 22 10:10 2021
main/homes/user@aapltm-1611308635          Fri Jan 22 10:43 2021
main/homes/user@aapltm-1611314109          Fri Jan 22 12:15 2021
main/homes/user@aapltm-1611318645          Fri Jan 22 13:30 2021
main/homes/user@aapltm-1611320946          Fri Jan 22 14:09 2021
main/homes/user@aapltm-1611562635          Mon Jan 25  9:17 2021
main/homes/user@aapltm-1611565074          Mon Jan 25  9:57 2021
main/homes/user@aapltm-1611583414          Mon Jan 25 15:03 2021
main/homes/user@aapltm-1611651833          Tue Jan 26 10:03 2021
main/homes/user@aapltm-1611664572          Tue Jan 26 13:36 2021
main/homes/user@aapltm-1611730908          Wed Jan 27  8:01 2021
main/homes/user@aapltm-1611760574          Wed Jan 27 16:16 2021
main/homes/user@aapltm-1611824046          Thu Jan 28  9:54 2021
main/homes/user@aapltm-1611826276          Thu Jan 28 10:31 2021
main/homes/user@aapltm-1611836819          Thu Jan 28 13:27 2021
main/homes/user@aapltm-1611837958          Thu Jan 28 13:45 2021
main/homes/user@aapltm-1611912181          Fri Jan 29 10:23 2021
main/homes/user@aapltm-1611917833          Fri Jan 29 11:57 2021
main/homes/user@aapltm-1611921889          Fri Jan 29 13:04 2021
main/homes/user@aapltm-1611925095          Fri Jan 29 13:58 2021
main/homes/user@aapltm-1612174429          Mon Feb  1 11:13 2021
main/homes/user@aapltm-1612175427          Mon Feb  1 11:30 2021
main/homes/user@aapltm-1612176662          Mon Feb  1 11:51 2021
main/homes/user@aapltm-1612182000          Mon Feb  1 13:20 2021
main/homes/user@aapltm-1612339712          Wed Feb  3  9:08 2021
main/homes/user@aapltm-1612355508          Wed Feb  3 13:31 2021
main/homes/user@aapltm-1612427084          Thu Feb  4  9:24 2021
main/homes/user@aapltm-1612430973          Thu Feb  4 10:29 2021
main/homes/user@aapltm-1612432225          Thu Feb  4 10:50 2021
main/homes/user@aapltm-1612434732          Thu Feb  4 11:32 2021
main/homes/user@aapltm-1612442343          Thu Feb  4 13:39 2021
main/homes/user@aapltm-1612517186          Fri Feb  5 10:26 2021
main/homes/user@aapltm-1612528855          Fri Feb  5 13:40 2021
main/homes/user@aapltm-1612768262          Mon Feb  8  8:11 2021
main/homes/user@aapltm-1612773111          Mon Feb  8  9:31 2021
main/homes/user@aapltm-1612795379          Mon Feb  8 15:42 2021
main/homes/user@aapltm-1612857440          Tue Feb  9  8:57 2021
main/homes/user@aapltm-1612873924          Tue Feb  9 13:32 2021
main/homes/user@aapltm-1612942262          Wed Feb 10  8:31 2021
main/homes/user@aapltm-1612960632          Wed Feb 10 13:37 2021
main/homes/user@aapltm-1613115309          Fri Feb 12  8:35 2021
main/homes/user@aapltm-1613132110          Fri Feb 12 13:15 2021
main/homes/user@aapltm-1613133970          Fri Feb 12 13:46 2021
main/homes/user@aapltm-1613145290          Fri Feb 12 16:54 2021
main/homes/user@aapltm-1613388215          Mon Feb 15 12:23 2021
main/homes/user@aapltm-1613391078          Mon Feb 15 13:11 2021
main/homes/user@aapltm-1613459464          Tue Feb 16  8:11 2021
main/homes/user@aapltm-1613462484          Tue Feb 16  9:01 2021
main/homes/user@aapltm-1613472171          Tue Feb 16 11:42 2021
main/homes/user@aapltm-1613479070          Tue Feb 16 13:37 2021
main/homes/user@aapltm-1613548208          Wed Feb 17  8:50 2021
main/homes/user@aapltm-1613564265          Wed Feb 17 13:17 2021
main/homes/user@aapltm-1613636853          Thu Feb 18  9:27 2021
main/homes/user@aapltm-1613639250          Thu Feb 18 10:07 2021
main/homes/user@aapltm-1613643467          Thu Feb 18 11:17 2021
main/homes/user@aapltm-1613652043          Thu Feb 18 13:40 2021
main/homes/user@aapltm-1613988172          Mon Feb 22 11:02 2021
main/homes/user@aapltm-1613997188          Mon Feb 22 13:33 2021
main/homes/user@aapltm-1614067573          Tue Feb 23  9:06 2021
main/homes/user@aapltm-1614154138          Wed Feb 24  9:09 2021
main/homes/user@aapltm-1614159575          Wed Feb 24 10:39 2021
main/homes/user@aapltm-1614169995          Wed Feb 24 13:33 2021
main/homes/user@aapltm-1614247194          Thu Feb 25 10:59 2021
main/homes/user@aapltm-1614256559          Thu Feb 25 13:36 2021
main/homes/user@aapltm-1614343651          Fri Feb 26 13:47 2021
main/homes/user@aapltm-1614592646          Mon Mar  1 10:57 2021
main/homes/user@aapltm-1614601839          Mon Mar  1 13:30 2021
main/homes/user@aapltm-1614782926          Wed Mar  3 15:48 2021
main/homes/user@aapltm-1614847230          Thu Mar  4  9:40 2021
main/homes/user@aapltm-1614931850          Fri Mar  5  9:10 2021
main/homes/user@aapltm-1614939989          Fri Mar  5 11:26 2021
main/homes/user@aapltm-1614943480          Fri Mar  5 12:24 2021
main/homes/user@aapltm-1614956568          Fri Mar  5 16:02 2021
main/homes/user@aapltm-1615199404          Mon Mar  8 11:30 2021
main/homes/user@aapltm-1615204227          Mon Mar  8 12:50 2021
main/homes/user@aapltm-1615205448          Mon Mar  8 13:10 2021
main/homes/user@aapltm-1615206487          Mon Mar  8 13:28 2021
main/homes/user@aapltm-1615282210          Tue Mar  9 10:30 2021
main/homes/user@aapltm-1615293041          Tue Mar  9 13:30 2021
main/homes/user@aapltm-1615361667          Wed Mar 10  8:34 2021
main/homes/user@aapltm-1615368974          Wed Mar 10 10:36 2021
main/homes/user@aapltm-1615461973          Thu Mar 11 12:26 2021
main/homes/user@aapltm-1615466835          Thu Mar 11 13:47 2021
main/homes/user@aapltm-1615554012          Fri Mar 12 14:00 2021
main/homes/user@aapltm-1615793077          Mon Mar 15  8:24 2021
main/homes/user@aapltm-1615898082          Tue Mar 16 13:34 2021
main/homes/user@aapltm-1615968890          Wed Mar 17  9:15 2021
main/homes/user@aapltm-1615974744          Wed Mar 17 10:52 2021
main/homes/user@aapltm-1615980444          Wed Mar 17 12:27 2021
main/homes/user@aapltm-1615984147          Wed Mar 17 13:29 2021
main/homes/user@aapltm-1616059525          Thu Mar 18 10:25 2021
main/homes/user@aapltm-1616398319          Mon Mar 22  8:32 2021
main/homes/user@aapltm-1616411022          Mon Mar 22 12:03 2021
root@truenas:~ #
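To see which users are accumulating the most, the output of zfs list -t snapshot -o name -H can be fed through a small helper like this (a hypothetical script, not part of TrueNAS; it just counts aapltm snapshots per dataset):

```python
from collections import Counter

def count_aapltm(zfs_list_output: str) -> Counter:
    """Count aapltm snapshots per dataset, given the plain text produced
    by `zfs list -t snapshot -o name -H` (one snapshot name per line)."""
    return Counter(
        line.split("@", 1)[0]            # dataset part before the '@'
        for line in zfs_list_output.splitlines()
        if "@aapltm-" in line            # ignore non-Time-Machine snapshots
    )

# Example with two home datasets:
sample = "main/homes/a@aapltm-1\nmain/homes/a@aapltm-2\nmain/homes/b@aapltm-3\n"
print(count_aapltm(sample).most_common())  # [('main/homes/a', 2), ('main/homes/b', 1)]
```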
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,376
The retention is configurable based on the following two parameters:
Code:
tmprotect:retention = 7 #days
tmprotect:min_snaps = 24

The current algorithm is to remove all snapshots older than the retention period, as long as we are left with at least min_snaps when it is complete. We err on the side of caution because the Time Machine backup is opaque to us. It is better to have too many snapshots than too few, in case a broken Time Machine backup has to be restored or repaired.
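As a rough illustration (my own sketch of the rule described above, not the actual smbd code): snapshots older than tmprotect:retention days are candidates for deletion, but never so many that fewer than tmprotect:min_snaps remain. The names embed a unix timestamp (e.g. aapltm-1616398319), so age can be read directly from the name:

```python
import time

RETENTION_DAYS = 7   # tmprotect:retention default
MIN_SNAPS = 24       # tmprotect:min_snaps default

def snaps_to_prune(snapshots, now=None,
                   retention_days=RETENTION_DAYS, min_snaps=MIN_SNAPS):
    """Return the snapshots eligible for deletion: those older than the
    retention period, trimmed so that at least min_snaps always remain."""
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400
    # Oldest first, ordered by the unix timestamp embedded in the name.
    ordered = sorted(snapshots, key=lambda s: int(s.rsplit("-", 1)[1]))
    expired = [s for s in ordered if int(s.rsplit("-", 1)[1]) < cutoff]
    deletable = max(0, len(ordered) - min_snaps)  # keep min_snaps overall
    return expired[:deletable]
```

With 30 snapshots that are all ten days old, only the 6 oldest are returned; with 10 such snapshots (fewer than min_snaps), nothing is.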
 

Borja Marcos

Member
Joined
Nov 24, 2014
Messages
110
Thank you. It makes sense.

What I am wondering is, despite snapshots being almost transparent, what happens if you have around 4000? Maybe I am piling up too many.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,376
Thank you. It makes sense.

What I am wondering is, despite snapshots being almost transparent, what happens if you have around 4000? Maybe I am piling up too many.
4000 isn't too many, but you can try tweaking those values. For instance:

tmprotect:retention = 14
tmprotect:min_snaps = 12
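For anyone wondering where these go: presumably they belong in the share's Auxiliary Parameters field, which lands them in that share's section of the generated smb4.conf, roughly like this (the share name here is just an example):

```
[tm_multiuser]
    ; hypothetical Time Machine share section; the tmprotect lines are the point
    tmprotect:retention = 14
    tmprotect:min_snaps = 12
```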

I could probably make the algorithm more complex and improve the heuristics for when we take a snapshot. The goal, though, is to keep at least one good restore point.
 

Borja Marcos

Member
Joined
Nov 24, 2014
Messages
110
Thank you.

Not an easy decision; making it more complicated can create its own share of problems!
 

JRM

Newbie
Joined
Jul 28, 2016
Messages
2
Please correct me if needed: my real-world experience is that the aapltm snapshots consume dataset space like traditional snapshots; however, I don't think the client-side Time Machine app has any way to account for this.

In my case, my multiuser TM datasets need to have quotas, and even with a dataset reasonably larger than the complete backup sizes, that space will also get eaten by the unmitigated creation of these snapshots, depending on the size of the incremental backups. This space won't be 'cleaned' by the Time Machine app, of course. I've received "backup failed" errors because of this, and I need to manually delete snapshots as a workaround.

Even after changing the above tmprotect parameters to more reasonable numbers for my case, I'm finding that I'm not seeing the desired behavior of pruning the snapshot accumulation down to sufficient levels. I may go back to a basic SMB share with regular snapshot tasks until there is a more controlled way to manage the multiuser TM snapshots.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,376
Please correct me if needed: my real-world experience is that the aapltm snapshots consume dataset space like traditional snapshots; however, I don't think the client-side Time Machine app has any way to account for this.

In my case, my multiuser TM datasets need to have quotas, and even with a dataset reasonably larger than the complete backup sizes, that space will also get eaten by the unmitigated creation of these snapshots, depending on the size of the incremental backups. This space won't be 'cleaned' by the Time Machine app, of course. I've received "backup failed" errors because of this, and I need to manually delete snapshots as a workaround.

Even after changing the above tmprotect parameters to more reasonable numbers for my case, I'm finding that I'm not seeing the desired behavior of pruning the snapshot accumulation down to sufficient levels. I may go back to a basic SMB share with regular snapshot tasks until there is a more controlled way to manage the multiuser TM snapshots.
The quotas are applied through ZFS user quotas. Generally, you don't want any dataset pushing up against a ZFS dataset quota, because dataset quotas are strictly enforced (possibly resulting in significant performance degradation as you hit them); user quotas are softer. Snapshot creation shouldn't be unmitigated, but I can adjust the algorithm when I have time (maybe read the plist for the last backup and take a snapshot during tree disconnect if it has changed).
 