"Smart" periodic snapshots? Dynamic expirations and "lifespan bumps"?

Joined
Oct 22, 2019
Messages
3,641
You can tell I twisted my brain into several knots trying to come up with a title that describes my question...

Is it possible to create a single periodic snapshot task that follows the same principles as "smart" backups?

For example, from a single task...
  • Snapshots are created daily and marked with a 4-week expiration
  • Of these, a couple are tagged for a longer expiration of 6 months
  • Of those, a couple are tagged for an even longer expiration of 2 years
  • And finally, of those, a couple are tagged for the longest expiration of 10 years

This would theoretically yield about a couple dozen daily snapshots that are destroyed on a 4-week conveyor belt, while a couple of them "survive" for another 6 months. Of the eventual handful that survive those six months, some will be destroyed, while a few will survive for another two years. Only a couple of those from the group that survives for 2 years will be allowed to live for another 10 years. Think of it as a snapshot pyramid from a single task.

[Attached image: snapshot-pyramid.png]


I'm not sure what the terminology for this procedure is, but I recall backup and filesystem snapshot software (not TrueNAS- or ZFS-related) using terms like "smart" or "staggered" backups.

Is this possible with TrueNAS, or must you configure a different Task for each expiration lifespan? (I'm hoping for the former, as it requires fewer overall tasks, which is more elegant.)
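
In cron terms, the multi-task fallback I'm describing would look something like the sketch below. The dataset name, snapshot name prefixes, and schedules are just placeholders, and the pruning of expired snapshots is left out:
Code:
# Hypothetical four-tier "pyramid" as four separate jobs on an example
# dataset "tank/data". A separate pruning script (not shown) would destroy
# each tier's snapshots once they outlive that tier's lifespan.
# (% must be escaped as \% inside a crontab.)
0 0 * * *  /sbin/zfs snapshot tank/data@daily-$(date +\%Y\%m\%d)    # expires after 4 weeks
0 0 * * 0  /sbin/zfs snapshot tank/data@weekly-$(date +\%Y\%m\%d)   # expires after 6 months
0 0 1 * *  /sbin/zfs snapshot tank/data@monthly-$(date +\%Y\%m)     # expires after 2 years
0 0 1 1 *  /sbin/zfs snapshot tank/data@yearly-$(date +\%Y)         # expires after 10 years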
 


Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It should be noted that even if you have 2 or more snapshots taken at the exact same time but with different retention times, they would not take up any more space than a single snapshot would. (Except for a little overhead.)
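
If you want to convince yourself, zfs snapshot can create several snapshots of the same dataset atomically in one command. The dataset name here is only an example:
Code:
# Two snapshots of the same dataset, created at the exact same instant:
zfs snapshot tank/data@keep-4weeks tank/data@keep-10years
# Both point at the same blocks, so each shows (almost) no unique space in USED:
zfs list -r -t snapshot -o name,used,referenced tank/data

They only start to cost space as the live filesystem diverges from them, and that space is shared between the two until one of them is destroyed.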

For example, I have hourly snapshots, kept for 1 day, so 24 of them. And I have a second set of hourly snapshots, kept for 7 days, except that there are only 7 of them. Meaning my "daily" snapshots keep overwriting themselves every hour until the end of the day.

This is for my laptop, desktop and media server (all of which use ZFS). That means each of my 7 daily snapshots is as current as the last hourly snapshot taken on its day, except that I can still get at it days later. I use this screwy method to account for shutting down my laptop or desktop.
 

Forza

Explorer
Joined
Apr 28, 2021
Messages
81
And I have a second set of hourly snapshots, kept for 7 days, except that there are only 7 of them. Meaning my "daily" snapshots keep overwriting themselves every hour until the end of the day.
Can you explain how this is set up? How do you have only 7 snapshots when hourly snapshots are created with a 7-day retention?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Can you explain how this is set up? How do you have only 7 snapshots when hourly snapshots are created with a 7-day retention?
Sure. But remember, this is not a FreeNAS / TrueNAS setup. It's on Linux.

Let's start with the cronjob:
Code:
#
# Snapshot home directory
#
0 * * * * /root/bin/zfs_home_snap >/dev/null 2>&1

Here is what the cronjob does:
Code:
#!/bin/bash
export MY_HOME_DATASET=$(df -h /home | tail -1 | cut -f1 -d' ')    # dataset backing /home, e.g. "rpool/home"
export MY_DAY_OF_WEEK="dow_$(date +%w)"     # day of week, 0-6, used as the snapshot name
export MY_HOUR_OF_DAY="hod_$(date +%H)"     # hour of day, 00-23, used as the snapshot name
###############################################################
#
#    Day of week snapshots
#
# First, destroy old snapshot, if it exists
# Second, create new snapshot
#
###############################################################
/sbin/zfs destroy ${MY_HOME_DATASET}@${MY_DAY_OF_WEEK} >/dev/null 2>&1
/sbin/zfs snap ${MY_HOME_DATASET}@${MY_DAY_OF_WEEK} >/dev/null 2>&1
###############################################################
#
#    Hour of day snapshots
#
# First, destroy old snapshot, if it exists
# Second, create new snapshot
#
###############################################################
/sbin/zfs destroy ${MY_HOME_DATASET}@${MY_HOUR_OF_DAY} >/dev/null 2>&1
/sbin/zfs snap ${MY_HOME_DATASET}@${MY_HOUR_OF_DAY} >/dev/null 2>&1

That's it. Gives me this:
Code:
> zfs list -t all -r rpool/home
NAME                USED  AVAIL     REFER  MOUNTPOINT
rpool/home         2.63G  4.37G      943M  legacy
rpool/home@dow_0   21.8M      -      938M  -
rpool/home@dow_1   16.5M      -      940M  -
rpool/home@dow_2    110M      -     1.01G  -
rpool/home@dow_3   15.2M      -      940M  -
rpool/home@dow_4   18.6M      -      940M  -
rpool/home@hod_13   117M      -     1.06G  -
rpool/home@hod_14  89.3M      -     1.06G  -
rpool/home@hod_15  92.3M      -     1.06G  -
rpool/home@hod_16  80.6M      -     1007M  -
rpool/home@hod_17   150M      -     1.06G  -
rpool/home@hod_18   114M      -     1.05G  -
rpool/home@hod_19   107M      -     1.05G  -
rpool/home@hod_20   107M      -     1.05G  -
rpool/home@hod_21   120M      -     1.05G  -
rpool/home@hod_22   118M      -     1.05G  -
rpool/home@dow_5      0B      -      949M  -
rpool/home@hod_23     0B      -      949M  -
rpool/home@hod_00  11.3M      -      949M  -
rpool/home@hod_01  28.2M      -      955M  -
rpool/home@hod_02  10.9M      -      942M  -
rpool/home@hod_03  10.8M      -      942M  -
rpool/home@hod_04  10.9M      -      942M  -
rpool/home@hod_05  10.8M      -      940M  -
rpool/home@hod_06  10.8M      -      940M  -
rpool/home@hod_07  10.8M      -      940M  -
rpool/home@hod_08  10.6M      -      940M  -
rpool/home@hod_09  10.8M      -      940M  -
rpool/home@hod_10  11.1M      -      940M  -
rpool/home@hod_11   144M      -     1.04G  -
rpool/home@dow_6      0B      -      944M  -
rpool/home@hod_12     0B      -      944M  -
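
The reason the count never grows is that each snapshot name encodes a time slot, so every new snapshot simply replaces the previous occupant of that slot; there is no expiration bookkeeping at all. The same overwrite trick could be extended into longer tiers, something along these lines (an untested sketch reusing the variables from the script above):
Code:
# Hypothetical longer tiers, appended to the same hourly script:
MY_DAY_OF_MONTH="dom_$(date +%d)"     # up to 31 rolling dailies, about a month of depth
MY_MONTH_OF_YEAR="moy_$(date +%m)"    # 12 rolling monthlies, a year of depth
/sbin/zfs destroy ${MY_HOME_DATASET}@${MY_DAY_OF_MONTH} >/dev/null 2>&1
/sbin/zfs snap    ${MY_HOME_DATASET}@${MY_DAY_OF_MONTH} >/dev/null 2>&1
/sbin/zfs destroy ${MY_HOME_DATASET}@${MY_MONTH_OF_YEAR} >/dev/null 2>&1
/sbin/zfs snap    ${MY_HOME_DATASET}@${MY_MONTH_OF_YEAR} >/dev/null 2>&1

That would give you something like the pyramid from the first post, all driven from a single hourly cron job.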
 