Make a replicated snapshot last indefinitely

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
I have auto snapshots and auto replication between my primary and backup pools, Ark and ArkBak. Snapshots have defined expirations: two weeks and one year. I want to add a single snapshot (and replicate it) that will persist indefinitely.

I'm creating a manual snapshot and replicating it incrementally on the command line, but it gets deleted on the target end at the next automatic replication. Here's what I'm trying:

1. Created a manual snapshot on the source in the TrueNAS GUI > Storage > Snapshots > Add, named manual-2023-03-30_10-47.

2. Manually replicated it to the backup pool, ArkBak, using the most recent auto snapshot as the incremental source:

Code:
Tabernacle:~$ sudo zfs send --verbose -i @auto-2023-03-30_00-01-2w Ark/Media@manual-2023-03-30_10-47 | sudo zfs receive -vFdu ArkBak/Media
Password:
send from Ark/Media@auto-2023-03-30_00-01-2w to Ark/Media@manual-2023-03-30_10-47 estimated size is 1.16G
total estimated size is 1.16G
receiving incremental stream of Ark/Media@manual-2023-03-30_10-47 into ArkBak/Media@manual-2023-03-30_10-47
TIME        SENT   SNAPSHOT Ark/Media@manual-2023-03-30_10-47
10:49:36   9.19M   Ark/Media@manual-2023-03-30_10-47
10:49:37    581M   Ark/Media@manual-2023-03-30_10-47
10:49:38   1.14G   Ark/Media@manual-2023-03-30_10-47
received 1.16G stream in 6 seconds (199M/sec)

3. As expected, the manual snapshot is now on the backup pool (not all snapshots shown):
Code:
Tabernacle:~$ zfs list -t all | grep Media | column -t
Ark/Media                              2.89T  1.35T  2.65T  /mnt/Ark/Media
. . .
Ark/Media@auto-2023-03-30_00-01-2w     3.87M  -      2.65T  -
Ark/Media@manual-2023-03-30_10-47      0B     -      2.65T  -
ArkBak/Media                           2.89T  1.34T  2.66T  /mnt/ArkBak/Media
. . .
ArkBak/Media@auto-2023-03-30_00-01-2w  2.73M  -      2.65T  -
ArkBak/Media@manual-2023-03-30_10-47   0B     -      2.66T  -

4. After the daily midnight automatic replication, the snapshot is deleted from the target, but not from the source. And if I were to replicate it non-incrementally, apparently that would require wiping out all the existing snapshots on the target.

How can I make a single snapshot persist indefinitely, while having the automatic snapshot/replication going on as normal?
 
Joined
Jul 3, 2015
Messages
926
I think the snapshot purging is based on the format of the snapshot name. Perhaps make your indefinite snapshot name/format something very different.
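For example (just a sketch; the snapshot name here is made up, using your Ark/Media dataset), something the periodic-snapshot naming schema shouldn't match:

Code:
# hypothetical name that the auto-snapshot naming schema won't match
sudo zfs snapshot Ark/Media@keep-2023-03-30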
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
How can I make a single snapshot persist indefinitely, while having the automatic snapshot/replication going on as normal?

In the replication task, explicitly configure a retention policy:

[Screenshot: replication task retention policy settings]
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
In the replication task, explicitly configure a retention policy:

[Screenshot: replication task retention policy settings]
I don't use a replication task for this one-time replication. As shown above, I'm doing it on the command line. Should I be doing it somehow as a one-time task?
When I tried to do that before to replicate the manual snapshot, it gave an error:
"No incremental base on dataset 'Ark/Media' and replication from scratch is not allowed."

If I allow replication from scratch, I think it may delete all existing snapshots on target.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
How about after the manual replication send/receive, create a ZFS bookmark from the received snapshot on the destination? You'll be able to use the bookmark as the source of a zfs send/receive, even if the snapshot's deleted, to recreate the snapshot from scratch.
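Something along these lines, as a sketch only (dataset and snapshot names taken from your commands; the bookmark name is arbitrary):

Code:
# bookmark the received snapshot on the destination
sudo zfs bookmark ArkBak/Media@manual-2023-03-30_10-47 ArkBak/Media#manual-2023-03-30_10-47
# later, even if that snapshot has been destroyed, the bookmark can still serve
# as the incremental source of a send, e.g.:
# zfs send -i ArkBak/Media#manual-2023-03-30_10-47 ArkBak/Media@<newer-snapshot> | zfs receive ...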
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
Just an idea:

If you replicate from the command line, you can use zfs hold to prevent the snapshot from being deleted.

See the zfs hold man page for examples.
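A rough sketch, using the snapshot name from your first post (the hold tag "keep" is arbitrary):

Code:
# place a named hold on both copies; zfs destroy refuses to remove a held snapshot
sudo zfs hold keep Ark/Media@manual-2023-03-30_10-47
sudo zfs hold keep ArkBak/Media@manual-2023-03-30_10-47
# the hold can be removed later with: zfs release keep <snapshot>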
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
Thanks @Samuel Tai. I've read about bookmarks now. I suspect they would not do what I want, although I don't understand them well enough to convince anyone of that. Essentially, as you say, bookmarks can (only) be used as the source of a zfs send/receive; you still need a snapshot as the incremental target.

So let's say I make a bookmark from a snapshot on my backup pool. Years later, I've deleted most of what I had when I made the bookmark, but I want to get at those files again. So I would do a zfs send/receive something like this:
Code:
zfs send -i #my-old-bookmark ArkBak/Media@auto-2025 ... | zfs receive ...

Won't this just preserve all the deletions that were made over the years?

I'm thinking about zfs hold. Maybe I could put a hold on a current snapshot so it simply can't be deleted? EDIT: what @blanchet said ;-)
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Won't this just preserve all the deletions that were made over the years?

No, this will recreate the state of the snapshot at the time of the bookmark.
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
No, this will recreate the state of the snapshot at the time of the bookmark.
I'm really confused now. Everything I've read seems to say that zfs send -i snap1 snap2 sends the differences between them, and if you then go and look at the destination, it would have the state of snap2.

Also, I assume that, like source snapshots, the bookmark would have to exist on the destination already. What if it doesn't?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Remember, ZFS is a copy-on-write filesystem. With a bookmark, you're attaching a label to a set of references to a set of blocks, in parallel to the references the snapshot holds to those same blocks. Deleting the snapshot only deletes the snapshot's references to those blocks, but the bookmark's references to them are still there.
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
Just an idea:

If you replicate from the command line, you can use zfs hold to prevent the snapshot from being deleted.
Good idea, I'm testing that now. I put a hold on a manually replicated snapshot, but there will be an automatic snapshot/replication at midnight. I wonder if the whole thing will fail because it can't delete the snapshot it doesn't like.
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
I wonder if the whole thing will fail because it can't delete the snapshot it doesn't like.
Yup, the automatic replication failed because it couldn't destroy the held snapshot.
Code:
cannot receive incremental stream: dataset is busy.
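To double-check that the hold is the blocker, I can list the holds on the destination snapshot (names from my earlier commands):

Code:
Tabernacle:~$ zfs holds ArkBak/Media@manual-2023-03-30_10-47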
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
No, this will recreate the state of the snapshot at the time of the bookmark.
I tested the bookmark approach as best I could, and it behaved as I expected. The data present when the bookmark was created were unrecoverable after the snapshot it was based on was destroyed. Here's what I did to test:
  1. Create datasets test and testbak; create file1 in test.
  2. Snapshot test (@snap1) and replicate to testbak.
  3. Create file2 in test, snapshot (@snap2), incrementally replicate to testbak. Now there are 2 files and 2 snapshots in both datasets.
  4. Bookmark snap2 on testbak (#bm-snap2).
  5. Delete file1 and file2 in test (primary dataset).
  6. Create a third file (file3), snapshot (@snap3), replicate to testbak.
  7. Destroy earlier snapshots (snap1 and snap2) in both datasets.
  8. Replicate from testbak to test, using the bookmark as the source.
Now both datasets have only file3 and snap3. @snap3 contains only file3. file1 and file2 are gone.

I would be happy to learn I have done something wrong. However, the snapshots the bookmark is based on must be deleted for a real test. I already learned that automatic replication will fail if it can't destroy snapshots.

Here are the actual commands:
Code:
1
zfs create Ark/test
zfs create Ark/testbak
cd /mnt/Ark/test
touch file1

2
zfs snapshot Ark/test@snap1
zfs send -v Ark/test@snap1 | sudo zfs receive -F Ark/testbak

3
touch file2
zfs snapshot Ark/test@snap2
zfs send -v -i @snap1 Ark/test@snap2 | sudo zfs receive -F Ark/testbak

zfs list -t snapshot | grep test | column -t
  Ark/test@snap1     99.4K  -  170K  -
  Ark/test@snap2     0B     -  185K  -
  Ark/testbak@snap1  99.4K  -  170K  -
  Ark/testbak@snap2  0B     -  185K  -

ls -l /mnt/Ark/testbak
  total 16
  drwxr-xr-x   2 root  wheel     4B Apr  1 06:49 ./
  drwxr-xr-x  13 root  wheel    13B Apr  1 06:45 ../
  -rw-r--r--   1 root  wheel     0B Apr  1 06:25 file1
  -rw-r--r--   1 root  wheel     0B Apr  1 06:49 file2

4
zfs bookmark Ark/testbak@snap2 Ark/testbak#bm-snap2
zfs list -t bookmark | column -t
  NAME                  USED  AVAIL  REFER  MOUNTPOINT
  Ark/testbak#bm-snap2  -     -      185K   -

5
rm file*

6
touch file3
zfs snapshot Ark/test@snap3
zfs send -v -i @snap2 Ark/test@snap3 | sudo zfs receive -F Ark/testbak

7
zfs destroy Ark/test@snap1%snap2
zfs destroy Ark/testbak@snap1%snap2

8
zfs send -v -i Ark/testbak#bm-snap2 Ark/testbak@snap3 | sudo zfs receive -F Ark/test

ls -l .zfs/snapshot/snap3
  total 1
  drwxr-xr-x  2 root  wheel     3B Apr  1 11:06 ./
  dr-xr-xr-x+ 3 root  wheel     3B Apr  1 11:14 ../
  -rw-r--r--  1 root  wheel     0B Apr  1 11:06 file3
 

Glorious1

Guru
Joined
Nov 23, 2014
Messages
1,211
I also tried making a manual snapshot in the primary dataset called manual-2023-04-02_16-21_20y. Then, in the replication task for that dataset, under "Also include naming schema", I put manual-%Y-%m-%d_%H-%M-2w_20y.
But it just gets ignored during replication.

Now I'm going to try temporarily changing the task's "Snapshot retention policy" to Custom and setting "Snapshot Lifetime" to 20 years, then changing it back to the default after the next replication. No idea if that will work. This stuff is pretty mysterious. And there's no zfs snapshot property related to retention/lifetime, so I'll just have to wait until a couple of 2-week cycles go by to see how it reacts.

On the bright side, zfs/TrueNAS seem much less picky about extra snapshots on the primary dataset (send side), so at least I can leave one with a hold there. Just no backup.
 