Snapshot disconnect

Constantin
Vampire Pig · Joined May 19, 2017 · Messages 1,829
Good morning,
I wonder if anyone has run into a similar issue and how they dealt with it. Specifically, I have an off-site NAS that was remote enough that it didn't get rebooted for a while after a power failure several months ago. If I understand the situation correctly, my snapshot sends now fail because the snapshots that should have been sent to the remote NAS have since expired and been deleted locally. The error I get is:

"YYY" failed: cannot send XXX@auto-2022-03-06_03-00 recursively: snapshot XXX/Time Machine/TimeMachine_SA@auto-2022-03-06_03-00 does not exist warning: cannot send 'XXX@auto-2022-03-06_03-00': backup failed cannot receive: failed to read from stream"​


I presume this error is non-recoverable and I will have to redo the entire backup from scratch?

sretalla
Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages 9,703
You could try to transfer the missing snapshots manually with zfs send/recv (after rolling the target back to the last snapshot both sides still share), but that gets very messy if many of the snapshots in the middle are missing.
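Roughly, that manual repair would look like the sketch below. All names are hypothetical: 'tank' for the local pool, 'backup/tank' on a host called 'remote', and @auto-2022-01-31_03-00 standing in for the newest snapshot both sides still share.

    # Roll the destination back to the newest snapshot both sides still have
    # (-r also destroys any snapshots on the destination newer than it):
    ssh remote zfs rollback -r backup/tank@auto-2022-01-31_03-00

    # Send everything from that common snapshot up to the newest local one;
    # -R recurses into child datasets, -I includes all intermediate snapshots:
    zfs send -R -I tank@auto-2022-01-31_03-00 tank@auto-2022-03-06_03-00 | \
        ssh remote zfs receive -F backup/tank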
 
Joined Oct 22, 2019 · Messages 3,641
Bookmarks! Bookmarks! Bookmarks! Bookmarks solve this...

...but there's no GUI option that exposes ZFS bookmarks in TrueNAS.

:frown:

(Technically, you could probably create a script that automates bookmarking.)
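As a minimal sketch, such a script could just pair every snapshot with a same-named bookmark. This is bash, and the dataset name tank/data is hypothetical:

    #!/usr/bin/env bash
    # Create a matching bookmark (#name) for every snapshot (@name) under
    # tank/data, so a send reference survives after the snapshot expires.
    zfs list -H -t snapshot -o name -r tank/data | while IFS= read -r snap; do
        zfs bookmark "$snap" "${snap/@/#}" 2>/dev/null   # silent if it already exists
    done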

---

EDIT: To be clear: you'd "bookmark" every single snapshot.

As snapshots expire (to save space), their "bookmarks" remain. (Bookmarks take up zero space.)

In a situation like this, you can force an incremental send/recv using a bookmark on the source side, even though the snapshot in question no longer exists on the source.

The bookmark (on the source side) essentially tells the destination: "Hey, I still have a bookmark of an expired/deleted snapshot named #auto-2022-01-31. I see you still have the actual snapshot named @auto-2022-01-31 on your end. You must use that snapshot (on your side) as the 'base' snapshot for this incremental send. This will take longer than a normal incremental transfer, but it's an emergency, and I would prefer not to start all over with a full replication from scratch. I would love to use my own @auto-2022-01-31 as the base for this incremental transfer, but it's been destroyed. All I have is this bookmark."
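On the command line, that forced incremental looks roughly like this (dataset and host names hypothetical; note that only a plain -i accepts a bookmark as the base, per EDIT 3 below):

    # The source lost @auto-2022-01-31 but kept the bookmark #auto-2022-01-31;
    # the destination still holds @auto-2022-01-31, so it serves as the base:
    zfs send -i tank/data#auto-2022-01-31 tank/data@auto-2022-03-06 | \
        ssh remote zfs receive backup/data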

But honestly, the most pragmatic benefit of ZFS bookmarks shows up when the backup pool is much larger and doesn't share the source pool's space constraints: the source can keep practically unlimited bookmarks at no space cost, while the backup pool has room to retain the actual snapshots.

---

EDIT 2: With all of that said, not only does TrueNAS not expose bookmarks in the GUI, but even if it did, it would still need some sort of method for automatically falling back to a bookmark-based incremental when the first attempt fails because the source is missing the base snapshot. Otherwise, it's back to the command line all over again.
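A crude version of that fallback, sketched in bash with hypothetical names throughout, would be something like:

    #!/usr/bin/env bash
    # Prefer the base snapshot if it still exists locally; otherwise fall
    # back to the matching bookmark instead of failing the whole transfer.
    DS="tank/data"; BASE="auto-2022-01-31"; NEW="auto-2022-03-06"
    if zfs list -H -t snapshot "$DS@$BASE" >/dev/null 2>&1; then
        FROM="$DS@$BASE"   # normal incremental from the snapshot
    else
        FROM="$DS#$BASE"   # emergency incremental from the bookmark
    fi
    zfs send -i "$FROM" "$DS@$NEW" | ssh remote zfs receive backup/data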

---

EDIT 3: Bookmarks were introduced back in 2014, yet support for -R and -I with a bookmark as the incremental base was never implemented. :oops: The developers said this wasn't due to any technical constraint; they simply hadn't had time to implement it. Fast-forward eight years, and it appears this is still the case?

Matthew Ahrens, from 2014:
That's right -- not so much that we decided against it, as didn't have time to implement it.

( . . . )

There's no deep reason -- we just didn't have time and didn't need it for our use case. It would be great to see an extension to this to support -R, -I, etc. The issue is that -R uses an older style of zfs send kernel interface (ioctl) that we didn't want to mess with.

( . . . )

Hm, no that isn't supported. Bookmarks are not implemented as datasets, so we'd have to invent another mechanism for this, but I don't imagine it would be that difficult.

Constantin
Vampire Pig · Joined May 19, 2017 · Messages 1,829
I just pulled off the band-aid (Yaw-ouch!) and in 127 days or so, the two NASes will be back in perfect alignment. In the meantime, I've set the 'keep pending snapshots' flag, which should avoid this issue in the future. Thank you all for your help.