Replicated Snapshots confusion and possible permissions bug

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
I was excited to see that I could set a custom retention for replication snapshots.

I went ahead and set up a local dataset with snapshot retention of 2 weeks, then replicated it to another TrueNAS system with the retention set to 3 years. But now the question is: how on earth do I get to those older snapshots?

When I click Restore on the replication task, it wants to restore the entire dataset. Is there no way to restore just a single snapshot?

Next question is related to permissions. To my surprise, it's gone ahead and seemingly thrown a set of random users into the ACL. At this point I think I've found a bug. When I unlocked the replicated dataset on the second NAS, it showed user permissions for users that don't even exist on the primary NAS, and when I tried a second folder it complained that it couldn't find user ID 1007 (because this second NAS only has a few users). So it's matching up users by numeric ID, not by name. This seems like an oversight, right?
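For anyone puzzled by this behavior: ZFS stores only the numeric UID/GID on disk, and names are resolved against whatever passwd database the *local* system has. A quick way to see this (paths are hypothetical):

```shell
# `ls -l` resolves the on-disk numeric IDs to names via the local
# passwd/group database; `ls -n` shows the raw numbers as stored by ZFS.
# On the destination NAS, UID 1007 resolves to whatever local account
# happens to have that number, or stays a bare number if none does.
ls -n /mnt/tank/Data   # hypothetical replicated dataset path
```

So the "random users" in the ACL are just the destination's accounts that happen to share numbers with the source's accounts.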

My next question is: can I edit the permissions on the remote system without affecting the incoming replication tasks? Otherwise, users who have no business seeing this data will be able to, as long as there is a replication.

I thought about just locking the dataset again, but there doesn't appear to be an option for that.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Argh! This is frustrating. Half of my decision to purchase extra hardware was based on replication. So far I cannot find a way to re-lock the dataset, and editing the permissions is not possible since it's a read-only dataset.

I tried checking the box for encryption in the replication task, but all I get is:

Replication "Data > NAS02 Replication Task" failed: cannot receive new filesystem stream: encryption property 'encryption' cannot be set or excluded for raw streams. Broken pipe..

I think the only way I will be able to get around this is to re-create all my users, but with matching GIDs. That also means my plan of replicating snapshots to a friend's system is completely out the window, as there would be no way to re-lock them.
 
Joined
Oct 22, 2019
Messages
3,641
I think the only way I will be able to get around this is to re-create all my users, but with matching GIDs.
Unfortunately, that's how Unix permissions work, unless someone knows of a more advanced way of carrying over the "same" permissions when the user IDs (UIDs) don't match between the two systems. (Why do you need access to the filesystem on your friend's server? Are you replicating snapshots to their server as backups of your datasets? If you ever need to restore from such backups, you should be fine, as the UIDs would match your usernames.)

That also means my plan of replicating snapshots to a friends system is completely out the window, as there would be no way to re-lock them
If the datasets are already encrypted locally, you needn't "set" encryption when you send them over; you can preserve the properties (actually everything) by doing a raw send replication without sending any key to the destination. They will be locked at the destination, and they don't need to be unlocked to do incremental replications.
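As a sketch of what a raw send looks like at the CLI (dataset, snapshot, and host names here are hypothetical; `zfs send -w` transmits the encrypted blocks as-is, so no key is ever needed on the destination):

```shell
# One-off snapshot plus raw (-w) send: the stream carries the encrypted
# blocks and the encryption properties verbatim, so the dataset arrives
# locked and no key ever reaches the destination. `recv -u` leaves it
# unmounted on arrival.
zfs snapshot tank/Data@manual-2021-08-01
zfs send -w tank/Data@manual-2021-08-01 | ssh nas02 zfs recv -u backup/Data

# Incremental raw sends work the same way, and the destination dataset
# can stay locked the whole time:
zfs send -w -i @manual-2021-08-01 tank/Data@manual-2021-08-08 \
  | ssh nas02 zfs recv -u backup/Data
```

This is also why the GUI error above appears: a raw stream already carries its encryption properties, so asking the replication task to set encryption on top of it conflicts with the stream.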
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
I went through my users today and re-created them 100 numbers up to get them all in a line. Now I've learned that TrueNAS does not make the UID and GID match, so my UIDs are all in line but the GIDs are all mixed up o_O

Hopefully in the future they can improve how all this works. What a mess!

I would need access if I had to restore a single file for whatever reason. To my knowledge, the best (only?) way to grab a single file/folder is to either mount it and grab it on the target, or bring the entire replication back to the source system. If there's a 4 TB folder replicated and I need a single 1 MB file, it seems a little crazy to bring back the whole thing.
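One middle ground worth knowing about (assuming the destination dataset can be unlocked and mounted; paths and snapshot names here are hypothetical): every mounted ZFS dataset exposes its snapshots read-only under a hidden `.zfs/snapshot` directory, so a single file can be copied out without rolling anything back or replicating anything in reverse.

```shell
# Browse the snapshots available on the destination (hidden directory,
# it won't show up in a normal `ls` of the dataset):
ls /mnt/backup/Data/.zfs/snapshot/

# Copy one file out of a specific snapshot. Nothing is restored or
# rolled back; the snapshot contents are simply readable as a directory.
cp /mnt/backup/Data/.zfs/snapshot/auto-weekly-2021-08-01/docs/report.txt /tmp/
```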
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
The answer here is to use a directory service of some sort... à la LDAP...
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
Yep, I've thought about it, but for such a basic home install I don't want to add more complexity. But maybe you're right and I should just bite the bullet...

I just re-organized the GIDs too, so now I'm completely in sync: all UIDs match GIDs, and both systems match. I'd love it if all the videos covering replication setup would mention that the user accounts really should match to make it a good experience.

I've added a passphrase to the encryption on the main system, and I've placed all the replications into a Replications dataset which is an SMB share. So now I can go to \\NAS02\Replication and, as long as it's unlocked, everything is there.

I do wish that the file history would show up for the replicated data; that's what I need to work on next. There must be a way.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
It looks like if I share the specific replicated dataset, it will show the snapshots. It's pretty cool that when you lock the dataset, the share disappears but stays ready to go in the SMB share list, so as soon as you unlock it, it shows up again.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
I guess I may as well ask here instead of making a new thread.

For my important data I have the following:

Snaps every 15 minutes, which expire after 8 hours
Snaps every 1 hour, which expire after 2 days
Snaps every 1 day, which expire after 2 weeks
Snaps every 1 week, which expire after 12 months

Currently I have the replication task follow what the source snaps do. I thought about extending the history to 3 years, but then will it keep those 15-minute snaps for 3 years as well? That will end up being a LOT of snapshots!

In an ideal world, I'd like the same retention except for the 1-week snapshots, which would be retained for 3 years. Is there any way to do this?
 
Joined
Oct 22, 2019
Messages
3,641
I thought about extending the history to 3 years, but then will it keep those 15 min snaps for 3 years also?

If you're using a different naming schema per snapshot task, you should be fine (e.g., auto-recent-, auto-hourly-, auto-daily-, auto-weekly-).

If you're using the same naming schema (auto-) for all snapshot tasks, zettarepl "tries" to be smart, and should properly destroy/skip the correct snapshots based on expiration dates, which timing-interval pattern they belong to, and what other tasks they belong to.

I'm very wary of this second method, as explained in a previous thread; I have 100% confirmed the destruction of long-term snapshots that shared the same name as another task, simply by temporarily "pausing" one of the tasks. Yikes! :eek: Because zettarepl is based on parsing names, you might as well play it safe and use unique (and identifiable) names per task.

Not only is it safer, and not only does it not consume extra space, it also makes management much easier.

For example, if you want to view a list of all your weekly snapshots? Type auto-weekly- into the filter search box under the Snapshots page. If you want to view a list of all your daily snapshots? Type auto-daily- into the filter search box under the Snapshots page. You can't do this if all your snapshot tasks for the same dataset use the same naming schema.
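The same per-task filtering works from the shell, for anyone who prefers the CLI (pool and dataset names here are hypothetical):

```shell
# List only the weekly snapshots of one dataset by matching its unique
# naming prefix: -H drops the header, -t snapshot restricts the type,
# -o name prints just the snapshot names, -r recurses into children.
zfs list -H -t snapshot -o name -r tank/Data | grep '@auto-weekly-'
```

With a shared naming schema there is no such prefix to grep for, which is the CLI version of the same problem.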

As it stands now, since TrueNAS does not (officially) support "smart" or automatic pruning, I've found this to be the sanest method of managing and preserving snapshots.

EDIT: Keep in mind that if you rename your snapshot task after it has already created snapshots under the old name, those previously-created snapshots will either be orphaned (forever stale, never expiring) or destroyed by one of your other snapshot tasks if the names overlap.
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
EDIT: Keep in mind that if you rename your snapshot task after it already created snapshots under the old name, those previously-created snapshots will either be orphaned (forever stale, never expire) or destroyed by one of your other snapshot tasks if their names overlap.

Uh oh, on the replicated side or locally too? I've renamed a lot!
 
Joined
Oct 22, 2019
Messages
3,641
Uh oh, on the replicated side or locally too? I've renamed a lot!
I believe both, since the replication task is responsible for honoring the source's (local) expirations during the replication/cleanup process, unless you override it in the settings.

It's not too late to rename the previously-created snapshots to the new naming schema, using a combination of zfs rename, xargs, and sed.
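A sketch of that rename pipeline (the dataset name and the old/new prefixes are hypothetical; the leading `echo` makes it a dry run, so remove it to actually rename):

```shell
# For each old-schema snapshot, sed's 'p;s/old/new/' prints the original
# name followed by the substituted name, and xargs -n2 feeds each pair
# to `zfs rename`. The grep matches old-schema names, which embed the
# date (e.g. auto-2021-...). The `echo` previews the commands.
zfs list -H -t snapshot -o name -r tank/Data \
  | grep '@auto-2' \
  | sed 'p;s/@auto-/@auto-weekly-/' \
  | xargs -n2 echo zfs rename
```

Once the preview looks right, drop the `echo` and the snapshots are renamed in place, so the existing tasks will recognize and expire them under the new schema.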
 

HarambeLives

Contributor
Joined
Jul 19, 2021
Messages
153
The local ones should still be in the snapshot list in the UI, right? I just went and manually deleted the few I had in there.

The replication side was easily solved by just destroying it and starting over.
 
Joined
Oct 22, 2019
Messages
3,641
In that case, it sounds like it's not too big of a problem. I was under the assumption you'd already made many snapshots. If starting over is acceptable, then it sounds like the cleanest solution. :smile:
 