11.3-U3.1 - Replication Task - Entire dataset keeps having to be resent

MikeyG · Patron · Joined Dec 8, 2017 · Messages 442
I replicate my datasets to a remote server running FreeNAS (11.1-U5). The snapshot tasks are set to NOT allow taking empty snapshots, and the lifetime is a week.

Some of my datasets aren't very active, and so even though the snapshot task runs every couple of hours, no new snapshots are taken because no new data has been written. For those datasets, when the snapshot expires and another is taken, that new snapshot ends up being the only snapshot for the dataset, and gets sent in its entirety to the remote server. So if the dataset is 500GB, and lifetime is a week, then once per week there is a 500GB transfer to the remote server.

Obviously it's less than ideal to have such an enormous amount of data sent when no new data has been written.

What I'm guessing is happening is that the snapshot task is generating a new snapshot, and removing the old one because it has expired, and then when the replication task runs, there are no previous snapshots to run a comparison against, and so it can't generate an incremental stream to replicate. Therefore it just starts the whole thing over again by replicating the snapshot it does have, which includes the entire dataset.
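If that guess is right, the decision would look something like this (a toy simulation to illustrate the hypothesis, not actual FreeNAS code; the function and snapshot names are made up):

```python
# Toy model of the hypothesized replication behavior (NOT FreeNAS source).
# An incremental stream needs a base snapshot that still exists locally;
# if the only older snapshot has already expired, the task falls back
# to a full send of the dataset.

def plan_send(local_snapshots, last_replicated):
    """Return ("incremental", base) if the last replicated snapshot is
    still present locally, otherwise ("full", None)."""
    if last_replicated in local_snapshots:
        return ("incremental", last_replicated)
    return ("full", None)

# Active dataset: the old snapshot is still around, incremental works.
print(plan_send(["auto-week1", "auto-week2"], "auto-week1"))
# → ('incremental', 'auto-week1')

# Quiet dataset: auto-week1 expired before auto-week2 was taken,
# so the entire dataset gets resent.
print(plan_send(["auto-week2"], "auto-week1"))
# → ('full', None)
```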

I assume that if I check "Allow taking empty snapshots" then that would fix this problem as there would be a constant stream of snapshots for the replication task to compare against. However, I wonder if there is another way to address this.

Any advice or ideas?

*EDIT* - I just realized that I can UN-check "Replicate from scratch if incremental is not possible" and that might prevent the large transfers. However, I think I had that checked originally on the replication task, and it was causing errors about the replication not being possible. Even without an error, I think it would just silently fail to replicate any data at all after the last snapshot expired.
 

sretalla · Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages 9,703
I would just allow empty snapshots so you don't go more than the snapshot lifetime before trying the next incremental.

I suspect the incremental is failing because the previous snapshot is missing... BTW, what happens to the previous snapshot on a dataset where something did change, once the week is up? (Maybe it's not there anymore either, so it can't be used for an incremental.)

Generally, an empty snapshot costs you almost nothing, so I see no reason to avoid them.
 

mpfusion · Contributor · Joined Jan 6, 2014 · Messages 198
It would be nice if you could file a bug report (and post the issue number here). That behaviour is not what the user expects. That problem should be solved in FreeNAS.
 

MikeyG · Patron · Joined Dec 8, 2017 · Messages 442
@sretalla I agree that the incremental is probably failing due to the previous snapshot missing. I'm not sure I understand the question about the previous snapshot when something changes after a week. After a week, the previous snapshot is removed because its retention is a week. I do have retention on the replication target set to 2 weeks, so there would be at least one snapshot on the target, but I think the incremental is calculated based on local snapshots.
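For what it's worth, the incremental base has to exist on *both* sides: the sender needs it locally to compute the delta, and the receiver needs it to apply the stream. A quick sketch of finding a usable common base (again just an illustration with made-up names, not FreeNAS code):

```python
# Hypothetical illustration: an incremental needs a snapshot common to
# source and destination. Snapshot lists below are ordered oldest-first.

def newest_common_base(local, remote):
    """Return the newest snapshot present on both sides, or None."""
    common = set(local) & set(remote)
    for snap in reversed(local):  # scan newest first
        if snap in common:
            return snap
    return None

# Healthy case: both sides still hold auto-week1, so it can be the base.
print(newest_common_base(["auto-week1", "auto-week2"], ["auto-week1"]))
# → auto-week1

# The case in this thread: the sender expired auto-week1 before
# auto-week2 was taken, so no common base remains -> full send needed,
# even though the 2-week retention kept auto-week1 on the target.
print(newest_common_base(["auto-week2"], ["auto-week1"]))
# → None
```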

Yes, the empty snapshots could be re-enabled. I just like the new feature as it lets me look at lists of snapshots and see when data has been written. It's nice when figuring out which one would have to be cloned or looked at for data restoration.

I'm also wondering if "Hold pending snapshots" is designed to help with this. Will test that before submitting a bug report.
 