Is cascading replication possible?

Kevo

Dabbler
Joined
Jan 1, 2019
Messages
37
I have two servers at an office which are replicating, Server A -> Server B. Now I have taken a 3rd server, put it at home, and want to replicate Server B -> Server C. My initial attempts seem to have failed. It looks like creating snapshots on Server B, so that a replication can be created to Server C, breaks the replication from A -> B.

Is it possible to replicate A -> B -> C, or am I wasting my time trying that? Should I instead set up two replications from A, so that A -> B and A -> C?

Are there any other options?

I would like my main server to spend as little time as possible on anything other than file serving, since we have the two backup servers, with the one at the office doing nothing other than acting as a backup.

TIA
 

Kevo

Dabbler
Joined
Jan 1, 2019
Messages
37
So after watching the iXsystems training video on storage, I discovered that zxfer should allow A -> B -> C replication. Unfortunately it's not quite that simple, as it has to be installed in a jail to persist on the filesystem.

Then while searching to see if anyone is successfully using it I found someone who was struggling to make it work from a jail.

So, has anyone actually used zxfer to do a cascading replication setup?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
If you create a cron job on Server B, you can perform the operation from the CLI. It would work better if you had started with a recursive snapshot in the first place.
This type of setup does require a bit of maintenance to make sure all the snapshots are replicated.
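A minimal sketch of what that cron job on Server B might look like, assuming a pool named `tank`, a dataset named `tank/data`, recursive snapshots already being taken, at least two snapshots on hand, and SSH access to Server C (all of those names are hypothetical):

```shell
#!/bin/sh
# Hypothetical cron script on Server B: incrementally replicate the
# newest recursive snapshot of tank/data to Server C over SSH.
# Assumes snapshots are created recursively (zfs snapshot -r) and that
# C already holds an earlier common snapshot.

DATASET="tank/data"
REMOTE="root@server-c"

# Second-newest and newest snapshots of the dataset (creation order).
PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" | tail -2 | head -1)
LATEST=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DATASET" | tail -1)

# Recursive incremental send; -F on the receive side rolls C back to
# the last common snapshot if anything diverged there.
zfs send -R -i "$PREV" "$LATEST" | ssh "$REMOTE" zfs receive -F "$DATASET"
```

The `-F` on the receive side is what lets C discard anything that diverged; without it, the incremental stream is refused if C's newest snapshot doesn't match the incremental source.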
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
I perform A -> B -> C replication using the GUI (classic) and I have not had any issues with it not working.

This is how I have it setup:
  • "A" has a Replication Task set up on multiple datasets into a parent dataset on "B" (so this parent dataset will contain multiple child datasets from "A"). These are set to delete stale snapshots on the remote system. This step is likely the source of your problem with the replication failing: you will have snapshots on "B" that "A" won't know about, so it needs to know it is okay to delete these "stale" snapshots.
  • "B" has a Periodic Snapshot Task on the parent dataset that is set to run every 4 weeks (the maximum) with a 1 hour expiration (the minimum). I don't care about this snapshot; it exists only to allow the "B" to "C" replication to function. I have it set to be recursive; I don't recall whether that was required, but mine is enabled. In my experience, it doesn't stop the replications from "A" from working.
  • "B" has a Replication Task for the parent dataset, set to be recursive, to "C". Like magic, this replication runs automatically every time a replication finishes transferring from "A". It is not dependent on "B"'s snapshot schedule.
I don't believe a "parent" dataset is required; I just did that to make management easier. It's been working fine like this for a couple of years.

Good luck, HTH.
 

felixnas123

Dabbler
Joined
Oct 22, 2019
Messages
25
Hello @PhilipS, is your setup still working?

I would like to run A and B synchronized the way you do, then replicate from B to C with a long retention, and then from C to D with an even longer retention.

Do you think that is possible with your settings? My servers run FreeNAS 11.2.

Greets
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
My setup has "A" deleting stale snapshots on the remote system. Since you want a longer retention period on your "B" and "C" systems, you won't want to use that option, since it will delete any remote snapshots that don't exist on "A".

The issue then is similar to what the original poster ran into. If a snapshot gets created on "B" (which is part of getting the "B" to "C" replication to operate using the GUI), then when the "A" replication runs, it will see that the latest snapshot on "B" doesn't match what is on "A"; in other words, it will see "B" as having a stale snapshot, causing replication to fail.
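To make that failure mode concrete, here is a small single-machine sketch (pool and dataset names invented), with two local datasets standing in for the two servers; the behavior over SSH is the same:

```shell
# Hypothetical demo of the "stale snapshot" problem, with tank/a
# standing in for "A" and tank/b for "B".

zfs snapshot tank/a@1
zfs send tank/a@1 | zfs receive tank/b     # initial seed of "B"

zfs snapshot tank/b@local                  # "B" grows its own snapshot
zfs snapshot tank/a@2

# Without -F the incremental receive is refused with something like
# "most recent snapshot of tank/b does not match incremental source":
zfs send -i tank/a@1 tank/a@2 | zfs receive tank/b

# With -F, the receive rolls tank/b back and discards @local, which is
# what the GUI's "delete stale snapshots" option amounts to:
zfs send -i tank/a@1 tank/a@2 | zfs receive -F tank/b
```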

It may be possible to set up the "B" snapshot schedule as I listed in my post, with a short expiration, and schedule your "A" replications to avoid that window, but I have never tested that.

Also, I still use the old GUI for replications and snapshots; I'm not sure if anything changed in the new GUI. Be aware as well that the snapshot/replication process is being rewritten, so that may change how this operates too.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Hello @PhilipS, is your setup still working?

I would like to run A and B synchronized the way you do, then replicate from B to C with a long retention, and then from C to D with an even longer retention.

Do you think that is possible with your settings? My servers run FreeNAS 11.2.

Greets
It would be easier to do it the other way around, where A has the longest snapshot retention and retention decreases toward C.
But I fail to understand what you would accomplish or gain by doing so.
 

felixnas123

Dabbler
Joined
Oct 22, 2019
Messages
25
Hello @PhillipS and @Apollo thanks for your reply. I understand what youa mean with the break of replication, because Server B has other Snapshots than Server A. I run in this issue!

@Apollo


I want the third server as a backup server.

FREENASA replicates to FREENASB; that is the backup for a hardware failure of FREENASA.

Then I want FREENASB -> FREENASC with a 3-month retention for backups, so I can roll back a week or a month to get at old data.

And later I want FREENASC -> FREENASD with a 6-month retention for older backups.

Is that possible, or is there a better idea?

I hope my idea comes across.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
@felixnas123, what you are trying to do isn't possible without some kind of script running on each server.
Snapshot lifetime is dictated at snapshot creation and is handled by ZFS. The lifetime of the snapshot is pre-programmed, and ZFS acts upon it.
When I first explored a similar concept, I went a different route and made a series of recursive snapshots with different lifetimes, such as 2 weeks, 1 month, 2 months, 1 year, and so on. I thought this was doable and reliable.
It turns out that if you go that way and decide not to keep any snapshot less than one year old, then between one year's snapshot and the next you will only be able to see the files that were held by last year's snapshot and this year's. Every file created or modified after last year's snapshot and deleted before this year's snapshot will not be included in any snapshot. This gives you a false sense of security.

Also, I still think you do not fully understand what a snapshot is made of. A snapshot is not a backup of your data; it is a list of pointers to the blocks which contain the data, so deleting old snapshots is no guarantee of recovering free space.

In your case, I would go with 6-month retention on all the backups, and I would not even bother daisy-chaining them, but run them in a star configuration. Mind you, that could be too much for the main server to handle.


Back to my idea of running a script on each server: there is a script you could use or adapt to do just that, I think.
https://github.com/theninjageek/ninja_snap

I believe you can set a time frame or a number of snapshots to retain, and the script should take care of keeping your snapshots based on your requirements.
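Whatever script you end up with, the retention side of it boils down to destroying snapshots older than a cutoff. A minimal sketch, assuming a dataset named `tank/replicas` and a 6-month window (both made up):

```shell
#!/bin/sh
# Hypothetical pruning sketch: destroy snapshots of one dataset that
# are older than a cutoff, so each server can keep its own retention.

DATASET="tank/replicas"
CUTOFF=$(( $(date +%s) - 180*24*3600 ))   # roughly 6 months ago

# -p prints the creation time as epoch seconds; -d 1 limits the
# listing to snapshots of this dataset only.
zfs list -H -p -t snapshot -o name,creation -s creation -d 1 "$DATASET" |
while read -r name creation; do
    [ "$creation" -lt "$CUTOFF" ] && zfs destroy "$name"
done
```

Run from cron on "C" (or "D"), this keeps the long-retention copies independent of whatever the sending side deletes.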
 

Fredda

Guru
Joined
Jul 9, 2019
Messages
608
Snapshot lifetime is dictated at snapshot creation and is handled by ZFS. The lifetime of the snapshot is pre-programmed, and ZFS acts upon it.
Are you really sure about that? My understanding was always that the deletion of expired snapshots is done by FreeNAS, not by the filesystem itself.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Are you really sure about that? My understanding was always that the deletion of expired snapshots is done by FreeNAS, not by the filesystem itself.
I seem to recall testing this theory by manually creating a snapshot and giving it a name in the same format as the automatic snapshots, and I believe it was never deleted after that.
Besides, snapshots are implemented within ZFS, so it is sensible to think ZFS deals with snapshot handling.
It is possible FreeNAS is acting on the deletion by running the "zfs destroy" command, but I doubt it.
That would also raise security and quality concerns, as it would require troubleshooting and maintaining a FreeNAS-specific snapshot-deletion feature.
One slip in the wrong direction and you could destroy your entire pool.
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Besides, snapshots are implemented within ZFS, so it is sensible to think ZFS deals with snapshot handling.

Snapshot expiration is a FreeNAS feature, not a ZFS one.
 