SOLVED Incremental backup question - backup server snapshot way too outdated

Status
Not open for further replies.

hell0un1verse

Dabbler
Joined
Jun 14, 2016
Messages
15
I had been doing incremental backups between my primary FreeNAS server and my backup FreeNAS server, but the backup server went down late last year due to a power supply failure. Now that the backup server is finally back up, I'd like to resume incremental backups using snapshots. However, the oldest snapshot on my primary server is newer than the latest snapshot on the backup, since only 3 months' worth of daily snapshots are kept on the primary:

Earliest snapshot on primary as of today: tank@auto-20170110.0015-3m
Latest snapshot on backup: tank-bu@auto-20161115.0015-3m

I am not sure if I can still do an incremental backup like this, since the primary lost snapshot tank@auto-20161115.0015-3m a long time ago:

zfs send -Rvi tank@auto-20161115.0015-3m tank@auto-20170410.0015-3m | ssh root@10.10.57.8 zfs recv -Fdvn tank-bu

Or can I do this? Will I lose deltas between 2016/11/15 and 2017/01/10?

zfs send -Rvi tank@auto-20170110.0015-3m tank@auto-20170410.0015-3m | ssh root@10.10.57.8 zfs recv -Fdvn tank-bu
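
For reference, this is roughly how I've been comparing which snapshots each side still has (same pool names and backup host as in the commands above):

# newest daily snapshots still on the primary
zfs list -t snapshot -o name -s creation -r tank | tail -5

# newest snapshots on the backup, run from the primary over ssh
ssh root@10.10.57.8 zfs list -t snapshot -o name -s creation -r tank-bu | tail -5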

A further question: is there a better way to do straight mirroring between the two ZFS systems for backup purposes?

Your help is much appreciated.
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Unfortunately, I'm pretty sure you will need to send the whole enchilada back over.
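
Roughly speaking (reusing your snapshot name and backup host from above), a full replication send without the -i would look something like this; the -F on the receiving end overwrites whatever is currently on the backup pool:

# full replication stream: the whole dataset tree plus all of its snapshots
zfs send -Rv tank@auto-20170410.0015-3m | ssh root@10.10.57.8 zfs recv -Fdv tank-bu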

Not sure why the backup was down for so long, but if you anticipate this happening again, you could set up a snapshot task with a longer expiration time (1 year?) that runs monthly or something.
 

hell0un1verse

Dabbler
Joined
Jun 14, 2016
Messages
15
Thanks for the help, Philip.

How do I back up the whole thing, though? I started by cloning the primary zpool (raidz2) onto two additional hard drives (another mirrored raidz pool) on the same machine; later on I moved the two HDs to a standalone server, which is now the backup server. Should I wipe out the backup and do a clone?

Yes, the backup server was down for a bit too long; only recently did I get the chance to replace the PSU. I guess keeping daily snapshots for only 3 months didn't help in this situation.
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
Sounds like you either have the machines physically close, or you can transport drives between machines. I would replicate (not clone) to the backup drives. (Are you using the built-in replication tasks?)

If the machines are local to each other, then set up a replication task on the primary, let it sync, and be done with it.

If the backup machine is remote over a slow link, then replicate to a local pool that you can detach and install in the backup system as its pool. (Or you can install the disk in the backup system and replicate from that transport disk onto the backup system's pool.) Then set up the replication task to target the remote server and it will pick up where it left off.
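
As a rough sketch, seeding a local transport pool by hand could look something like this ('transport' is just a placeholder pool name here):

# seed a local transport pool with a full replication stream
zfs send -Rv tank@auto-20170410.0015-3m | zfs recv -Fdv transport

# then export the pool and move the disks to the backup box
zpool export transport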

The replication tasks in FreeNAS check which snapshots are on the replication target and figure out what needs to be done to bring the target up to date - so you can move pools between servers, reconfigure the replication task, and it picks up like nothing happened. This is how I initialized my systems that are in different geographic areas.

HTH, if you need more detailed steps, let me know.
 

hell0un1verse

Dabbler
Joined
Jun 14, 2016
Messages
15
I tried replicating. The backup volume was about 60% full before, but the replication process quickly filled up the remaining space, so I had to kill it to stop the replication. I'm not sure if it's because of the number of snapshots on the source volume; there are 3 months of daily snapshots, which is close to 100 of them. Now I guess I need to clean up the destination, and maybe delete some old snapshots from the source, before doing the replication again?

What's the best way to clean up the volume? Can I just do an "rm -rf *" from the root of the volume?
 

PhilipS

Contributor
Joined
May 10, 2016
Messages
179
If you don't care about the data on the remote dataset, you could destroy the dataset and recreate it. Also, if you don't need to replicate every snapshot, you could manually replicate the latest snapshot before setting up the replication task. The replication task should only replicate snapshots that occur after the one you replicated manually.
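
Roughly, using tank/media as an example dataset (adjust the names to your layout, and double-check them before destroying anything):

# on the backup box, get rid of the stale copy first (destructive!)
ssh root@10.10.57.8 zfs destroy -r tank-bu/media

# full send of only the latest snapshot (no -R, so the ~100 intermediate snapshots are skipped)
zfs send -v tank/media@auto-20170410.0015-3m | ssh root@10.10.57.8 zfs recv -Fdv tank-bu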
 

hell0un1verse

Dabbler
Joined
Jun 14, 2016
Messages
15
Thanks Philip.

I did what you suggested, and now it's all replicated correctly. I wonder what the difference is between replication and the incremental backup using snapshots that I did before.

A couple of things I didn't know:

1. The periodic snapshot task needs to be "enabled" for periodic replication to trigger.
2. The destination for replication has to be the root of the volume: say you are replicating tank/media, the destination has to be specified as tank-backup, not tank-backup/media. I initially specified tank-backup/media and ended up with a duplicated folder: /mnt/tank-backup/media/media/... (rough example below)
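
From what I can tell, this matches what plain zfs recv -d does on the command line; roughly (the snapshot name is just my latest daily):

# destination set to the pool root: the source path is recreated underneath it
zfs send tank/media@auto-20170410.0015-3m | ssh root@10.10.57.8 zfs recv -Fdv tank-backup
# -> ends up as tank-backup/media, mounted at /mnt/tank-backup/media

# destination set to tank-backup/media instead nests it one level deeper
# -> tank-backup/media/media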

Thanks again for the help.
 
