Possible to resume large initial snapshot replication, rsync-style?


qqBazz

Dabbler
Joined
Nov 5, 2015
Messages
34
I've got a ~550GB snapshot that I'm attempting to replicate offsite for the first time, and I'm finding that when the job gets interrupted, all progress is lost and the job has to start over. In the last two weeks, I haven't managed to get the whole thing sent, and the multiple daily restarts are getting old. Once I get through this initial replication, the daily recurring delta on this dataset is a totally doable 40-50 MB... but I've got to get the initial snapshot completed first.

Sending a bunch of files via rsync has always been much more resilient in my experience, because interrupted jobs can basically just resume once the network returns. Is there any way I can do the initial `zfs send` so that it can resume rather than restart?

After doing a bit of research, it looks like this may be coming down the pike in version 10, so... maybe I just need to be patient, or else do the initial send myself from the command line using the ZFS resume options.

As a very last ditch effort, I guess I could buy a hard drive locally, do the initial replication, and then mail it to my destination like some kind of Neanderthal.
 

Anon93873

Dabbler
Joined
Dec 28, 2016
Messages
19
I have no answer for you, but I am dealing with a similar situation. I'm responding in the hope that the FreeNAS developers will see this and incorporate a resume feature, perhaps even enabling it in the current "production" FreeNAS 9.10 builds.

In my situation, I seeded the offsite FreeNAS destination over gigabit LAN before physically moving it to the offsite location. These two machines act as offsite backups for each other, for different datasets. Unfortunately, because of the speed of that site's connection, transfers are interrupted every 2-3 days, and the ZFS send/receive must then start over from scratch.

I would benefit greatly from the ability to resume an interrupted ZFS send.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Resumable send and recv is not yet a feature; it's being worked on. Ideally you split your data up into datasets so your replications are more manageable. Most people would do a local replication first and then ship the backup offsite, so your HDD idea is a good one.
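
Roughly, the seed-and-ship approach looks like this from the shell; the pool, dataset, and device names below are just placeholders:

Code:
# create a single-disk pool on the portable drive (device name is a placeholder)
zpool create seedpool /dev/da5

# snapshot the dataset and replicate it locally at disk speed
zfs snapshot tank/dataset@seed
zfs send -v tank/dataset@seed | zfs receive -v seedpool/dataset

# cleanly detach the drive before shipping it
zpool export seedpool


At the far end you import the pool, receive the seed into the destination pool, and from then on only incremental sends (zfs send -i @seed ...) have to cross the WAN.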
 

cuvy

Dabbler
Joined
Jun 12, 2015
Messages
40
Just do it manually.


You can use the `-s` option of zfs receive, which saves a resumable token on the receiving side if the transfer fails. The exact commands depend on whether you are using netcat (nc) or SSH.


On the recv machine (netcat only):

Code:
nc -l <port> | zfs receive -s -v tank/dataset


On the send machine:

Start with the usual send. With netcat:

Code:
zfs send -v snapshot | nc <host> <port>


Or, over SSH:

Code:
zfs send -v snapshot | ssh ... zfs receive -s -v tank/dataset


If the transfer fails, go to the recv machine and type:

Code:
zfs get receive_resume_token tank/dataset


Grab the receive_resume_token value and go back to the `send` machine:

Code:
zfs send -v -t <token> | nc <host> <port>


Or, over SSH:

Code:
zfs send -v -t <token> | ssh ... zfs receive -s -v tank/dataset
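
Two caveats, assuming the setup above: with netcat you need to start a fresh listener on the recv machine before resuming, and if you ever want to throw the partial state away instead of resuming it, zfs receive has an -A flag for that:

Code:
# restart the listener on the recv machine before resuming (netcat only)
nc -l <port> | zfs receive -s -v tank/dataset

# discard a saved partial receive you no longer want to resume
zfs receive -A tank/dataset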


Here you go :)
 

Anon93873

Dabbler
Joined
Dec 28, 2016
Messages
19
Is feature 24137 the entry that we're discussing here? I see multiple entries discussing resumable ZFS. It looks like it's been delayed to version 11.

https://bugs.freenas.org/issues/24137

I may start using rsync for all my offsite replication, and have the destination do its own snapshots. It's inelegant, but I need these transfers to be more tolerant of poor residential links.
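
For what it's worth, what I have in mind is something along these lines (the host and paths are just placeholders); rsync's --partial keeps partially transferred files so a re-run doesn't start them from zero:

Code:
rsync -aH --partial --delete /mnt/tank/dataset/ backup.example.com:/mnt/tank/dataset/


A periodic snapshot task on the destination box would then provide the point-in-time copies.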
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Anon93873 said:
Is feature 24137 the entry that we're discussing here? I see multiple entries discussing resumable ZFS. It looks like it's been delayed to version 11.

https://bugs.freenas.org/issues/24137

I may start using rsync for all my offsite replication, and have the destination do its own snapshots. It's inelegant, but I need these transfers to be more tolerant of poor residential links.

We're currently at version 11.0, with 11.1 imminent. It seems to have been delayed to 11.2.

Once you go rsync you can't go back to snapshots.

A custom script to use resumable transfers might work too.
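
Something like this, as a rough sketch over SSH (dataset names, host, and snapshot name are all placeholders, and the snapshot is assumed to exist already):

Code:
#!/bin/sh
# Rough sketch of a resume-aware replication job.
SRC_SNAP="tank/dataset@offsite-seed"   # snapshot to send (assumed to exist)
DST="tank/dataset"                     # dataset on the destination pool
REMOTE="backup.example.com"

# Ask the destination whether a partially received stream is waiting.
TOKEN=$(ssh "$REMOTE" zfs get -H -o value receive_resume_token "$DST" 2>/dev/null)

if [ -n "$TOKEN" ] && [ "$TOKEN" != "-" ]; then
    # Resume the interrupted stream from the saved token.
    zfs send -v -t "$TOKEN" | ssh "$REMOTE" zfs receive -s -v "$DST"
else
    # No partial state on the destination: start a normal send.
    zfs send -v "$SRC_SNAP" | ssh "$REMOTE" zfs receive -s -v "$DST"
fi


Run it from cron (or re-run it by hand) until it completes; after an interruption it picks up from where the receive left off instead of starting over.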
 

Anon93873

Dabbler
Joined
Dec 28, 2016
Messages
19
Stux said:
We're currently at version 11.0, with 11.1 imminent. It seems to have been delayed to 11.2.

Once you go rsync you can't go back to snapshots.

A custom script to use resumable transfers might work too.

While I'm proficient in manipulating Unix environments, I prefer to work within the framework provided by the FreeNAS GUI, since I trust that to survive upgrades. When you recommend a script, are you suggesting I implement it as a cron job?

Given how this feature has slid to later versions before, I'm not sure I have confidence in it appearing in version 11.2.
 

Anon93873

Dabbler
Joined
Dec 28, 2016
Messages
19
Anon93873 said:
While I'm proficient in manipulating Unix environments, I prefer to work within the framework provided by the FreeNAS GUI, since I trust that to survive upgrades. When you recommend a script, are you suggesting I implement it as a cron job?

Given how this feature has slid to later versions before, I'm not sure I have confidence in it appearing in version 11.2.

I implemented this, and I'm sure it worked, but I don't have confidence in FreeNAS' reporting of an rsync job's success or failure. I reverted the destination to a recent ZFS snapshot taken before the rsync runs and resumed ZFS replication.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Stux said:
We're currently at version 11.0, with 11.1 imminent. It seems to have been delayed to 11.2.

Once you go rsync you can't go back to snapshots.

A custom script to use resumable transfers might work too.
Are you saying that you prefer to use rsync?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
No. That was a warning to the previous poster.
 