No snapshots to send for replication task

tonci

Dabbler
Joined
Mar 14, 2013
Messages
18
The client has an old FreeNAS 9 machine that cannot be upgraded. I installed the new TrueNAS 12 on new hardware and want to transfer a dataset from the old one.
- I set a snapshot task up for the source dataset on FN9
- I set a PULL replication task up on TN12; the SSH connection works, the source dataset on FN9 is visible/browsable in the task definition, and the destination is a local TN12 dataset
- This task only has to run once (Run Now)
BUT ... the task finishes in a moment and no dataset has been transferred: No snapshots to send for replication task 'task_1' on dataset 'vol1/backup'
(vol1/backup is the source dataset)
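
For reference, the snapshots that actually exist on the source can be listed from the FN9 shell like this (dataset name taken from the error message above):

zfs list -t snapshot -r vol1/backup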

Can someone give me a clue about what I did wrong?

Thank you very much in advance

Best Regards

Tonci
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I sure wish you would have given some hardware details. You might want to check this out:

Forum Guidelines
https://www.ixsystems.com/community/threads/forum-guidelines.45124/

Since you are only doing it once, I would skip trying to set up all these tasks and just go with the command line method. Here is what I would do, if the new system will accommodate it:

I have done this many times, both at home and on the systems I manage for work. The last time I did it at work, I had four disk shelves (64 drives) connected by external SAS cables to the new server so I could do this...
If you can temporarily connect both pools to the same system, you can quickly transfer data from one to the other using this method.

First, take a snapshot of the pool you want to copy using this command: zfs snapshot -r <poolname>@<snap-name>
The exact command I used (the last time I did this at home) is this: zfs snapshot -r Emily@manual-29Jan2019

Once you have a snapshot, you can use zfs send and receive to send that snapshot to another pool using this command:

zfs send -R <poolname>@<snap-name> | zfs receive -F <new-pool>

The exact command I used to back up my existing pool to a new set of disks:

zfs send -R Emily@manual-29Jan2019 | zfs receive -F Irene
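
Once the receive finishes, a quick sanity check is to list what arrived on the new pool (pool name from my example above):

zfs list -r Irene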

With all the disks connected to the same system (the newer, faster one), the process goes as fast as mechanically possible. The last time I did this at work, I had 450 TB of data to copy off the old pool onto the new pool, and it only took about a month to finish...

If you can't connect both pools to one system, you can still use zfs send to send to another system over the network.
https://docs.oracle.com/cd/E19253-01/819-5461/gbinw/index.html
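
A minimal sketch of the over-the-network variant (placeholders as above; it assumes the sending system can SSH to the new one as root):

zfs send -R <poolname>@<snap-name> | ssh root@<new-system> zfs receive -F <new-pool>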

Bypassing the automation in the GUI should make this much simpler.
 

tonci

Dabbler
Joined
Mar 14, 2013
Messages
18

Chris, thank you very, very much ... I completely forgot about those "under the hood" activities :)
Just tried zfs send/recv and it works like a charm ... the incremental "-i" flag works too ...
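In case it helps anyone else, an incremental follow-up looks roughly like this (names are placeholders; the full send of <snap-1> must already have been received on the other side):

zfs snapshot -r <pool>@<snap-2>
zfs send -R -i <pool>@<snap-1> <pool>@<snap-2> | ssh root@<new-system> zfs receive -F <new-pool>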
Yes, you're completely right about attaching the destination shelves to the source server, and it is way faster than the network, but that was not an option ... even putting the "destination disks" in the source server and creating the destination pool there was not possible ...

My source server is an old IBM 3400 with a JBOD controller (8 x 4 TB RAID-Z2) and the destination server is a new Supermicro AMD EPYC box with 4 x 12 TB (RAID-Z2).
This send/recv transport saturates the 1G network at approximately 60% ... I expected more, but it will do the job too ...

After that I'll play a little more with the GUI options, because ZFS replication is really worth using.

Thanks once more for your hint!

BR

Tonci
 

edisondotme

Cadet
Joined
Aug 12, 2019
Messages
5
This is an old thread but I wanted to add a note here in case someone is searching about this in the future.

I simply needed to make sure that the naming schema for the snapshot on the source system agreed with the required naming schema for the replication task. I was making manual snapshots which are named something like manual-[datetimestamp] but the replication task was looking for something like auto-[datetimestamp].

[screenshot of the replication task's naming schema settings]


After I corrected this and re-ran the task, it started working normally.
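
If you already have manual snapshots that you want the task to pick up, renaming them to match the schema should also work (a sketch; the dataset name and timestamp are just examples, and I am assuming the common auto-%Y-%m-%d_%H-%M naming schema):

zfs rename tank/data@manual-2023-07-13_12-11 tank/data@auto-2023-07-13_12-11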
 

Marknas2023

Cadet
Joined
Oct 4, 2023
Messages
3
Thank you for posting; it helped me find a solution.
 