How do I set up replication for different sub-datasets?

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
I am going to be re-configuring/updating my remote FreeNAS box, and completely re-building the backups I store there with replication. I would like to double-check with somebody to make sure I'm doing this right, and not messing up my backup datasets in the event I have to restore them.

This is how my main server's pool is set up:
Code:
Drive_Pool/
........../cloud     #(dataset for my Nextcloud instance)
........../iocage    #(jails)
........../jails1    #(still have one jail running 11.1, working on moving it.)
........../temp      #(large media files that don't need to be backed up)

There is also a folder (not a dataset) under the main pool that stores all my user data: Drive_Pool/data.

I have a recursive snapshot task for each of the four child datasets, and a NON-recursive one for the main dataset, Drive_Pool, which is what backs up my /data folder. I have a replication task for each of them to my remote server, but I don't know if I have configured them correctly.

The pool on the remote server is Remote_Pool. I replicate the main dataset Drive_Pool to a child dataset I created called Remote_Pool/remote. All of the other snapshots (except for /temp) I replicated one level deeper into that remote dataset; for example, Drive_Pool/iocage syncs to Remote_Pool/remote/iocage. Is that the correct procedure? The mapping I intended is sketched below.
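Written out (assuming I remembered the destination names correctly), that mapping is roughly:

Code:
Drive_Pool          ->  Remote_Pool/remote
Drive_Pool/cloud    ->  Remote_Pool/remote/cloud
Drive_Pool/iocage   ->  Remote_Pool/remote/iocage
Drive_Pool/jails1   ->  Remote_Pool/remote/jails1
Drive_Pool/temp     ->  (not replicated)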

I ask because, looking at the remote file system, everything seems to be nested within itself. See the screenshot I attached. If I explore the file system over SSH, I see the same thing, e.g. /mnt/Remote_Pool/remote/iocage/iocage. This doesn't look right to me. When I rebuild this backup, how should I set up the child snapshot/replication tasks to avoid this?
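Here is roughly what the tree looks like when I list the datasets over SSH (the iocage branch is the one I actually copied down; the others follow the same pattern):

Code:
# zfs list -r -o name Remote_Pool/remote
NAME
Remote_Pool/remote
Remote_Pool/remote/iocage
Remote_Pool/remote/iocage/iocage    <- the dataset nested inside itself
...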
 

Attachments

  • Screenshot_2019-06-05-12-19-22.jpg

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
Did you find a solution for this?
I'm still looking into it. I'm nearly done rebuilding the backup, and most of my data has been replicated, at least according to the Storage --> Pools page in the remote system's GUI. At the moment I can see the folder structure, but none of the files appear in my terminal. Once I figure out why that is, and can confirm that I can restore files, I will update again.
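One thing I still have to rule out is that the received datasets just aren't mounted on the remote box. Something like this over SSH should show it (I'm using the Remote_Pool/remote name from my first post here; the rebuilt layout may end up named differently):

Code:
# 'mounted' and 'mountpoint' show whether and where each received dataset is mounted
zfs get -r mounted,mountpoint Remote_Pool/remote

# mount any ZFS filesystems that can be mounted but currently aren't
zfs mount -a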
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Will you please provide a better quality screenshot?

Sent from my phone
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
Did you find a solution for this?
Here is the setup I've come up with after some experimenting. This took a bit longer than I anticipated because one of my backup disks developed some bad sectors, and the only way I found to get them reallocated was to wipe the drive with zeros and start over.

Anyway, my setup is as follows:

  • On my main NAS, I have a disk pool (BigNas). It has a lot of stuff in the /data folder, which is not a sub-dataset, just a folder in the main pool.
  • I have 4 sub-datasets: warden jails, iocage jails, a dataset for my Nextcloud instance, and an 'ingest' dataset that I don't want to replicate (I may do a lot of renaming/sorting, and may not even keep a lot of the files that end up there). A text sketch of the layout follows the screenshot:
main NAS pool.JPG
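For anyone who can't read the screenshot, the layout is roughly this, in the same style as my first post (exact dataset names approximate):

Code:
BigNas/
......../cloud     #(Nextcloud)
......../iocage    #(iocage jails)
......../jails     #(warden jails)
......../ingest    #(sorting area, not replicated)
......../data      #(just a folder in the pool root, not a dataset)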


  • I set up individual snapshot tasks at different intervals: one for the main pool (non-recursive), and one for each of the 4 sub-datasets (recursive). The rough command-line equivalent is sketched after the screenshot:
snapshot tasks.JPG
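As far as I understand it, each run of those tasks boils down to something like this (dataset names as in the sketch above; the snapshot name is just an example of the auto-&lt;date&gt;-&lt;lifetime&gt; format FreeNAS uses):

Code:
# main pool: non-recursive, so only the pool root (including /data) is captured
zfs snapshot BigNas@auto-20190605.1200-2w

# sub-datasets: recursive, so any children they have are captured too
zfs snapshot -r BigNas/cloud@auto-20190605.1200-2w
zfs snapshot -r BigNas/iocage@auto-20190605.1200-2w
zfs snapshot -r BigNas/jails@auto-20190605.1200-2w
zfs snapshot -r BigNas/ingest@auto-20190605.1200-2w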


  • I now replicate all of those snapshots (except the 'ingest' dataset) to my other server. ALL tasks replicate to the same pool/dataset; in the case of my remote server, that is remote/sync (the dataset sync in the pool remote). The source-to-destination mapping is spelled out after the screenshots:
replication tasks.JPG

replication pool destination.JPG
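In other words, the destination field is the same on every task; only the source differs:

Code:
BigNas          ->  remote/sync
BigNas/cloud    ->  remote/sync
BigNas/iocage   ->  remote/sync
BigNas/jails    ->  remote/sync
BigNas/ingest       (no replication task)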


The data pool on my other NAS looks like this:
remote pool.JPG


At first I was worried by the 'BigNas' dataset listed under sync, but if you look, it only takes up 88 KB, whereas the actual sync dataset is over 6 TB.
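From the command line that looks roughly like this (the ~6 TB and 88 KB figures are the ones I actually noted; the rest is trimmed):

Code:
# zfs list -r -o name,used remote/sync
NAME                 USED
remote/sync          ~6T    <- the real replicated data
remote/sync/BigNas    88K   <- the nearly-empty dataset that worried me
...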

If I go into the list of snapshots on the remote server, all of the snapshots that were taken on BigNas now show up under remote/sync:
snapshot comparison.JPG


If I view a snapshot of the main dataset (remote/sync) by cloning it, the clone is mounted at /mnt/remote/[sync-auto-snapshot-name]/ and contains my ~6 TB of data. It also has mountpoints for my sub-datasets, but they are empty (which is expected, as they were not included in that snapshot).

Cloning sub-dataset snapshots mounts them in /mnt/remote/sync/[dataset-auto-snapshot-name]/, which is also what I expected.
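For anyone who would rather do that over SSH than in the GUI, the clone step is basically this (snapshot and clone names here are only examples):

Code:
# clone a replicated snapshot so its contents can be browsed
zfs clone remote/sync@auto-20190605.1200-2w remote/restore_test

# by default the clone mounts under the pool mountpoint, e.g.:
ls /mnt/remote/restore_test

# remove the clone when done checking
zfs destroy remote/restore_test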

Overall, I am much happier with the layout of my current backup compared to the mess in the screenshot of my OP. I suspect that mess was due to me experimenting with replication settings on my main NAS, as well as not consistently cleaning out the old datasets on the remote NAS.

I am also keeping the number of snapshots down with the 'rollup' script from zfs-rollup; the command in cron is python2 /path/to/rollup.py -r -i 6h:4,1d:7,1w:50 BigNas. The -i flag means: keep the last four 6-hour snapshots (the smallest snapshot interval I have), 1 snapshot from each of the last 7 days, and one snapshot from each of the last 50 weeks (though the snapshot tasks themselves clean up before 10 weeks). Actually, looking at the screenshots made me realize I was not using the script correctly (I had way too many snapshots), so the command above is the one I replaced the broken one with. Now I only have 240 snapshots instead of 380.

I'm also using the 'clearempty' script from the same GitHub repo. The command for that is simpler: python2 /path/to/clearempty.py -r BigNas. It just deletes empty snapshots; its only purpose is to make my Snapshots list cleaner and more compact. Both commands as they sit in cron are shown below.
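For reference, the two entries look roughly like this (the schedules shown are just an example of how I space them; /path/to/ is a placeholder as above):

Code:
# once a day: thin BigNas snapshots down to 6h:4, 1d:7, 1w:50, recursively
0 5 * * * python2 /path/to/rollup.py -r -i 6h:4,1d:7,1w:50 BigNas

# once a day: delete empty snapshots on BigNas, recursively
30 5 * * * python2 /path/to/clearempty.py -r BigNas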
 