How to rsync a folder between my FreeNAS Corral and my Ubuntu box?

Hi awesome FreeNAS forum guys, I love my FreeNAS... I'm new to all this storage stuff. I want to sync data between my FreeNAS and Ubuntu machines, and any documentation would be really helpful.

Hardware: custom build with an AMD processor, Asus motherboard, and 32 GB RAM.
Version: FreeNAS Corral.

I started rsyncd and tried this:

Code:
rsync -rzhv . xxx@192.168.1.xxxx

which reported:

Code:
sent 144.11M bytes  received 2.61K bytes  13.73M bytes/sec
total size is 144.57M  speedup is 1.00

but I don't know where the data was stored or how to access it again.

Any detailed documentation would be really helpful.


Thanks,
Karthik.
 

melloa

Version: FreeNAS Corral.

Corral has been discontinued. I'd switch to a stable release as soon as possible.

rsync should work between FreeNAS and Linux without any problem. The only issue I had was with attributes: Corral has "optimized" them, so rsync couldn't copy with -a, for instance.

This is what I do on my end:

Code:
#!/bin/sh
# Pull the cifs dataset from the Corral box into the local /mnt/raid.
# -rltzuv: recursive, keep symlinks and times, compress, update-only, verbose.
cd /mnt/raid
rsync -rltzuv root@10.10.10.200:/mnt/raid/cifs ./
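
Going the other direction (pushing from the Linux box to the NAS) just swaps source and destination. A minimal sketch with placeholder paths and IP:

Code:
#!/bin/sh
# Push a local folder to the NAS. Note the colon before the remote
# path -- without it, rsync treats the destination as a local file name.
rsync -rltzuv /home/user1/data root@10.10.10.200:/mnt/raid/backup/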
 

scwst

I have a setup where I back up the users' home partitions on the Ubuntu machine to my FreeNAS, which takes care of snapshots (for the record, I also back up to external hard drives). The short version of what I did is:
  • Create user accounts on the FreeNAS box with the same UID/GID numbers as on the Ubuntu machine (I need this for NFS stuff anyway).
  • On the Ubuntu machine, create SSH key pairs for each user and copy & paste the public keys to their FreeNAS counterparts.
  • On the FreeNAS, create "home" datasets for each user - say, tank/h_user1
  • Create sub-datasets for each user for backups, as tank/h_user1/bku_ubuntu. Remember to set the ownership and permissions of all these datasets to the individual users (a sketch of the shell equivalent follows below).
(Why the extra sub-dataset? Well, in my first version, I just copied stuff from Ubuntu's /home/user1 to tank/h_user1 - including the ~/.ssh folder, of course. That led to all sorts of strange and curious errors, because I was overwriting the ~/.ssh contents on the FreeNAS side. Oops. Also, some users keep their Google Takeout stuff in tank/h_user1/takeout, which is set to compression=off etc.)
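For the dataset layout above, the shell equivalent looks roughly like this (example names; the same can be done from the FreeNAS GUI):
Code:
#!/bin/sh
# Per-user home dataset plus a backup sub-dataset (example names).
zfs create tank/h_user1
zfs create tank/h_user1/bku_ubuntu
# Hand ownership to the user so the rsync jobs don't need root.
chown -R user1:user1 /mnt/tank/h_user1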
  • Since the Ubuntu machine has /home on ZFS as well, I originally tried the zfs send/receive replication thingie. Worked fine, very easy to set up, but it really transfers everything. I don't want to back up the stuff in ~/.cache, though, and all those gigabytes of games in the users' Steam folders at ~/.steam I can just download from the cloud if I ever have to.
  • So instead, local cron jobs on the Ubuntu machine rsync everything over to the FreeNAS once a day. The actual work is done by a shell script in each user's home directory that cron calls (remember, cron uses a different shell, so a bash rsync command won't work the way it did from the command line; see the sketch after this list). The actual command is something like this:
Code:
# {.cache,.steam} is bash brace expansion -- a plain sh cron script needs separate --exclude flags (see the sketch below).
rsync -az --delete --exclude={.cache,.steam} /home/user1/ user1@freenas:/mnt/tank/h_user1/bku_ubuntu


(For those not familiar with rsync: -a is "archive", which preserves permissions, ownership, and so on; -z compresses the transfer; --delete removes files on the FreeNAS that have been deleted on the Ubuntu machine; --exclude lists folders or files we don't want to copy. Use -v "verbose" and -n "dry run" for testing.)
  • On the FreeNAS, snapshots are made of the sub-datasets once a day, and kept for a few months before being deleted.
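To make the cron part concrete, here's a minimal sketch of the wrapper script and crontab entry (user name, paths, and the 03:00 schedule are examples, not necessarily what I use):
Code:
#!/bin/sh
# /home/user1/backup_home.sh -- plain sh, safe to call from cron.
# Separate --exclude flags instead of bash brace expansion.
rsync -az --delete --exclude='.cache' --exclude='.steam' \
    /home/user1/ user1@freenas:/mnt/tank/h_user1/bku_ubuntu

and the matching crontab entry (edit with crontab -e as the user):
Code:
0 3 * * * /home/user1/backup_home.sh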
One reason I chose this setup is that nobody needs root privileges, and I can choose how often each user gets snapshots, etc.
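If you'd rather script the snapshots than use the GUI's periodic snapshot tasks, the daily snapshot itself is a one-liner (hypothetical dataset name; pruning old snapshots would be handled separately):
Code:
#!/bin/sh
# Take a dated snapshot of one user's backup dataset (example name).
zfs snapshot tank/h_user1/bku_ubuntu@daily-$(date +%Y-%m-%d)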

Note that this is from Ubuntu --> FreeNAS only. If you want to sync them in the other direction, you'll have to try something different.
 

melloa

@scwst

Thank you for this post. The use of -rltzuv on Corral is due to weird permissions that won't copy with -a. Just in case someone is still running it and gets into trouble :D
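
For reference, here's how the two flag sets compare (per the rsync man page):
Code:
# -a is shorthand for -rlptgoD: recursive, symlinks, permissions,
#   times, group, owner, and devices/specials.
# -rltzuv keeps -r, -l, and -t, drops the permission/owner/group bits
#   that trip up Corral, and adds compression (-z),
#   skip-files-newer-on-receiver (-u), and verbose output (-v).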
 