I use the program Back In Time.
It has a GUI, it is very fast, it uses rsync, and it saves folders and files using hardlinks.
So in the backup folder you get a subfolder for each backup you take over time, and since these folders contain hardlinks
you can browse them like any other folder and from there restore or copy back entire folders or single files.
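A quick way to see the hardlinks at work (a rough sketch; the paths here are made up, adjust them to your own backup location):

    # pick the same unchanged file in two different backup folders
    ls -li /mnt/backups/snapshot-old/home/me/notes.txt
    ls -li /mnt/backups/snapshot-new/home/me/notes.txt
    # if the file didn't change between backups, both lines show the
    # same inode number: one copy on disk, many directory entries

This is also why the backups take so little space: each new backup only stores the files that actually changed.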
You can also use it to make local backups, for example to a USB drive, so you can easily check how it works before using it to make backups to TrueNAS.
Caution: I am not a computer scientist or a professional. I'm just a noob home user and I want to share what works for me with others, because this subject is asked about frequently.
Check that it works correctly before using it in production.
Any corrections to my process are welcome.
How I do it:
1. In TrueNAS
- select a user that will SSH to the NAS
- give this user a home directory (in a dataset with UNIX permissions) under Users -> Edit -> Home Directory
- enable SSH in Services and tick Start Automatically
- choose a dataset where the backups will be written. For me this worked with a dataset with UNIX permissions. Make the user that will make the backups the owner of the dataset (a shell sketch for this is right below). The dataset can contain other folders, and you can choose or make the one that Back In Time will use.
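If you prefer the shell for the ownership step, something like this (run as root on the NAS; poolname, dataset and backupuser are placeholders for your own names) should be equivalent to setting the owner in the GUI:

    # recursively hand the dataset to the user that writes the backups
    chown -R backupuser:backupuser /mnt/poolname/dataset
    # make sure that user can actually write there
    chmod -R u+rwX /mnt/poolname/dataset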
2. In the PC
- Install Back In Time
- open a terminal and type ssh user@ip, where user is the TrueNAS user you selected before and ip is the IP of your NAS
- follow the instructions (answer yes and give the password of the TrueNAS user)
- run Back In Time -> Settings -> Mode: SSH
- in the SSH settings set Host to the TrueNAS IP, User to the TrueNAS user, and Path to the folder the backups will be written to, in the form /mnt/poolname/dataset/folder, and leave everything else as is
- hit the green + button to create an SSH key (it may be unavailable if an existing public/private SSH key is found; see the sketch after this list)
- press Yes and type the TrueNAS user's password when asked
- choose folders/files to be backed up
- choose retention rules
- hit OK
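If I understood right, the green + button does roughly what you would do by hand with OpenSSH. So if the button is greyed out, or you want to set the key up yourself, the manual equivalent looks something like this (user and ip are again your TrueNAS user and the NAS IP):

    # create a key pair on the PC if you don't have one yet
    ssh-keygen
    # copy the public key to the NAS so Back In Time can log in
    # without a password (you type the password one last time here)
    ssh-copy-id user@ip
    # test: this should now log you in without asking for a password
    ssh user@ip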
I hope I didn't forget anything crucial.
Look also here for a detailed description:
https://backintime.readthedocs.io/en/latest/index.html
Backups are written very fast, they are browsable like any other folder, and smart retention rules can be set. You can have multiple profiles, for example one for Documents, another for Home minus Documents, Pictures and Downloads, and so on.
I will say it one more time: the program is fast, far faster than any other backup program I have used, whether to a NAS or an HDD, on Linux or Windows. As far as I understand, the reason is the rsync-plus-hardlinks scheme: only files that changed since the last backup get transferred, and everything else just becomes a new hardlink to the previous backup. After the first backups I went and browsed the files just to believe the backup had really finished.
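Reduced to plain rsync, the trick looks roughly like this (my own sketch of the mechanism, not the exact command Back In Time runs; paths are made up):

    # new backup: copy only what changed under /home/me; anything
    # unchanged becomes a hardlink into the previous backup folder
    rsync -a --delete \
          --link-dest=/mnt/backups/snapshot-old \
          /home/me/ /mnt/backups/snapshot-new/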
In the few months I have used this on a test machine I set up, I didn't find any problems.
My one concern was possible issues between hardlinks and ZFS, but I haven't hit any and I'm not aware of any.
I even made a snapshot replication of the backup dataset to another pool on a USB HDD and could read my data, with all versions, from there just fine (a plain-ZFS sketch of the idea is below).
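I did the replication through the TrueNAS GUI, but the underlying idea in plain ZFS commands is something like this (pool, usbpool and the snapshot names are placeholders):

    # snapshot the backup dataset and send it to the USB pool
    zfs snapshot pool/backups@copy1
    zfs send pool/backups@copy1 | zfs receive usbpool/backups
    # later updates only send the difference between snapshots
    zfs snapshot pool/backups@copy2
    zfs send -i @copy1 pool/backups@copy2 | zfs receive -F usbpool/backups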
I'm not sure yet if this versioning is necessary given the snapshot capabilities of ZFS. It may be useful for different retention requirements inside the same dataset, or as an additional layer of safety. I'm thinking as a noob here: I like to achieve the same thing in different ways, since I don't master these techniques well.