Cold storage backup/disk rotation

qwertymodo

Contributor
Joined
Apr 7, 2014
Messages
144
I currently have a FreeNAS box used for storing backups, and would like to add additional redundancy with cold storage backups. I have a few open hot-swap bays in the machine, so I was envisioning the process going something like this:

* Have 2-3 disks, each large enough to hold a full copy of all data I wish to back up (not necessarily the entire contents of the main disk array, but as large as or larger than the allocated size of the relevant datasets), in hot-swap caddies
* Insert one of the disks
* Automatically mount the disk
* Automatically perform a (preferably incremental) backup of the data
* Automatically unmount the disk when done
* Optionally send an email indicating that the backup is complete
* Rotate the disks daily/weekly/whenever, and repeat the process
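A rough sketch of that flow as a shell script (the device node, mount point, source path, and mail address are all hypothetical placeholders):

```shell
#!/bin/sh
# Sketch of the rotation flow described above: mount the inserted disk,
# run an incremental rsync, unmount, and send a completion email.
DISK=/dev/da5p1            # hypothetical device node of the caddy disk
MOUNTPOINT=/mnt/coldbackup
SOURCE=/mnt/tank/important # data to back up

# Mount the disk if it isn't mounted yet
if ! mount | grep -q " on ${MOUNTPOINT} "; then
    mount "${DISK}" "${MOUNTPOINT}" || exit 1
fi

# Incremental backup: only changed files are transferred
rsync -a --delete "${SOURCE}/" "${MOUNTPOINT}/"
STATUS=$?

# Unmount so the disk can be pulled safely
umount "${MOUNTPOINT}"

# Optional notification
echo "Cold backup finished with status ${STATUS}" | \
    mail -s "Cold storage backup done" admin@example.com
```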

So, now I need to work out the actual details of how to go about this. I don't think there is any functionality already built into FreeNAS to do this, so I'm guessing it would probably require either installing additional software, or possibly even custom scripting. I had a few thoughts that I'd like to get feedback on:

Filesystem: ZFS seems like it could have some beneficial features in this use case, such as built-in compression and snapshots which could be used for the incremental backups. Having the filesystem-level snapshots would mean I could potentially simplify the backup process by just using something like rsync. The downside, of course, is the lack of redundancy of a single-drive pool which could lead to catastrophic data loss from metadata corruption. If ZFS would NOT be recommended in a situation like this, then I'm guessing that ext4 would be the next best choice? Sacrificing the FS snapshots would mean I would need another option for incremental backups.

Software: As I already touched on, the actual backup software could potentially be as simple as using rsync, but are there better options out there? If I do use ZFS, is there anything built into ZFS for replicating a dataset to another pool that might work in a use case like this? Or is there a better option already supported by FreeNAS that I'm overlooking? Or 3rd-party software suggestions?

Automatic un/mounting: Is there anything in FreeNAS that might either help with or hinder the ability to automate the mounting and unmounting process? If nothing else, I can just schedule a cron job to scan for unmounted drives and mount them, and have the backup process unmount the drive when it's done (as well as somehow flagging the cron job so it knows not to re-mount the same drive).

I'm sure I'm not the first person to go down this road, so I'm hoping somebody can provide the benefit of experience.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
When I built up my FreeNAS, I planned on an external eSATA drive for backups. Eventually I documented the process here:

How to: Backup to local disks

Next, all metadata in ZFS is redundant, even on a single disk. And critical metadata has even a higher level of redundancy. Using "copies=2" causes both data and metadata to increase in redundancy, (and of course use more storage). I'm not suggesting "copies=2", just pointing out a design of ZFS in regards to metadata redundancy.
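For reference, the `copies` property Arwen mentions is set per dataset; a sketch with a placeholder pool/dataset name:

```shell
# Keep two copies of data (and extra metadata copies) on the backup disk.
# Note: only blocks written after setting the property are duplicated.
zfs set copies=2 backuppool/data

# Confirm the setting
zfs get copies backuppool/data
```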

If I remember correctly, FreeNAS does not support Ext4 file system, only Ext2/3.

For my backups, (documented in the How to link), I used ZFS, with a combination of compression, Rsync & snapshots. Thus, my backup disk has multiple backups on it. When the disk starts to get too full, I'll "zfs destroy" the oldest snapshot.
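That pruning step can be scripted; a sketch, using a placeholder dataset name:

```shell
# List snapshots of the backup dataset sorted by creation time (oldest
# first), take the first one, and destroy it to free space.
OLDEST=$(zfs list -H -t snapshot -o name -s creation -r backuppool/data | head -n 1)
[ -n "$OLDEST" ] && zfs destroy "$OLDEST"
```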

My procedure is purely manual. I could have automated it, but I only do backups once a month. So no real need. (Well, the backup script, which also creates a log file, is, well, scripted... and attached to the How to.)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I've set up a single-disk, rsync-based backup scheme, using a pair of 6TB disks. The two disks are identically configured and swapped out every Saturday, with one disk on-line and the other stored in my fireproof gunsafe.

All of my data adds up to less than 3TB, so a single 6TB disk is plenty of space for my situation. With multiple drive bays available you can use more disks, copying selected datasets to separate destinations as needed.

The hardest part was setting up the two disks. Each comprises a ZFS pool (named 'dozer') with datasets matching the layout on my system volume (named 'tank'). I run a nightly cron job shell script that executes rsync for each dataset, like so: rsync ${rsoptions} /mnt/tank/$dataset/ /mnt/dozer/$dataset

Note that your rsync options and volume datasets will differ from mine. The options I use are -rltgoDhv --delete-during --inplace --progress --log-file=${logfile}
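Put together, a nightly job along these lines might look roughly like this (the dataset names and log path are placeholders):

```shell
#!/bin/sh
# Sketch of a nightly cron job that rsyncs each dataset from the main
# pool 'tank' to the single-disk backup pool 'dozer'.
# Dataset names and log path are placeholders -- adjust to your layout.
logfile=/var/log/backup-dozer.log
rsoptions="-rltgoDhv --delete-during --inplace --progress --log-file=${logfile}"

for dataset in documents photos music; do
    rsync ${rsoptions} "/mnt/tank/${dataset}/" "/mnt/dozer/${dataset}"
done
```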

Early Saturday morning another cron job scrubs the backup pool thus: zpool scrub dozer

My Saturday routine is to:
  • Use Detach Volume to unmount the current dozer volume, remove the disk, and store it in my safe.
  • Fetch the second drive from the safe and mount it using Import Volume.
  • Modify my two SMART tasks to include the newly mounted disk in my short and long SMART tests. These run nightly and weekly, respectively.
If I miss these chores on a Saturday, it's no big deal -- the dozer volume will continue to be updated each night.

I admit this is mind-numbingly simple, and includes manual procedures. But it works, and I get a warm and fuzzy feeling knowing that a recent backup of my data is stored in a fireproof location.

Hope this helps.
 

mahaha

Cadet
Joined
Feb 3, 2020
Messages
8
Hi Spearfoot, hi all,

sorry to revive this old thread: my desired setup is quite comparable, but for me there is information missing which I was hoping you could complement.

My FreeNAS has four HDDs mirrored with ZFS. My former NAS had two disks in RAID1, and when I lost one of the disks (it happened three times in 7 years), I felt quite uncomfortable having all my data on only one disk (besides my regular backup, of course). Many photos and so on, you know. So I planned my new setup to always have three disks online and the fourth disk stored separately.

Your approach, Spearfoot, sounds like an approach for me as well; however, I have some questions:

1) Do you always have a degraded array in your NAS? When I remove one of the disks, the array stays degraded until I re-attach the disk.
2) Will the re-inserted disk automatically resync?
3) Does this setup work in some way with encrypted pools?

Looking forward to your answer, and again: I hope that the "revival" of this old thread is okay.

Martin
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
My NAS array is never degraded... I didn't explain my setup very clearly, but there are two separate pools involved.

The data on my main pool -- named 'tank' -- amounts to a little over 3TB in size. I've created a second pool -- named 'dozer' -- which is made up of a single 6TB disk. My data easily fits on a disk of this size; you may need to use a larger disk, depending on your requirements. I use replication to back up my main pool ('tank') to this single-disk pool ('dozer').

I set up an additional disk with exactly the same pool layout as the first 'dozer' disk. This lets me rotate these two disks between my safe and my NAS system -- using the manual steps I described above -- so that I always have an online backup of my primary pool along with an offline backup in my safe.
 

mahaha

Cadet
Joined
Feb 3, 2020
Messages
8
Ah, okay, I got your setup.

If I should start a new thread, please let me know, because imo my question fits perfectly here:

Is there a way to achieve a setup where one disk rotates out of the data pool (3 active disks and 1 in cold storage) without degrading the pool?

Bonus question: is there a way to achieve the same with encrypted disks? This still seems to be a problem.

Again: thank you for your kind answers!

Regards
Martin
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
What do you hope to gain by rotating pool member disks?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
You misunderstand the process of cold storage with disk rotation.
In your case (4 HDDs as a mirror), you never touch your pool; you never remove or replace any of the disks to implement cold storage disk rotation. The term "disk rotation" is actually a misnomer in this particular context.
What you want to do is pool/volume rotation on a backup system.

Your main 4-HDD mirrored pool is always ON and always left untouched.
What you need is to create a new pool/volume out of a single disk, or better yet a few disks which provide redundancy.
But because you are talking about cold storage with rotation, this implies you will have more than one backup pool/volume.

To make this process more reliable, have your main FreeNAS (the one with the 4-HDD mirror) replicate over to another FreeNAS box used to handle the pool/volume rotation.

The idea is that the 4-HDD mirrored FreeNAS never gets turned off but continuously replicates to the backup FreeNAS box.

When you create your first backup, you will do it on a new pool/volume on the backup FreeNAS box. The pool/volume can be encrypted if you want, and you will start replication to the pool/volume. Once replication has completed, you can power off your backup FreeNAS and place the newly created pool/volume (all the disks which are part of the pool) in cold storage, such as a safety deposit box at your bank or at a relative's place.

Then you create another pool/volume with another set of drives and repeat the process as above.
Ideally, you want one of the cold storage volumes to always be receiving replication, to prevent losing weeks or months of data in the event of a failure of the main FreeNAS box.

The idea behind cold storage is to help prevent losing all of your data in the event the active backup or main FreeNAS fails.

Does it make sense to you?
 

mahaha

Cadet
Joined
Feb 3, 2020
Messages
8
Now it's getting interesting :cool:

New idea from you guys, interesting! Let me first try to explain my idea and then switch over to what I understood from your proposed solution.

What do you hope to gain by rotating pool member disks?

The disks are all mirrored. My thoughts:
2-disk setup => only one disk left, which can also fail
3-disk setup => two disks left; the chance that two more will fail is not very high, imo.

So a 3 disk setup it is.

Check.

Now the idea of cold storage (maybe "cold storage" is wrong in this context). Let's number my disks 1, 2, 3, 4. I want to have one additional disk, no. 4. My case has 4 slots with hot-swap caddies; 1, 2, 3 are running in a ZFS mirrored setup.

Week 1: I plug in no. 4, remove no. 1, and take no. 1 somewhere else. All data from my NAS is stored there and physically separated (fire, theft, etc.)
Week 2: I plug in no. 1, remove no. 2, and take no. 2 somewhere else.
Week 3: I plug in no. 2, remove no. 3, and take no. 3 somewhere else.
Week 4: I plug in no. 3, remove no. 4, and take no. 4 somewhere else.

And so on.

In that way, in a disaster scenario I would always have one disk separated and all of my data available.

What you want to do is Pool/Volume rotation on a backup system.
As I don't have a spare system, that would not be the desired setup.

But now I've got an interesting idea from you, which brings other questions :)

- split my existing pool (let's name it "main") into 1, 2, 3 and another pool (let's name it "backup") with one disk
- script-sync from "main" to "backup"

The biggest advantage is that I can keep "main" encrypted, which is quite cool.

However, I see two points:
1) "backup" only consists of one disk. How would I remove it from the system without destroying the pool? TBH this is the first time I've worked with ZFS, so some questions might be ridiculous. Let me learn, please :)
2) I wouldn't really rotate the disks anymore, so the removed disk would always be the same one. That takes away some of the "random factor" when it comes to hardware failure, because the weakest point would be the disk from the "backup" pool, as it has a prominent role.

Let me add: this setup here is additional to my regular incremental backup I do fortnightly.

I really appreciate that you take so much of your time, thank you!

Kind regards,
Martin
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
Hi Martin,

Substituting a disk out of a healthy pool isn't a good idea in the first place, as it puts too much stress on your disks.
While in theory removing a disk from a mirror and using the removed disk as-is to recover data is doable, you lack redundancy. As the disk will be recognized as a member of a mirrored pool, it will default to a degraded state. To know that the disk and the data it contains are fine, you will have to run a scrub.

Then, when you add the disk back to the pool, the pool will start resilvering (excess stress), and hopefully everything goes fine. However, the disk will no longer be the same as it was the moment before it was reinserted.

Overall, this approach is going to be prone to issues that may not be easy to resolve.

In your case, it would be more acceptable to use a local replication scheme, and you can go both ways:

1) Powering the system off:

When you are ready to swap your cold storage volume, power your system off, then add or remove your cold storage volume. (One disk is fine as long as it is a single-drive volume; otherwise, if you have the room, make it a mirror and add or remove the disks making up the mirror, so that the volume is always healthy.) You don't need to detach the volume. You can have more than one volume missing; FreeNAS will warn you about it, but that's about it. When you add the missing volume back (with the system off first, then power it on), the pool will be recognized and brought online once the system is up and running. No need to mess with the attach/detach process.
The issue is that you need to power your system off and on.

2) Attach/Detach

If your system supports hot-swapping of disks, then you can detach and/or attach volumes while the system is running. There are issues with this approach, but it is doable without taking your system down.

When dealing with encrypted drives, you need to reload the key and passphrase every time you import the pool; then, when you detach the volume, take care not to delete shares or replication tasks pointing to the backup volume.

Having to do any of the above on your main system increases the chance of breaking something on the main pool. A misdirected replication and whatnot.

As I have mentioned earlier, the safest and least stressful way to handle backups would be to have a dedicated backup FreeNAS system. You don't need all the bells and whistles you would want on your main system.
All you would need is an inexpensive motherboard and CPU, a LAN connection, and 8GB of RAM (ECC if possible).


PS: I would only use replication for such a backup strategy. The reason is that you could have a 10TB drive; when replication is complete, you know the state of your backup. With rsync and the like, the source and destination need to be evaluated at the file level.
Updating a backup made with replication could be a 10-minute job, as opposed to rsync, which might take hours if it succeeds at all.
 

mahaha

Cadet
Joined
Feb 3, 2020
Messages
8
Hi Apollo,

tbh I needed to go over your quite interesting and insightful post several times. If I still got something wrong, I hope you have the patience to help me out :)

Then when you add the disk back to the pool, the pool will start resilvering (excess stress) and hopefully everything goes fine.
That made me think a lot. It makes a lot of sense, because the system must make sure that every single file is in sync, and thus the disk has heavy I/O to handle.

The issue is that you need to power your system off and on.
If I got you right, you're saying that I have to turn the system off, swap the drives, power it back on, and the system will not notice that the one-disk volume has had its disk changed, right?

There are issues with this approach, but it is doable without taking your system down.
What kind of issues, and why? Is it something with ZFS? I would have expected that hot-swapping should not lead to issues; otherwise, what would be the point of having such a technology?

As I have mentioned earlier, the safest and least stressful way to handle backups would be to have a dedicated backup FreeNAS system. You don't need all the bells and whistles you would want on your main system.
Okay, got your point. But then the backup system would have to be located somewhere else to get the physical separation, correct? As I have already made some investment in the new system, I'm not sure if I could/should/would go for additional investments. You know, to keep up the peaceful life at home :)

I would only use replication for such a backup strategy
Maybe that's something I really don't understand: what's the difference between replication and what rsync does? After all, the system has to compare which files have changed, doesn't it?

So what are my options with my existing system, I'm asking myself? Reduce the data pool to three disks, add another "pool" with only one disk (is this still called a pool?) and run the rsync job to this single disk? That sounds equally stressful. Also, I would have to buy a fifth disk for this additional "pool" to always have one disk physically separated.

Honestly, I'm quite confused and kind of lost... Four disks in the data pool seems an unreasonable amount to me in a mirrored setup, and this makes me think that there actually is no better setup than my current one with an external USB drive and fortnightly backups. So the whole hot-swap setup would be useless.

Know what I mean?

Regards
Martin
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
That made me think a lot. It makes a lot of sense, because the system must make sure that every single file is in sync, and thus the disk has heavy I/O to handle.
Not just the one disk, all of them. However, resilvering will only take into account existing blocks. So if your pool is 10% full, then resilvering will only touch that 10%. This is a distinction from hardware-based RAID solutions, which go through the entire disk.

If I got you right, you're saying that I have to turn the system off, swap the drives, power it back on, and the system will not notice that the one-disk volume has had its disk changed, right?
Turning the system off and then removing the backup volume allows you to remove the volume without having to detach it. At the same time, if you are planning on reinserting the same backup pool, then you won't have to attach it.
What FreeNAS does is, upon startup, read its config file and try to match the disks that are installed. When the volumes and disks check out, FreeNAS will mount them automatically to make them available.
If they were detached (not defined in the GUI), you will have to attach them and make sure you have the encryption key file and passphrase.
When attaching a disk or pool while FreeNAS is already running, the content of the pool will be analyzed, and it will take longer for the pool to be made available.
It is easier from a usability standpoint, but the power cycles are extra stress on the system. If you don't do it often, that's less of a problem, but if you do it regularly you may cause premature failures.

What kind of issues, and why? Is it something with ZFS? I would have expected that hot-swapping should not lead to issues; otherwise, what would be the point of having such a technology?
See previous answer. But hot-swapping has to be planned. If you detach one disk and remove the wrong one, that is not good.

Okay, got your point. But then the backup system would have to be located somewhere else to get the physical separation, correct? As I have already made some investment in the new system, I'm not sure if I could/should/would go for additional investments. You know, to keep up the peaceful life at home :)
Ideally, if you have a relative or friend who can host the backup system, that would be best if you don't want to keep cold storage, and the chances of having replication done often and as needed would be better.
However, nothing prevents you from having a small PC attached to the LAN you are on. The PC could sit right next to the main FreeNAS box. As long as you handle proper cold storage cycles, you won't lose too much in case of a major issue.

Maybe that's something I really don't understand: what's the difference between replication and what rsync does? After all, the system has to compare which files have changed, doesn't it?
When a volume is loaded, ZFS monitors the status of the file system. When you use replication, ZFS looks at the latest snapshot and knows whether the file system has been modified by creating, moving or deleting files (it is a bit more complex than that), but every block is traceable.
So when you add a volume which comes from a previous replication, ZFS can tell you whether the filesystem has changed and whether it came from the same pool being replicated. It uses the snapshot as a means of validation.
In essence, ZFS doesn't care about the status of the files before the last snapshot; it only cares about the difference between the existing snapshot on the backup and what was done after that.
When you look at the last snapshot (per dataset) on the backup volume, if you can find the same snapshot on the main system, it means the data on both backup and main were the same at the time of the snapshot.

Rsync, on the other hand, doesn't work at the ZFS level. Instead, when you run rsync, and especially over the LAN, rsync has to run on both the main and the backup and compare files by performing a checksum (CRC) check or similar. Every file and every folder has to be read through. Only when files appear modified or new does rsync start updating the backup. So if your disk contains 1TB of data, rsync has to read the 1TB on the backup, the 1TB on the main, and transfer things over.
Also, I have always had trouble with rsync not being able to transfer files because of overly long folder/file names, or simply due to different character encodings.
When replication is complete and both last snapshots are present, I know for sure my backup is in a known state I can trust.
Rsync, not so much.

With replication, you could delete files and the files could still be present in an earlier snapshot, so you could possibly recover them. With rsync, not so much, as the file would have been deleted (assuming your backup doesn't have snapshots).
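For what it's worth, snapshots on the backup pool are directly browsable, so recovering a deleted file doesn't even require a rollback; a sketch with placeholder dataset, snapshot, and file names:

```shell
# Every ZFS dataset exposes its snapshots read-only under the hidden
# .zfs/snapshot directory (all names below are placeholders).
ls /mnt/dozer/documents/.zfs/snapshot/

# Copy a file back out of an older snapshot
cp /mnt/dozer/documents/.zfs/snapshot/auto-weekly-2020-01/report.odt \
   /mnt/tank/documents/
```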

So what are the options with my existing system, I'm asking myself? Reduce the data pool to three disks, add another "pool" with only one disk (is this still called a pool?) and do the rsync-job to this single disk? That sounds to be equally stressful. Also I would have to buy a fifth disk for this additional "pool" to have one disk physically separated at all times.
As an analogy, a pool or volume could be thought of as a train. You have a locomotive and a few wagons. When cows look at the train while eating grass, they see only a train. They don't care if it has 1 wagon or 10 wagons.
So you could have 1 disk in a pool, or any number. As long as all the disks present in that pool are accounted for, the system will be happy. If a disk is missing, the pool either goes degraded or becomes offlined.

Honestly: I'm quite confused and kind of lost... Four disks in the data pool seem to be an unreasonable amount for me in a mirrored setup and this makes me think that there actually is no better setup than my current setup with an external USB drive and fortnightly backups. So the whole hotswap setup is useless.
4 mirrored disks in a pool would benefit I/O performance and increase redundancy, and in your case might be overkill.

Doing replication to a single disk is possible, but you want redundancy at a minimum if this is the only backup disk, so that would be bad practice.
If you have a few disks, each one being a replicated version of the main, then you increase your chances of having your data in a safe place.
Remember, data corruption will happen: either a disk fails entirely or a few sectors get corrupted. If you have a few disks, at least you can rely on those to recover lost data, if needed.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Mahaha,

Here, I enforce the 3-copies rule as described in my signature. That way, everything is covered. The setup will survive whatever the first incident is. In theory, 2 incidents could take down my data, but they would have to be extreme, perfectly synced in time, and one of them is of extremely low probability.

In the end, it is up to everyone to define their risk appetite and design the solution according to it.
 

mahaha

Cadet
Joined
Feb 3, 2020
Messages
8
If they were detached (not defined in the GUI), you will have to attach them and make sure you have the encryption key file and passphrase.
When attaching a disk or pool while FreeNAS is already running, the content of the pool will be analyzed, and it will take longer for the pool to be made available.
What about this: power off the system, swap one disk and boot the system again? Is the identification done through the UUID? If so, this would not be an option of course.

As long as you handle proper cold storage cycles, you won't lose too much in case of a major issue.
Yes, and that was what I wanted to have kind of automated. I would have removed the disk on Monday and reinserted, in the evening, the disk that was the "cold storage" the week before.

Actually, I don't see much difference from a normal backup. The swapping method seems to me to have more disadvantages, because with a backup using rsync, for example, I can do the incremental stuff.

I mean: what's your opinion? Snapshots are fine and all, but they remain on the same system. Maybe I'm still struggling with ZFS and don't see its magic; I haven't found a proper concept for me.

So when you add a volume which comes from a previous replication, ZFS will tell you if the filesystem has changed and whether it came from the same pool being replicated. It uses the snapshot as a means of validation.
In essence, ZFS doesn't care about the status of the files before the last snapshot, because it only cares about the difference between the existing snapshot on the backup and what was done after that.
When you look at the last snapshot (per dataset) on the backup volume, if you can find the same snapshot on the main system, it means the data on both backup and main were the same at the time of the snapshot.
Sorry, I'm not a native speaker: what is the essence in this paragraph? I see that you understood many concepts, but still I'm unable to adapt it to my situation.

When cows look at the train while eating grass, they see only a train. They don't care if it has 1 wagon or 10 wagons.
Love that analogy! :D

If you have a few disks, each one being a replicated version of the main, then you increase your chances of having your data in a safe place.
Remember, data corruption will happen: either a disk fails entirely or a few sectors get corrupted. If you have a few disks, at least you can rely on those to recover lost data, if needed.
I'm thinking about this:
- reducing the main pool to three disks
- buy another disk
- create a new pool with only one disk ("backup")
- figure out how to set up replication to the backup volume
- somehow swapping the backup-disk on the volume every week

Isn't this the way "in between"?

Appreciate your time, Apollo, great contribution. Hopefully, my questions will also help others..

Regards
Martin
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
What about this: power off the system, swap one disk and boot the system again? Is the identification done through the UUID? If so, this would not be an option of course.
ZFS is most likely using UUIDs to recognize which pool a disk belongs to.
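You can see this from the command line: each pool carries a numeric GUID in its on-disk labels, and `zpool import` with no arguments lists the pools it finds on attached disks along with those ids (the pool name below is a placeholder):

```shell
# With the pool disconnected/exported, list pools available for import;
# the output shows each pool's name, numeric id (GUID) and member disks.
zpool import

# Import by name, or by the numeric id if two pools share a name
zpool import backup_1
```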


Yes, and that was what I wanted to have kind of automated. I would have removed the disk on Monday and reinserted, in the evening, the disk that was the "cold storage" the week before.

Actually, I don't see much difference from a normal backup. The swapping method seems to me to have more disadvantages, because with a backup using rsync, for example, I can do the incremental stuff.

I mean: what's your opinion? Snapshots are fine and all, but they remain on the same system. Maybe I'm still struggling with ZFS and don't see its magic; I haven't found a proper concept for me.
When you perform replication, the snapshots on the main volume are used to send the relevant blocks to the backup. On the backup, a new set of snapshots is created with the same names as found on the main volume.
When you replicate from the main volume to another volume which already contains snapshots, the system searches for the last snapshot on the backup and checks whether it exists on the main volume. If it does, then replication will be incremental, and only the newer snapshots will be replicated.

With rsync, comparison is at the file level and each file is analyzed. This is not efficient.
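At the command level, the replication Apollo describes boils down to `zfs send`/`zfs receive` between snapshots; a sketch with placeholder pool, dataset, and snapshot names:

```shell
# Initial full replication: send the first snapshot to the backup pool
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | zfs receive backup_1/data

# Incremental replication: only the blocks that changed between snap1
# and snap2 are sent, regardless of how large the dataset is.
# (The target must be unchanged since snap1, or use 'zfs receive -F'.)
zfs snapshot tank/data@snap2
zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive backup_1/data
```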


Sorry, I'm not a native speaker: what is the essence in this paragraph? I see that you understood many concepts, but still I'm unable to adapt it to my situation.
"in essence": Essentially, basically.

I'm thinking about this:
- reducing the main pool to three disks
- buying another disk
- creating a new pool with only one disk ("backup")
- seeing how I can set up replication to the backup pool
- somehow swapping the backup disk in the pool every week

Isn't this a kind of middle way?
If you want to do cold storage with 2 pools (1 disk each), then you will want to give each pool a unique name, such as backup_1, backup_2...
Don't give the pools long names, as the pool name eats into the maximum dataset/file path length.

If you really want to power-cycle your system every time you swap a disk, it will be harder on your system. But doing so will let you swap backup_1 for backup_2 without having to make any changes to the system.
FreeNAS will warn you that one of the disks is offline, because it is not connected. This isn't a problem.
If replication is automated, FreeNAS will simply continue doing incremental replication.

You will also need to create 2 distinct replication tasks, one task to replicate to backup_1 and another to replicate to backup_2.
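In FreeNAS these would be two GUI replication tasks, but the underlying commands amount to something like the following dry-run sketch. The device node `/dev/da4` stands in for whatever disk sits in the hot-swap bay, and the snapshot names are placeholders; `run` only prints each command:

```shell
#!/bin/sh
# Dry-run sketch of Apollo's two-pool rotation scheme. /dev/da4 and the
# snapshot names (@prev, @now) are placeholders.
run() { echo "# $*"; }

# Week A: create the first rotation pool on the disk in the bay.
run "zpool create backup_1 /dev/da4"
# Week B: after swapping in the second disk (same bay, new pool name).
run "zpool create backup_2 /dev/da4"

# One replication task per pool; each targets its own destination:
run "zfs send -i tank/data@prev tank/data@now | zfs receive backup_1/data"
run "zfs send -i tank/data@prev tank/data@now | zfs receive backup_2/data"
```

Because each pool has a distinct name, whichever disk happens to be inserted can be imported without ambiguity, and its matching replication task simply picks up where it left off.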

A note of caution: I would create snapshots with a long enough lifespan (e.g. 6 months), so that if you were not able to swap the disks for weeks or months, there would still be a good chance that the last snapshot on your backup is still present on the main system.
 

mahaha

Cadet
Joined
Feb 3, 2020
Messages
8
Hi,

Apollo, again: thank you for your patience.

After some experiments I think I finally understand replication and its connection to snapshots. It took some time because Debian is my "home turf" and these concepts are new to me.

Now I've had a new idea which I wanted to discuss, or rather, ask for your kind review of and feedback on:

- still 4 disks (can become 5 later, don't know yet)
- two pools: data and backup
- "data" consists of two mirrored disks
- "backup" consists of one disk at a time: for each of the remaining disks (let's name them "disk3" and "disk4") I create "the same" pool named "backup", but only one of them is active in the system at any time

Here comes the trick: the aforementioned pool "backup" alternates between disk3 and disk4, and I configure a local replication task from "data" to "backup". Once a week, I export/disconnect "backup", insert the other disk, and re-import the pool "backup".

Disadvantage: I have some manual work to do each time.
Advantage: a disk with all my data is always stored somewhere else, the stress on the disks is (as far as I understood you) lower than with a full rsync, and I leverage the full potential of replication.
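The weekly swap itself boils down to an export/import pair. A dry-run sketch (pool name "backup" as in the plan above; `run` only prints the commands):

```shell
#!/bin/sh
# Dry-run sketch of the weekly disk swap for a pool named "backup".
run() { echo "# $*"; }

run "zpool export backup"   # flush all data and detach the pool cleanly
# ...physically pull disk3 from the bay and insert disk4...
run "zpool import backup"   # only one "backup" pool is connected at a
                            # time, so importing by name is unambiguous
```

Since disk3 and disk4 are never connected at the same time, the two identically named pools never collide; if both were ever plugged in together, you would have to import by the pool's numeric ID instead of its name.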

WDYT?

Regards
Martin
 

mahaha

Cadet
Joined
Feb 3, 2020
Messages
8
The data on my main pool -- named 'tank' -- amounts to a little over 3TB. I've created a second pool -- named 'dozer' -- which is made up of a single 6TB disk. My data easily fits on a disk of this size; you may need a larger disk, depending on your requirements. I use replication to back up my main pool ('tank') to this single-disk pool ('dozer').


I set up an additional disk with exactly the same pool layout as the first 'dozer' disk. This lets me rotate these two disks between my safe and my NAS system -- using the manual steps I described above -- so that I always have an online backup of my primary pool along with an offline backup in my safe.

Oh dear, I just realized that my setup is exactly yours *facepalm* - I didn't mean to steal your credit!
 

Graey

Cadet
Joined
Aug 16, 2021
Messages
1
I'm considering a similar setup to Spearfoot's and mahaha's. However, I was wondering whether it would work to have 2 mirrored disks at a time in the "backup" pool. The idea is that I could have 3 backup drives in total: drives 1 and 2 are in a mirrored pool, drive 3 is offsite. Each week I would rotate a drive out; for example, take drive 1 out of the backup pool and put drive 3 in. Now drives 3 and 2 would be in the backup pool and drive 1 would be offsite. In my head this makes sense, because then I wouldn't need to make sure I'm in between backup runs when doing the swap - but maybe that's not right? Are there reasons this wouldn't work, or disadvantages to it?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Drives 1 and 2 are in a mirrored pool,

That would still be only a single copy: both drives are exposed to the same physical threat (e.g. fire) or the same logical threat (e.g. intrusion). Also, if that mirror were in the same server as your original, it would not even count as a copy at all, because it may very well fail in the same incident (fire; intrusion) as the original.

Also, be aware that repeatedly moving physical drives is not a good thing: you increase the risk of mechanical failure by a large factor. And because your offline drive has no redundancy, it is a pretty weak copy and does not offer great protection.

Your backup pool needs to be in a separate server. Once you have a separate server, putting it offsite will give you a solid copy No. 2 (your original always being copy No. 1). Then you can use your 3rd drive to rsync a copy to it from your workstation / laptop and put it back on the tablet. That way you reduce physical movement, and you have 3 copies, one offsite and one offline, so you have a complete solution.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
I'm considering a similar setup to Spearfoot's and mahaha's. However, I was wondering whether it would work to have 2 mirrored disks at a time in the "backup" pool. The idea is that I could have 3 backup drives in total: drives 1 and 2 are in a mirrored pool, drive 3 is offsite. Each week I would rotate a drive out; for example, take drive 1 out of the backup pool and put drive 3 in. Now drives 3 and 2 would be in the backup pool and drive 1 would be offsite. In my head this makes sense, because then I wouldn't need to make sure I'm in between backup runs when doing the swap - but maybe that's not right? Are there reasons this wouldn't work, or disadvantages to it?
@Heracles has a point about the drives being in the same server. Though removing the drives does represent a risk, I still do it, because having a remote location with a backup NAS, AND reasonable network speed between the two sites, can be problematic.


One way to look at your 3 backup disks is that each is a potential point to go back in time to. If you only have 2 points (the mirror and the single disk off-site), then you can only go back in time twice.

If, on the other hand, you use each backup disk independently, you can go back in time 3 times.

Plus, if the backup disks are not full, you can use snapshots on them to have even older backups. That is what I do. When I start a new backup, I check the available space, and if it seems tight, I delete the oldest snapshot(s) on the backup disk until I have enough space. When my backup is done, I snapshot it again for next time. This actually allows me to go back in time years, (because my usage requirements are modest, but my backup drives are large).
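The prune-then-snapshot routine described above could look roughly like this dry-run sketch. The pool/dataset names and the specific snapshot to destroy are made up for illustration; `run` only prints each command:

```shell
#!/bin/sh
# Dry-run sketch of pruning old snapshots on a backup disk before a new
# backup. Pool "backup", dataset "data" and snapshot names are examples.
run() { echo "# $*"; }

# 1. Check how much space is left on the backup pool.
run "zpool list -H -o free backup"

# 2. If it looks tight, list snapshots oldest-first and destroy from the
#    top of the list until there is enough room.
run "zfs list -t snapshot -o name -s creation -r backup"
run "zfs destroy backup/data@2018-01-07"   # oldest one; repeat as needed

# 3. After the backup finishes, snapshot the result for next time.
run "zfs snapshot backup/data@$(date +%Y-%m-%d)"
```

Because each completed backup ends with a snapshot, the backup disk accumulates restore points of its own, which is what makes "going back years" possible when the drive is large relative to the data.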
 