Manually (on-demand) replicating entire main pool to external USB drive (for physical offsite emergency)

Joined: Oct 22, 2019 | Messages: 3,589
Disclaimer: I use bold text to highlight important points for visitors who quickly scroll through forum posts. I don't mean for it to appear as if I am yelling at the reader.

Currently, as of FreeNAS 11.2-U6, there is no way to manually replicate your entire main pool to an external USB drive using only the official web interface buttons and menus, at least not without also creating a redundant Periodic Snapshot Task. Such a backup would be useful in an emergency where you can no longer access your main pool (corruption, loss of disks, lost encryption keys, etc.), but you cannot afford, or do not have the hardware or time, to purchase, build, and configure a separate FreeNAS system for routine remote replication.

Until such a feature exists in the GUI, here is the method I am using, which comes very close to the same level of convenience. If anyone sees anything wrong with this method (even a potential risk), please let me know! I have tested it several times and it appears to work, but it lacks any status or progress indicator, so I am left to "guess" when the task is complete. I can also check my email, since the task is configured to email me when it finishes. (You can always see whether it is currently running by clicking the Task Manager icon in the upper-right.)

Here are the steps I took, including my initial replication, followed by the script I created under Tasks > Cron Jobs.



---



One-Time Initial Replication to USB Drive

Before any scripts or incremental backups, I need to replicate my entire main pool to the USB drive. This is done only once. All subsequent backups will be done with the "manual, on-demand" cron job explained later in this forum post.

1) First I make sure no other tasks or data transfers are taking place. This initial backup can take a few hours or more, depending on how much data is involved.

2) In the main menu of the FreeNAS web interface, I click on Shell.

Next I do the following commands in order.

3) Physically plug in the prepared USB drive, assuming it has already been initialized with a fresh, empty pool named usbdrivepool of equal or greater capacity than your main pool. (A sketch of preparing such a pool from the Shell follows these steps.)

4) Import the pool, usbdrivepool, under Storage > Pools > Add.

5) Create a snapshot of the entire main pool, recursively, including all child datasets.
zfs snapshot -r mainpool@offsite-backup-new


6) Back up this entire snapshot of the main pool (mainpool@offsite-backup-new), created just now, to the USB drive containing the fresh, empty pool named usbdrivepool.
zfs send -R mainpool@offsite-backup-new | zfs recv -vFd usbdrivepool

This can take a very long time, so make sure to keep the Shell window open to monitor its progress.
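
For reference, here is a minimal sketch of preparing such a USB pool from the Shell beforehand (the device name da5 is purely an example; the pool can also be created through the same Storage > Pools > Add wizard if you prefer the GUI):

Code:
# Identify the USB disk first -- da5 below is only an example device name.
camcontrol devlist

# Create a single-disk pool named usbdrivepool on that device.
# WARNING: this wipes whatever is on the target disk, so double-check the device name.
zpool create usbdrivepool /dev/da5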



---



Creating the Task

Now that the one-time initial backup is done, the following is for recurrent, on-demand manual incremental backups, which should yield a perfect clone of your main pool to the external USB drive. It will transfer only the changes since the previous backup.

1) First, I create a cron job under Tasks > Cron Jobs > Add, and configure it in such a way that it will never run on its own: it can only be run on demand, manually.

Under Schedule a Cron Job, I select the drop-down menu entry Custom, then select Monthly under Presets, set Minutes to 0, Hours to 0, and Days to 1, select only Jan under Months, and click Done. (This is simply an arbitrary placeholder, since every task requires a schedule; January 1 at midnight is nothing special.) The task will never run on its own, because I will disable it in the next step.

2) I uncheck Enabled.

3) I uncheck Hide Standard Error.

4) I uncheck Hide Standard Output.

5) Under Run as User I select root from the drop-down menu.

6) Under Description I write: Replicate entire main pool to external USB drive for offsite safe-keeping

7) Under Command I write out the following commands on a single line, separated by semicolons so they run in sequential order, with the combined output emailed to me when the job completes. I explain below what I assume each part does. Yes, the parentheses are intentional: the commands run in a subshell whose output is piped to mail.

( zfs rename -r mainpool@offsite-backup-new mainpool@offsite-backup-old; zfs snapshot -r mainpool@offsite-backup-new; zfs send -RI mainpool@offsite-backup-old mainpool@offsite-backup-new | zfs recv -vFd usbdrivepool; zfs destroy -r mainpool@offsite-backup-old; zfs destroy -r usbdrivepool@offsite-backup-old; ) | mail -s "FreeNAS Replication to USB Drive" "myemail@example.com"
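
If the one-liner is hard to read, the same sequence can also be kept as a small shell script and the cron Command pointed at it instead (the path /root/usb-replicate.sh is just an example name I made up, not part of my actual setup):

Code:
#!/bin/sh
# Example /root/usb-replicate.sh -- identical commands to the one-line
# cron Command above, just spread out for readability.
(
    # Keep the previous backup snapshot around as "-old" for the increment.
    zfs rename -r mainpool@offsite-backup-new mainpool@offsite-backup-old

    # Take a fresh recursive snapshot of the whole main pool.
    zfs snapshot -r mainpool@offsite-backup-new

    # Send only the changes between -old and -new to the USB pool.
    zfs send -RI mainpool@offsite-backup-old mainpool@offsite-backup-new | zfs recv -vFd usbdrivepool

    # Clean up the now-redundant -old snapshots on both pools.
    zfs destroy -r mainpool@offsite-backup-old
    zfs destroy -r usbdrivepool@offsite-backup-old
) | mail -s "FreeNAS Replication to USB Drive" "myemail@example.com"

The cron Command would then simply be: sh /root/usb-replicate.sh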



---



Manually Running the Backup to the USB Drive

Whenever I wish to clone my main pool to my emergency backup USB drive, I do the following.

1) Physically plug in the USB drive.

2) Import usbdrivepool through Storage > Pools > Add.

3) Go to Tasks > Cron Jobs > click the three-dot menu drop-down to the right of the "Replicate entire main pool to external USB drive for offsite safe-keeping" task > click on Run Now.

4) Check my email until I see a confirmation that the task has completed. (Or keep checking with Task Manager, which will perpetually display 20% completion until the task is truly finished.)

5) Optionally, run some checks to make sure everything went smoothly (see the sketch after these steps).

6) Disconnect usbdrivepool through Storage > Pools > click the cogwheel icon next to its name > Export / Disconnect.

7) Physically unplug the USB drive.

8) Place the USB drive somewhere safe, offsite. It can be retrieved in an emergency, or whenever another manual incremental backup needs to be made, or kept in a secure briefcase for security and archival purposes.
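
For step 5, these are the kinds of read-only checks I have in mind (a sketch, not an exhaustive verification):

Code:
# Compare space used on the source and the backup (they won't match exactly,
# but they should be in the same ballpark).
zfs list -o name,used,avail mainpool usbdrivepool

# Confirm the latest snapshot actually exists on the USB pool.
zfs list -t snapshot -r usbdrivepool | grep offsite-backup-new

# Optionally scrub the USB pool to verify checksums (this can take hours).
zpool scrub usbdrivepool
zpool status usbdrivepool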



---



What the Manual On-Demand Cron Job Supposedly Does

The cron job performs the following steps, in this specific order.


Rename the current all-inclusive snapshot ("-new"), used for the previous backup, to an "-old" snapshot, to prepare it for the next incremental backup.
zfs rename -r mainpool@offsite-backup-new mainpool@offsite-backup-old


Make a new all-inclusive snapshot ("-new"), which will next be used to transfer only the changes since the previous all-inclusive snapshot ("-old").
zfs snapshot -r mainpool@offsite-backup-new


Replicate the main pool (mainpool) to the USB pool (usbdrivepool), using an incremental backup of the changes from the -old to the -new snapshot.
zfs send -RI mainpool@offsite-backup-old mainpool@offsite-backup-new | zfs recv -vFd usbdrivepool


Delete the "-old" all-inclusive snapshots, which are no longer needed anymore. (The "-new" all-inclusive snapshots will be used for the next incremental backup, obviously being renamed to "-old", since a fresh all-inclusive "-new" snapshot is made immediately before the next incremental backup.)
zfs destroy -r mainpool@offsite-backup-old; zfs destroy -r usbdrivepool@offsite-backup-old


All output from the subshell is then piped to mail and sent with the subject "FreeNAS Replication to USB Drive". Hopefully any errors are included in the email as well.
mail -s "FreeNAS Replication to USB Drive" "myemail@example.com"


---



Questions, Comments, Criticisms?

I need to know if anything is wrong, risky, or could be done better. I don't want myself or anyone to risk any data loss if anything I wrote could possibly cause irreversible damage!


---


UPDATE June 17, 2020:

  • I cleaned up the formatting to make the post easier to follow.
  • This is meant to fill the need for a quick and simple "entire pool" backup to an easily accessible USB drive. Many of us don't have the money, time, or second location needed to configure a dedicated, fully automated server-to-server replication task, and I believe something is better than nothing. I have used this method many times and it has worked exactly as expected. It is basically the equivalent of copying everything to a USB drive while retaining the ZFS pool/dataset structure.
  • There is still no way to achieve this purely from the GUI menus, since the GUI route still requires creating a Periodic Snapshot Task, which maps poorly onto something done manually (i.e., plugging and unplugging a USB drive).
 

stevecyb | Dabbler | Joined: Oct 17, 2019 | Messages: 30
Question: Will this work with a smaller drive--provided the USB drive is large enough to hold all your data? Or does ZFS impose the requirement that the backup drive be as big as the pool even if there's not much data in the pool?

Comment (with an implicit question): It looks like this can probably be adapted for rotating backups, as follows. Do as you describe for disk 1, take it off site, then later do the same thing for disk 2 (using a different name for the full snapshot). Bring disk 1 back, and do an incremental snapshot using the first snapshot, then take disk 1 offline and take it offsite. Later, bring disk 2 back, and repeat the process on its snapshot. Is this correct, or am I missing something that would preclude doing that?

Another comment: I don't see any reason this couldn't be adapted to do the update backups automatically (for people who might want to do it that way), then, when you think you've done enough stuff since you've taken a backup offsite, do the swap as described above (adjust the script to utilize the "other" snapshot, or better yet have it detect which drive is plugged in).
 
Joined: Oct 22, 2019 | Messages: 3,589
Question: Will this work with a smaller drive--provided the USB drive is large enough to hold all your data? Or does ZFS impose the requirement that the backup drive be as big as the pool even if there's not much data in the pool?

Good question. I believe the destination pool must have the same capacity or greater? I think there are ways around this, but hopefully someone with more first-hand experience can chime in.

Comment (with an implicit question): It looks like this can probably be adapted for rotating backups, as follows. Do as you describe for disk 1, take it off site, then later do the same thing for disk 2 (using a different name for the full snapshot). Bring disk 1 back, and do an incremental snapshot using the first snapshot, then take disk 1 offline and take it offsite. Later, bring disk 2 back, and repeat the process on its snapshot. Is this correct, or am I missing something that would preclude doing that?

Makes sense to me! Perhaps two "manual" jobs with slightly different snapshot names to tell them apart, for example based on the color of the USB enclosure:
Code:
@offsite-backup-red-new
@offsite-backup-red-old
@offsite-backup-blue-new
@offsite-backup-blue-old
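
For example, the "red" job's Command might look like this (a sketch only; usbdrivepool-red is a made-up pool name for the red enclosure, and the blue job would be identical with "blue" substituted):
Code:
( zfs rename -r mainpool@offsite-backup-red-new mainpool@offsite-backup-red-old; zfs snapshot -r mainpool@offsite-backup-red-new; zfs send -RI mainpool@offsite-backup-red-old mainpool@offsite-backup-red-new | zfs recv -vFd usbdrivepool-red; zfs destroy -r mainpool@offsite-backup-red-old; zfs destroy -r usbdrivepool-red@offsite-backup-red-old; ) | mail -s "FreeNAS Replication to USB Drive (red)" "myemail@example.com"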


Another comment: I don't see any reason this couldn't be adapted to do the update backups automatically (for people who might want to do it that way), then, when you think you've done enough stuff since you've taken a backup offsite, do the swap as described above (adjust the script to utilize the "other" snapshot, or better yet have it detect which drive is plugged in).

That would work in theory. The only concern I have is that there is no graphical progress indicator for such an automatic cron job, so you would have to dig into the shell to make sure the replication to the currently connected USB drive is not still in progress. Since you are not running it on-demand, you might forget the schedule and not have a time-frame for when to check your email for the completion message. Otherwise, it should work fine! Another issue, which might not be a big deal, is that some external USB drives, such as Seagate and Western Digital models, have firmware that forces them to sleep after being idle for a period of time, regardless of any power settings issued by the OS (Windows, Linux, Unix, etc.). The last time I spoke with a WD agent over the phone, he told me there is no way to override the drive's auto-sleep behavior.
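
As a crude workaround, a couple of Shell commands (my own quick checks, not an official status indicator) will show whether the receive is still running:
Code:
# Is a "zfs recv" process still alive?
pgrep -fl "zfs recv"

# Watch write activity on the USB pool (Ctrl+C to stop).
zpool iostat usbdrivepool 5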

You can see why I wish the official FreeNAS web interface had a tool to manually replicate an entire pool to a locally plugged-in USB drive. "Replication to another remote FreeNAS server" is not feasible for many users due to money, time, and everyday life.
 

stevecyb | Dabbler | Joined: Oct 17, 2019 | Messages: 30
Good question. I believe the destination pool must have the same capacity or greater?
You can see why I wish the official FreeNAS web interface had a tool to manually replicate an entire pool to a locally plugged-in USB drive. "Replication to another remote FreeNAS server" is not feasible for many users due to money, time, and everyday life.

Indeed.

All my USB drives are black, so I'd have to come up with a different labelling scheme. :)

I was thinking of the automated backups being only incremental ones and taking place once a week, supplemented by any on-demand call you might make. ("Gee I did a lot of work last night, let me back it up right now and offsite it.") You wouldn't necessarily take the disk offsite every time, but wait until you thought a significant amount of changes had happened to your data.
 

stevecyb | Dabbler | Joined: Oct 17, 2019 | Messages: 30
Actually I've played around a bit, and there might be a simpler solution.

Create a periodic snapshot task for the source dataset. Then disable it, or alternatively set it to happen once a year. (I am not sure you can set up replication for a disabled snapshot task. If you can't, then set it to happen once a year. Or just leave it as is... I think that will work.)

Set up replication, source to destination.

Now when you want to copy stuff over or update it, do a manual snapshot on the source. The system will kick in and do the replication automatically; the first time it will copy everything, and subsequent times just the updates. (I am making an assumption here... that your destination disk can be absent for several snapshots, but when present ALL of those "missing" snapshots will be replicated.)

When you're ready to take your external disk offsite, export it, and disable the replication to that disk. When you bring it back, import it and turn on the replication task.

If you want to do two disks as rotating backups, give the pools on each different names (e.g., traveler1 and traveler2). Create separate replication tasks for each but be sure to disable any task for the disk(s) not plugged in.

No need for cron files or complicated renaming schemes.

NOTE: I have not verified the rotating backups aspect of this yet. But from what I have seen so far, it ought to work.
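
(As a sanity check on that assumption, once the disk is imported and the replication has had time to run, something like this lists the newest snapshots now present on the backup pool, e.g. traveler1:)
Code:
zfs list -H -t snapshot -o name,creation -s creation -r traveler1 | tail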
 

blueether | Patron | Joined: Aug 6, 2018 | Messages: 259
Why have different names on each offsite disk pool? I think it would work with two disks with the same pool name...?
 

stevecyb | Dabbler | Joined: Oct 17, 2019 | Messages: 30
Why have different names on each offsite disk pool? I think it would work with two disks with the same pool name...?


I don't know.

I do know I did a replication to a USB drive, but the icon at the top indicating it was "replicating" never stopped moving side to side even though the replication tasks area indicated everything was up to date.

But my other system wouldn't load the drive due to insufficient replications. So something went sideways on me, I don't know what.
 

Arwen | MVP | Joined: May 17, 2014 | Messages: 3,600
I wrote a resource about using locally attached disks for backups, which includes the script I am using. It's all shell-command driven, which is what I am most familiar with. Here is a link to the resource:

How to: Backup to local disks

One note: the backup disk only has to be as large as the data to be stored (plus about 20% free space after the copy). So a source pool of 8TB with only 3.5TB used can be backed up to a 4TB disk.

In my case, I made the backup disk keep multiple snapshots of older backups, so I have a bit of history. When the backup disk gets too full (>85%), I delete the oldest snapshots until it drops back below 80%. Since I planned on adding another large disk to my backup rotation, I will probably buy a 10TB disk or larger; that would let me keep more history.
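
(Not the actual script from the resource, just a rough sketch of that pruning idea, with "backuppool" as a placeholder pool name: while the backup pool is more than 80% full, destroy its oldest snapshot.)
Code:
#!/bin/sh
POOL=backuppool   # placeholder name for the backup pool
while [ "$(zpool list -H -o capacity "$POOL" | tr -d '%')" -gt 80 ]; do
    # Oldest snapshot on the backup pool, sorted by creation time (ascending).
    OLDEST=$(zfs list -H -t snapshot -o name -s creation -r "$POOL" | head -n 1)
    [ -z "$OLDEST" ] && break
    echo "Destroying $OLDEST"
    zfs destroy "$OLDEST"
    # Note: freed space may take a moment to be reflected in the capacity figure.
done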
 

stevecyb | Dabbler | Joined: Oct 17, 2019 | Messages: 30
I wrote a resource about using locally attached disks for backups, which includes the script I am using. It's all shell-command driven, which is what I am most familiar with. Here is a link to the resource:

How to: Backup to local disks

One note: the backup disk only has to be as large as the data to be stored (plus about 20% free space after the copy). So a source pool of 8TB with only 3.5TB used can be backed up to a 4TB disk.

In my case, I made the backup disk keep multiple snapshots of older backups, so I have a bit of history. When the backup disk gets too full (>85%), I delete the oldest snapshots until it drops back below 80%. Since I planned on adding another large disk to my backup rotation, I will probably buy a 10TB disk or larger; that would let me keep more history.

Thank you!

Hope the weather is nicer in Rivendell than it is on non-Middle Earth right now.

Right now I'm running into an issue where, if I create the pool on the NAS, the Linux box running ZFS can't even see the pool with the zpool and zfs commands (though it shows in gparted as filesystem type "zfs"), and if I create the pool on the Linux box and then move the disk to the NAS (a Mini XL+), the NAS GUI doesn't see the pool on the disk (though it at least CAN see that there's a disk there). Since part of the point of the external backup is to guard against the NAS box itself crashing, being destroyed, or stolen, I *really* want other systems to be able to read the backup! (It took three weeks to receive the Mini XL+ after I ordered it.)

EDIT: Maybe all I need to do is run "camcontrol rescan all". I'll give it a shot tonight.
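
(For the record, two generic shell checks that show whether the disk and any pool on it are visible at all, independent of the GUI:)
Code:
# Does FreeBSD see the USB disk at all?
camcontrol devlist

# List any importable pools found on attached disks (without importing them).
zpool import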
 

Arwen | MVP | Joined: May 17, 2014 | Messages: 3,600
@stevecyb, Yes, the weather is decent (it helps to have one of the great rings of power :smile:).

On a more serious note:
Yes, you can create a backup disk that will work with both ZFS on FreeBSD / NAS & ZFS on Linux. My backup disks were created to be that way.

Basically you create a list of the ZFS features the two systems have in common. The ZFS on Linux manual page for zpool-features lists what ZoL supports. I don't have my FreeNAS available at the moment to check whether it also has a manual page for zpool-features. You then work out the set of features common to both.

You then use a command similar to this (probably on either OS):
Code:
zpool create -d \
  -o comment="Backup pool created on `date +%Y%m%d`" \
  -o feature@async_destroy=enabled \
  -o feature@lz4_compress=enabled \
  ...
POOL_NAME POOL_DEVICE

That should do it.
 

Adrian | Contributor | Joined: Jun 29, 2011 | Messages: 166
man zpool-features works for FreeNAS

It is quite lengthy.

This summary is for a recently rebuilt 11.2-U6 system, with all features available under FreeNAS enabled.
Code:
root@freenas:~ # zpool upgrade
This system supports ZFS pool feature flags.

All pools are formatted using feature flags.

Every feature flags pool has all supported features enabled.
root@freenas:~ # zpool upgrade -v
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj                           (read-only compatible)
     Snapshots use less space.
lz4_compress
     LZ4 compression algorithm support.
multi_vdev_crash_dump
     Crash dumps to multiple vdev pools.
spacemap_histogram                    (read-only compatible)
     Spacemaps maintain space histograms.
enabled_txg                           (read-only compatible)
     Record txg at which a feature is enabled
hole_birth
     Retain hole birth txg for more precise zfs send
extensible_dataset
     Enhanced dataset functionality, used by other features.
embedded_data
     Blocks which compress very well use even less space.
bookmarks                             (read-only compatible)
     "zfs bookmark" command
filesystem_limits                     (read-only compatible)
     Filesystem and snapshot limits.
large_blocks
     Support for blocks larger than 128KB.
sha512
     SHA-512/256 hash algorithm.
skein
     Skein hash algorithm.
device_removal
     Top-level vdevs can be removed, reducing logical pool size.
obsolete_counts                       (read-only compatible)
     Reduce memory used by removed devices when their blocks are freed or remapped.
zpool_checkpoint                      (read-only compatible)
     Pool state can be checkpointed, allowing rewind later.

The following legacy versions are also supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements
 

stevecyb | Dabbler | Joined: Oct 17, 2019 | Messages: 30
Well, comparing the feature list Adrian quoted with what my Linux man page gives:

FreeNAS has "device_removal", "obsolete_counts", and "zpool_checkpoint" while Linux doesn't.

Linux has "large_dnode", "edonr", and "userobj_accounting" while FreeNAS doesn't.

The list of things they have in common is far larger, so if I were to do as Arwen suggests, I'd probably want to give a list of what I am disabling.

Apparently the FreeNAS extras don't impede Linux from reading the dataset, but a pool created on Linux won't even show up when you go to import it on FreeNAS.

That is of course the acceptable direction for my purposes (ensuring a backup is readable on something other than FreeNAS).
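
To make that concrete, a create command restricted to features both sides list might look roughly like this (a sketch based only on the comparison above, not exhaustive; double-check against zpool-features on both systems, and note that the pool name and device are placeholders):
Code:
zpool create -d \
  -o feature@async_destroy=enabled \
  -o feature@empty_bpobj=enabled \
  -o feature@lz4_compress=enabled \
  -o feature@spacemap_histogram=enabled \
  -o feature@enabled_txg=enabled \
  -o feature@hole_birth=enabled \
  -o feature@extensible_dataset=enabled \
  -o feature@embedded_data=enabled \
  -o feature@bookmarks=enabled \
  -o feature@filesystem_limits=enabled \
  -o feature@large_blocks=enabled \
  -o feature@sha512=enabled \
  -o feature@skein=enabled \
  traveler1 /dev/da5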
 
Joined: Oct 22, 2019 | Messages: 3,589
June 17, 2020: Cleaned up and added some updates.
 

Jip-Hop | Contributor | Joined: Apr 13, 2021 | Messages: 112
A 'manual' offsite backup to an external drive is also part of my strategy, although I have automated it as much as I can. I only need to plug in the backup disk. The snapshots and offsite backup are made automatically, and I receive an email notification when it's safe to unplug the drive. I published my offsite backup approach on GitHub. It's made specifically for TrueNAS SCALE, but I think it could work for TrueNAS CORE too. Please let me know if you decide to try it out :)
 