Offsite Backup via weekly Hot-Swapping

Status
Not open for further replies.

Chiaki

Explorer
Joined
Apr 4, 2016
Messages
51
Hello everyone!
So I'm trying to make Hot-Swap work for me: https://forums.freenas.org/index.ph...ap-degrades-another-volume.44469/#post-297822

Aside from what you can read in that thread, hot-swapping seems to work well as long as I don't use the permanently installed HDDs as spares.
So what I am trying to do is this:

I want to be able to put a big HDD into the Hot-Swap at time A and have a full backup of a specific zpool on it by time B. A could be morning and B evening for example.
The backup should be easily reimportable if shit hits the fan, so maybe a mirrored zpool is the way to go.

Sadly I don't know how to dynamically let FreeNAS mirror a zpool onto another drive, especially when that drive is plugged in and out while the server is running. (I'm not even sure how to do it manually via the WebGUI, let alone in an automated way, which is the goal.)

Do you have any idea how I could accomplish this use case? Plug in a drive while the server is running, have it back up the current state, then pull the drive and put it somewhere safe.

Thanks in advance for reading and your recommendations!
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Your hardware specs would be helpful here.

I think I understand what you are trying to do: you have a large pool, with a bunch of data on it. You want to make a copy of all of this data to a separate super-large hard drive, and then be able to remove it.

The easiest thing to do, in my mind, would be to plug in the drive, import it as a separate pool, and then replicate the main dataset to this new drive. I'm not sure how user-friendly this is, but I can't imagine it would be too difficult, especially if you are doing it regularly.

If you are manually plugging in and removing this drive, I would not recommend a fully automated solution. You could put all the commands in a shell script, for example, but I'd still recommend running it manually after you import the drive.
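As a sketch of that "commands in a shell script" idea, here's a guard that checks the backup pool is actually imported before anything else runs. The pool name backup8tb is an invented placeholder, not something from this thread:

```shell
#!/bin/sh
# Sketch: refuse to do anything until the hot-swapped pool has actually
# been imported (via the GUI, as suggested above).
# The pool name "backup8tb" below is an assumption; substitute your own.

pool_is_imported() {
    # zpool list exits non-zero when the named pool is not imported
    zpool list "$1" >/dev/null 2>&1
}

check_backup_pool() {
    if pool_is_imported "$1"; then
        echo "Pool $1 is imported; safe to start the backup."
    else
        echo "Pool $1 not found. Import the drive first." >&2
        return 1
    fi
}

# Example (run manually after plugging the drive in):
# check_backup_pool backup8tb && echo "replication commands go here"
```

The point of the guard is exactly the manual-run recommendation: the script never assumes the drive is present.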
 

Chiaki

Explorer
Joined
Apr 4, 2016
Messages
51
Hi Nick and thank you for the swift and helpful reply!

You can find my spec by reading this: https://forums.freenas.org/index.ph...ap-degrades-another-volume.44469/#post-298239

EDIT: Do you know a detailed step-by-step guide for this replication you speak of? I have a snapshot task running, but it seems I can't trigger a replication manually, only on a schedule?
Also: Would it be possible to script this somehow? Such as: "Check if drive is there, if yes mount it and perform replication, send an email if done." ?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633

I hate to sound incredulous, but really? Motherboard, CPU, memory? Drive configuration? At a quick glance, that post tells me nothing about your hardware configuration, and if I have to read through the prose to figure out what you're running, it frankly isn't worth my time. I'd rather spend my time here figuring out a solution to your problems than trying to divine how your FreeNAS server is set up.

Do you know a detailed step-by-step guide for this replication you speak of? I have a snapshot task running, but it seems I can't trigger a replication manually, only on a schedule?

There is no GUI way to do manual replication. This post contains the commands to do it quite easily: https://forums.freenas.org/index.php?threads/how-to-copy-one-pool-into-another.16653/#post-86083
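The gist of that approach is a recursive snapshot followed by a send/receive. A hedged sketch with invented pool names ("tank" for the source, "backup8tb" for the drive in the hot-swap bay); the linked post has the authoritative commands:

```shell
#!/bin/sh
# Sketch of a one-time full replication. Pool names are placeholders:
# "tank" for the main pool, "backup8tb" for the hot-swapped drive
# (already imported as its own pool).

replicate_full() {
    src_pool="$1"
    dst_pool="$2"
    snap="$src_pool@manual-$(date +%Y%m%d)"

    # -r: snapshot every dataset in the pool recursively
    zfs snapshot -r "$snap" || return 1

    # -R: send the whole dataset tree with its properties
    # -F: force the target pool to match the incoming stream
    zfs send -R "$snap" | zfs recv -F "$dst_pool"
}

# Example (run manually after importing the target pool):
# replicate_full tank backup8tb
```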

Also: Would it be possible to script this somehow? Such as: "Check if drive is there, if yes mount it and perform replication, send an email if done." ?

You probably could do it, but again, if you are manually installing and pulling the drive, I wouldn't think it would be that much more hassle to run a command or two.

Part of the problem with doing it this way is identifying the new drive properly. Scripts that do what you want, without proper design or testing, tend to break in ways that destroy data, which is obviously not what we want here.

Assuming you are confident that the pool names are static and won't change, I would write a script that does the following; it should only take three lines:
  1. Creates snapshot on main pool
  2. Replicates to secondary pool
  3. Sends email
If you want to get fancy, it wouldn't be too difficult to add some logic that populates the email with details from the replication and tells you whether it succeeded or failed. Again, I would not attempt to automate the detection of the new drive; I would import the new drive from the web GUI, and only once I've confirmed that it has imported successfully would I run my script.
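The three steps above might be sketched like this. All names are invented placeholders (pools "tank" and "backup8tb", the mail address), and the send here is a full one; as suggested, you'd run it by hand after importing the drive:

```shell
#!/bin/sh
# Sketch of the snapshot / replicate / email script.
# Pool names and the address below are assumptions, not from the thread.

SRC="tank"
DST="backup8tb"
SNAP="$SRC@offsite-$(date +%Y-%m-%d)"
MAILTO="you@example.com"

run_backup() {
    # 1. Create a recursive snapshot on the main pool, and
    # 2. replicate it to the pool on the hot-swapped drive
    if zfs snapshot -r "$SNAP" &&
       zfs send -R "$SNAP" | zfs recv -Fdu "$DST"
    then
        # 3. Report the outcome (FreeBSD's mail(1))
        echo "Replicated $SNAP to $DST" | mail -s "Backup OK" "$MAILTO"
    else
        echo "Replication of $SNAP failed" | mail -s "Backup FAILED" "$MAILTO"
        return 1
    fi
}
```

The -F/-d/-u receive flags force the target to match, re-root dataset names under the destination, and skip mounting the received datasets; other flag choices are possible.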

The following documentation should provide all you need to do the snapshots and replications:
 

Chiaki

Explorer
Joined
Apr 4, 2016
Messages
51
Hi Nick and thanks a lot!

Sorry, I didn't mean to offend you. The link I gave was meant to point you to the second line of that post, which in turn links to the thread with my hardware config. (I just wanted to provide the extra info of the meta post along with it.)

I'll write down everything that comes to mind about my current FreeNAS server below, and hope this is the information you were after, so you don't need to click around:

The hardware:
  • CPU: Intel Xeon E5-2620v4
  • Mainboard: SuperMicro X10SRi-F
  • RAM: 4x Crucial 16GB DDR4-2133 CL15 ECC RDIMM CT16G4RFD4213
  • HDD: 5 x WD Red 3TB (WD30EFRX) + 1 8TB Backup-HDD (Toshiba X300)
  • Chassis: Corsair Obsidian 650D (used, in good condition)
  • SSD: Samsung 850 Evo 250GB
  • Hot-Swap: 5.25" to 3.5" SATA hot-swap slot
  • UPS: APC Back-UPS Pro BR900G-GR
  • PSU: Enermax Platimax 500W (80 Plus Platinum)
The setup:
FreeNAS on bare metal; the 5 WD Reds are running an encrypted RAIDZ2 zpool, and the Toshiba is the hot-swap HDD. The zpool is used for CIFS shares and hosts 2 virtual machines (via bhyve).

Please tell me if you need any more information concerning the hardware.

You helped me out a lot by pointing me to the replication commands and telling me that there's no GUI way to do this. Thank you very much!

EDIT/REQUEST: How would I go about this use-case instead:
  1. Put huge drive in the hotswap bay
  2. Mirror zpool onto huge drive along with all snapshots but WITHOUT regular deletion of old snapshots
  3. Receive a mail as soon as the huge drive is full so I can replace it with another one
  4. Label the full drive with the date and put it somewhere safe
  5. Have zpool mirrored on new huge drive but with snapshots from this point on
Maybe this would be a better way!
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Sorry, I didn't mean to offend you.

No offense taken!

Thank you for posting your hardware. That basically addresses any concerns I might have. Usually, when we see weird requests, they come with really poor hardware choices, which can all lead to much bigger problems down the road.

Mirror zpool onto huge drive

I probably wouldn't use mirroring, largely because you'd limit the space on your main pool.

The way it works in ZFS is that a pool is made of one or more vdevs, and each vdev of one or more drives. In your case you'd have one RAIDZ2 vdev with 5x 3TB drives (let's call it vdev1) and one single-drive vdev with the 8TB disk (let's call it vdev2). If you mirrored vdev1 and vdev2, you'd only get 8TB of space. (In practice, ZFS won't even let you attach a mirror disk to a RAIDZ vdev, which is another reason to replicate instead.)
 

Chiaki

Explorer
Joined
Apr 4, 2016
Messages
51
If you mirrored vdev1 and vdev2, you'd only get 8TB of space.
Well that would be bad indeed.
Do you see a way to fulfill this use case without limiting vdev1's usable size, then? I heard something about replicating snapshots to another vdev... would this work? Could I use the GUI for this?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I heard something about replicating snapshots to another vdev

This is exactly the approach I outlined above :rolleyes:. There is no GUI for one-time replication; the only GUI is for periodic replication.
 

Chiaki

Explorer
Joined
Apr 4, 2016
Messages
51
This is exactly the approach I outlined above :rolleyes:. There is no GUI for one-time replication; the only GUI is for periodic replication.
Maybe I am misinterpreting the term "replication" but how would a one-time replication help me with keeping the large drive in the server until it is full with data and snapshots?

Shouldn't it be some kind of "continuous" replication?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Maybe I am misinterpreting the term "replication" but how would a one-time replication help me with keeping the large drive in the server until it is full with data and snapshots?

Shouldn't it be some kind of "continuous" replication?

A replication is a point-in-time event; there is no such thing as "continuous" replication. The only "continuous" setup would be mirroring, but that will not accomplish what I think you are trying to accomplish: for starters, it would limit the size of your pool, and secondly, it wouldn't be a series of backups but rather an exact copy of your pool at the current time, which means it would never get full until your pool gets full.

Which brings me to the following: what would be the purpose, in your mind, of "continuous" replication? What are you trying to protect against?

Usually, continuous backup is used in enterprise settings where data changes constantly. Also, continuous backup is almost never used locally, and instead used to send data to a remote source. This is usually a very expensive setup, and is only warranted if the value of your changed data is high: for example, a bank's electronic ledger. Furthermore, the value of this backup is rarely continuous recovery (being able to restore to any point in time), but rather zero-data-loss recovery (so the only copy that matters is the most recent one).

By performing a continuous backup locally, you have no protection from local failures, like power surges, viruses, hardware failure, etc, which are the usual suspects in needing continuous backup in the first place.

If all you are after is periodic backups, then the replication solution I outlined above will work perfectly. You could set up a cron job to run it weekly, daily, hourly, or however frequently you need.
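For illustration, a weekly cron entry might look like the following. The script path is hypothetical; on FreeNAS the usual way to set this up is a Cron Job task in the web GUI rather than editing root's crontab directly, since manual crontab edits may not survive updates:

```shell
# Run the backup script every Sunday at 03:00
# min  hour  day-of-month  month  day-of-week  command
0      3     *             *      0            /root/bin/offsite-backup.sh
```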

Also, in the replication sense, there is no distinction between data and snapshots. The snapshot is the data. By doing multiple snapshots/replications, you are copying multiple snapshots to the new pool, where ZFS automagically reconciles the snapshots, so the only data the new pool needs is the changed data.
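That reconciliation is what an incremental send relies on: only the delta between the last snapshot both pools share and the new one crosses the wire. A sketch, with invented names; the receive flags are one plausible choice, not the only one:

```shell
#!/bin/sh
# Incremental replication sketch. Only the changed blocks between $old
# and $new are transferred. All names below are placeholders.

send_incremental() {
    src="$1"   # source pool, e.g. tank
    dst="$2"   # backup pool, e.g. backup8tb
    old="$3"   # last snapshot already present on both pools
    new="$4"   # freshly created snapshot

    # -R: recursive stream; -i: incremental from $old to $new
    # -F/-d/-u on the receive side: force, re-root names, don't mount
    zfs send -R -i "$src@$old" "$src@$new" | zfs recv -Fdu "$dst"
}

# Example: send_incremental tank backup8tb weekly-01 weekly-02
```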
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You won't "mirror" the pool to the spare drive, but rather "replicate" it. Each time you perform the replication you'll send the delta from the last replication (snapshot) to the latest snapshot, which would be just before you started the replication.

That would be a good backup.

Eventually, your disk(s) would fill, and you could start purging the oldest snapshots.
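A minimal sketch of that purging step, assuming a backup pool named backup8tb (an invented name): find the oldest snapshot on the backup pool and destroy it.

```shell
#!/bin/sh
# Sketch: once the backup disk fills, reclaim space by destroying the
# oldest replicated snapshot first. "backup8tb" is a placeholder name.

oldest_snapshot() {
    # List snapshots on the pool sorted by creation time, oldest first
    zfs list -H -t snapshot -o name -s creation -r "$1" | head -n 1
}

prune_one() {
    snap=$(oldest_snapshot "$1")
    [ -n "$snap" ] && zfs destroy "$snap"
}

# Example: prune_one backup8tb
```

Destroying a snapshot on the backup pool doesn't touch the source pool, but it does mean you can no longer roll the backup back past that point.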
 

DrJam

Cadet
Joined
Sep 12, 2016
Messages
1
I am watching this also.
I have spent WEEKS googling for info on this.

It seems the only good way to get hot-swap backups is to drop FreeNAS and use Microsoft instead, with CrashPlan or CloudBerry.
This saddens me... but I will keep googling.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Hot-swap works fine, assuming the underlying hardware supports it (not all enclosures support hot-swapping). And replication between 2 datasets is very straightforward to set up. It's the constant setup and teardown of the target dataset that I see as the biggest issue, and that is likely not going to get a lot of attention. If you are going to do this yourself, I would suggest scripting it: use the CLI to import the target pool, then use something like znapzend or zrep to handle the replication, and then export the target pool. The problem will come when your source dataset becomes larger than the destination dataset.
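A rough sketch of that import/replicate/export cycle, with a naive free-space check for the size problem mentioned. All names are invented, and a plain zfs send stands in where znapzend or zrep would do the real work:

```shell
#!/bin/sh
# Sketch of one backup cycle: import target pool, sanity-check sizes,
# replicate, export. Pool names "tank"/"backup8tb" are placeholders,
# and the check is naive (it compares total used vs. free space, which
# is pessimistic for incremental sends).

SRC="tank"
DST="backup8tb"

backup_cycle() {
    zpool import "$DST" || return 1

    # Refuse to start if the source holds more data than the target has free
    used=$(zfs get -Hp -o value used "$SRC")
    avail=$(zfs get -Hp -o value available "$DST")
    if [ "$used" -gt "$avail" ]; then
        echo "Source ($used bytes) exceeds free space on $DST ($avail bytes)" >&2
        zpool export "$DST"
        return 1
    fi

    snap="$SRC@cycle-$(date +%Y%m%d%H%M)"
    zfs snapshot -r "$snap" &&
    zfs send -R "$snap" | zfs recv -Fdu "$DST"
    status=$?

    # Always export afterwards so the drive can be pulled safely
    zpool export "$DST"
    return $status
}
```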
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I use hot-swap for my monthly backups. It has worked perfectly for the last 10 months, now that I have both instructions and a script to perform the backup. I even use ZFS for the backup drive so I can get some verification that the data is backed up correctly. With my RAID set being 4 x 4TB RAID-Z2, it will fit completely on a single 8TB disk.

BUT, it's all CLI work. ALL of it. Making it fully automated was not my goal and I have no need to make it so. Further, my hardware setup is different, as I use an eSATA enclosure for the backup drive.

Last, while I can publish the instructions and script, I hesitate to do so because I will not be supporting them. Meaning, if you can't modify them for your use, you really should not be using them.

All that said, I was planning on starting a thread on how to perform local disk backups. I just don't have the details complete yet.
 