SOLVED Backing up FreeNAS to another FreeNAS

Status
Not open for further replies.

SilverJS

Patron
Joined
Jun 28, 2011
Messages
255
Gents,

Disclaimer: I did search for this. I came up with a thread from 2011 that seemed to peter out a bit early and assumed more knowledge than I have.

So - I have this FreeNAS box that has been wonderful; I've had it since 2011. I have also since bought 3 x 3 TB hard drives that I had in a Windows machine as a single spanned volume, to act as a backup. I know everyone here will find this rather appalling - and I agreed, so this weekend I dug out some older hardware, bought myself a reasonable case, and set out to build a decent backup server. Specs, for those interested:

MSI 785GM-P45 Mobo, AMD Phenom X3 720 processor, 8 GB RAM, those same 3 X 3 TB drives (Hitachi 5K3000, as I recall). Seasonic SS-300ET power supply. And yes, Cyberjock, I DID read the Hardware Recommendations thread you wrote - I stand warned about AMD hardware. =) (My other box has also been running on AMD hardware all this time.) It's running 9.3, and by the way, please allow me this opportunity to congratulate those who set up the wizard - it works a charm.

So anyway - now that I have this thing running, what do you guys reckon is the best way to back my server up to it? The other thread suggested rsync over SSH, but is that really necessary given that both boxes are local? Is the Replication I read about in the documentation the best way?

My ideal solution would be something that automates back-ups - say, every 24 hours or so. I'm sure this is possible, right?

Cheers!
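(For context on what an automated 24-hour backup amounts to under the hood: FreeNAS's periodic-snapshot and replication tasks boil down to a scheduled zfs snapshot followed by a zfs send piped over SSH. The sketch below only prints the commands it would run; the dataset names, target address, and snapshot lifetime are placeholder assumptions, not values from this thread.)

```shell
#!/bin/sh
# Sketch of a scheduled snapshot-and-replicate cycle, roughly what the
# FreeNAS GUI's periodic-snapshot + replication tasks do. All names here
# are assumed placeholders.

SRC="tank/data"          # dataset to back up (assumed name)
DEST="backup/tank-data"  # dataset on the backup box (assumed name)
HOST="192.168.1.50"      # backup box's address (assumed)

# Name the snapshot the way FreeNAS does: auto-YYYYmmdd.HHMM-<lifetime>
SNAP="auto-$(date +%Y%m%d.%H%M)-2w"

# Dry-run: print the commands instead of running them.
# (The GUI's replication task switches to incremental sends after the
# first full send.)
echo "would run: zfs snapshot ${SRC}@${SNAP}"
echo "would run: zfs send ${SRC}@${SNAP} | ssh ${HOST} zfs receive -F ${DEST}"
```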
 

SilverJS

Patron
Joined
Jun 28, 2011
Messages
255
Well, my six-week-old couldn't sleep, so now, neither can I. =)

So, thanks for the reply - it was indeed what I thought, but I just wanted to make sure in case I was missing something. In any case, I got it done. I followed the documentation, and props all around to those who wrote it. (My PUSH box is 9.2, so there were a few very minor differences between the dialog boxes in the documentation and what I saw, but I don't think any of it is a big deal, and I'm sure things are spot-on for 9.3.)

Not too sure how I'll actually be able to confirm that it worked (I'm not fully sure of the Remote Dataset path yet...), but thanks for the help!
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
One thing you don't want to do is to recursively replicate the whole pool to the root of the pool on the other box. It is best to make a new dataset on the receiving system and replicate into that. One reason is that the receiving system will have at least one dataset with the same name as on the sending box (.system) and this will be partially overwritten.
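(The dedicated-dataset setup rogerh describes is a single zfs create on the receiving box. The sketch below only prints the command; it borrows the Backups_Pool/Replication names SilverJS uses later in the thread, so treat them as an illustration rather than a prescription.)

```shell
#!/bin/sh
# On the receiving (PULL) box: make a dedicated dataset to replicate
# into, rather than receiving at the pool root.
POOL="Backups_Pool"
TARGET="${POOL}/Replication"

# Dry-run: print the command instead of running it.
echo "would run: zfs create ${TARGET}"
# Replicated datasets then land under Backups_Pool/Replication/<name>,
# so the receiving box's own .system dataset is never overwritten.
```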
 

SilverJS

Patron
Joined
Jun 28, 2011
Messages
255
OK, so - I've gotten two e-mails so far from my main box with the subject "Replication Failed!". So I guess we know how well that went. =)

Anyway, I'll start following the Troubleshooting Replication steps in the documentation. Again, from the little I've perused, fantastic work to all involved.

EDIT: Tried the first step, which was to troubleshoot SSH. I was getting a password request, which is a no-no. I re-pasted the key - I had edited out the "ssh rca" or some such at the beginning, and the "Key for Replication" bit at the end - this time I pasted it as-is, and it worked. But I was still getting the same messages as before in the console.
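(The failure mode above - a password prompt because the pasted key was trimmed - can be caught with a quick sanity check. The sketch below uses a placeholder key line, not a real key, and only prints the SSH test command; the key path and IP are the ones the FreeNAS docs' troubleshooting step uses earlier in this thread.)

```shell
#!/bin/sh
# An authorized_keys entry must keep its type prefix (e.g. "ssh-rsa")
# as the first whitespace-separated field; trimming it breaks key auth
# and SSH falls back to asking for a password.
KEYLINE="ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB... Key for replication"

if [ "${KEYLINE%% *}" = "ssh-rsa" ]; then
    echo "key line looks intact"
else
    echo "key line is missing its type prefix"
fi

# Then confirm key-based login works without a password prompt.
# Dry-run: print the command instead of running it.
echo "would run: ssh -i /data/ssh/replication 192.168.2.6 echo ok"
```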
 

SilverJS

Patron
Joined
Jun 28, 2011
Messages
255
Need a bit of help with the troubleshooting steps, please. In the sample command:

zfs send local/data@auto-20110922.1753-2h | ssh -i /data/ssh/replication 192.168.2.6 zfs receive local/data@auto-20110922.1753-2h

What does local/data mean? In the preamble, the docs say that the dataset for replication on the PULL station is called remote, but remote is nowhere to be found in the sample command. In the second part, I tried replacing local/data with Backups_Pool/Replication (the path to the dataset I created for replication), and in the first part I replaced local/data with Raidz2. Then the error was "could not create .ssh directory" or some such.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Need a bit of help with the troubleshooting steps, please. In the sample command:

zfs send local/data@auto-20110922.1753-2h | ssh -i /data/ssh/replication 192.168.2.6 zfs receive local/data@auto-20110922.1753-2h

What does local/data mean? In the preamble, the docs say that the dataset for replication on the PULL station is called remote, but remote is nowhere to be found in the sample command. In the second part, I tried replacing local/data with Backups_Pool/Replication (the path to the dataset I created for replication), and in the first part I replaced local/data with Raidz2. Then the error was "could not create .ssh directory" or some such.

local/data@auto-20110922.1753-2h is the complete name of your snapshot being sent over. local/data refers to the dataset, then the stuff after the @ is the snapshot name for that dataset.
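(ser_rhaegar's dataset-versus-snapshot split can be shown with plain shell parameter expansion, using the exact snapshot name from the docs' sample command: everything before the @ is the dataset, everything after it is the snapshot name.)

```shell
#!/bin/sh
# Split a full ZFS snapshot name into its dataset and snapshot parts.
FULL="local/data@auto-20110922.1753-2h"

DATASET="${FULL%@*}"   # strip from the @ to the end  -> the dataset
SNAPSHOT="${FULL#*@}"  # strip up to and incl. the @  -> the snapshot name

echo "dataset:  $DATASET"    # local/data
echo "snapshot: $SNAPSHOT"   # auto-20110922.1753-2h
```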
 

SilverJS

Patron
Joined
Jun 28, 2011
Messages
255
That's what I figured, thanks. I still had an error though (assuming my syntax was all correct - when I get back home, I'll paste my exact string so you guys can vet it), but I'm not sure what it was.

More to follow.
 

SilverJS

Patron
Joined
Jun 28, 2011
Messages
255
OK, so things appear to be going well. For one thing, the "Replication has failed!" e-mails have stopped coming. =) But more importantly, regardless of errors I stated earlier when trying to manually initiate replication, my Push box has been at ~100 MB/s TX and the Pull box has been at the same, RX'ing. I'm up to about 360 GB right now (of a total of 4.1 TB), and things seem to be going well. So, I guess the documentation was accurate. =)

But, a few more questions :

1. Is there any way I can SEE this replicated data? I mean - I've got a CIFS share that points to the Pull box, but it points to another dataset. Is it safe for me to create a CIFS share that points to the replication dataset, to at least see and monitor the progress?

2. Although I've looked through the documentation, I seem not to be able to find any description of what to do should I ever need to call on the Pull box to restore the Push one, if a catastrophe occurs. Anybody care to point me in the right direction?

Cheers!
 

Oko

Contributor
Joined
Nov 30, 2013
Messages
132
1. Is there any way I can SEE this replicated data? I mean - I've got a CIFS share that points to the Pull box, but it points to another dataset. Is it safe for me to create a CIFS share that points to the replication dataset, to at least see and monitor the progress?

You have not read much about replication! Try the following test: create two virtual instances of FreeNAS (or FreeBSD, for that matter) in VirtualBox. Make the interfaces internal so the instances can see each other. Start a remote replication and see whether you see anything on the target dataset.

Hint: you will see nothing until the replication has successfully finished. That is how replication works - it is all or nothing. You don't get to see the progress.

2. Although I've looked through the documentation, I seem not to be able to find any description of what to do should I ever need to call on the Pull box to restore the Push one, if a catastrophe occurs. Anybody care to point me in the right direction?

Cheers!

If the replication is a copy of your original pool, you can just replicate from the Pull box to the new computer after the Push box is dead.
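(Restoring, as Oko says, is just replication run in the other direction: send the newest snapshot held on the PULL box back to the rebuilt machine. The sketch below only prints the command it would run; the dataset names, snapshot name, and IP are placeholder assumptions, not values confirmed in this thread.)

```shell
#!/bin/sh
# Disaster recovery as reverse replication: PULL box sends its copy
# back to the rebuilt PUSH box. All names here are assumed.
SRC="Backups_Pool/Replication/Raidz2"  # where the backup copy lives (assumed)
SNAP="auto-20150101.0000-2w"           # newest snapshot on the PULL box (assumed)
NEWBOX="192.168.2.10"                  # rebuilt PUSH box's address (assumed)

# -R sends the dataset with its descendants and properties;
# -F lets the receiver roll back to accept the stream.
# Dry-run: print the command instead of running it.
echo "would run: zfs send -R ${SRC}@${SNAP} | ssh ${NEWBOX} zfs receive -F Raidz2"
```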
 

SilverJS

Patron
Joined
Jun 28, 2011
Messages
255
Disregard.
 