How can I set up manual replication without using the GUI?

Status
Not open for further replies.

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
Hello,

I currently have a PowerShell script that runs nightly as part of my VMware maintenance and creates a nightly snapshot of my FreeNAS volume. I would like to fire off a replication of that snapshot to a secondary FreeNAS box as part of that process. I understand how to make replication work within the GUI by setting up an automated snapshot task and then a replication task, but I would like to do this without the automated snapshot task, since I am creating the snapshots on my own.

The command I am running from the console is:

zfs send mnt/vmds01@nightly.0 | ssh -i /data/ssh/replication 192.168.200.14 zfs receive mnt/rn_vmds01@nightly.0

I am pretty sure that would work if I had the SSH keys portion set up correctly. How can I manually set up the SSH keys so that I can log in to the PULL FreeNAS box from the PUSH FreeNAS box's console without using the GUI?
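For reference, a manual key setup along these lines should work. This is a generic OpenSSH sketch, not anything FreeNAS-specific; the key path matches the one used in the commands in this thread, and the root login on PULL is an assumption:

```shell
# Generate a dedicated replication keypair on PUSH with no passphrase.
# /data/ssh/replication is the key path used elsewhere in this thread.
ssh-keygen -t rsa -b 2048 -N "" -f /data/ssh/replication

# Install the public key on PULL (192.168.200.14) so key-based root
# login works; you will be prompted for root's password this one time.
cat /data/ssh/replication.pub | ssh root@192.168.200.14 \
    'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys && \
     chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys'
```

After that, `ssh -i /data/ssh/replication 192.168.200.14` should log in without a password prompt.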

ED7
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I believe if you put the keys in the GUI it should work for you even though you are initiating the transfer manually.
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
I thought of that, but I can't enter the replication key because I don't have an automated snapshot task. Unless I am missing something? Can you be more specific as to where I put the keys?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Does section 6.2.1 of the manual not do what you want it to do? It seems that 6.2.1 configures the PULL machine with the SSH key. Am I confused?
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
Yeah, that's what I am following. It *kinda* works, but it always asks me to verify the RSA key. It never asks me for the password, but it always asks me to verify the key. See the log output:

ssh -vv -i /data/ssh/replication 192.168.201.102
OpenSSH_5.4p1_hpn13v11 FreeBSD-20100308, OpenSSL 0.9.8y 5 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.201.102 [192.168.201.102] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
Could not create directory '/root/.ssh'.
debug2: key_type_from_name: unknown key type '-----BEGIN'
debug2: key_type_from_name: unknown key type '-----END'
debug1: identity file /data/ssh/replication type 1
debug1: identity file /data/ssh/replication-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.4p1_hpn13v11 FreeBSD-20100308
debug1: match: OpenSSH_5.4p1_hpn13v11 FreeBSD-20100308 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.4p1_hpn13v11 FreeBSD-20100308
debug2: fd 4 setting O_NONBLOCK
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v00@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit: none,zlib@openssh.com,zlib
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit: none
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit:
debug2: kex_parse_kexinit: first_kex_follows 0
debug2: kex_parse_kexinit: reserved 0
debug2: mac_setup: found hmac-md5
debug1: kex: server->client aes128-ctr hmac-md5 none
debug2: mac_setup: found hmac-md5
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 119/256
debug2: bits set: 502/1024
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug2: no key of type 0 for host 192.168.201.102
debug2: no key of type 2 for host 192.168.201.102
The authenticity of host '192.168.201.102 (192.168.201.102)' can't be established.
RSA key fingerprint is 9e:b5:48:d5:8c:37:da:bc:2c:6d:8f:e5:e6:e2:e1:a6.
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/root/.ssh/known_hosts).

- - - Updated - - -

It's because /root/.ssh doesn't exist on PUSH; the filesystem it lives on is write-protected. My guess is that setting up the replication task in the GUI somehow makes that directory writable, creates the known_hosts file, and places the RSA host key from PULL in there. Now that I have it figured out, how do I make the directory writable so that I can write my own known_hosts file that will still be there after a reboot?
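One way to seed the file by hand, sketched here under the assumption that the boot filesystem can be remounted read/write (as it turns out later in this thread), is to fetch PULL's host key with ssh-keyscan. Verify the fingerprint against what PULL shows on its own console before trusting it:

```shell
# Remount the normally read-only boot filesystem read/write.
mount -uw /

# Create root's .ssh directory and append PULL's RSA host key;
# 192.168.201.102 is the PULL box from the log above.
mkdir -p /root/.ssh
chmod 700 /root/.ssh
ssh-keyscan -t rsa 192.168.201.102 >> /root/.ssh/known_hosts

# Return the filesystem to read-only.
mount -ur /
```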
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Check out the FAQ for the mount commands to make the USB stick writable. However, keep in mind that the files you modify might be regenerated on reboot so even if you can save the changes you want the GUI might overwrite them on next reboot. It'll take a little testing on your part to figure out how/if you can get this to work how you want.
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
Yeah, sorry, I did get that figured out. Everything works great now. I had to use mount -uw / to make the filesystem writable, then initiated my replication job and chose yes to store the key. This time the known_hosts file was created in /root/.ssh with the host key from the PULL server. I then used mount -ur / to make the filesystem read-only again.

Everything is working great now; however, I do think /root/.ssh/known_hosts will cease to exist upon reboot. Is there any way to inject that into the boot image or automatically re-create it on reboot?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Cheap and dirty....

You could put the file on your zpool somewhere and have a cron job that copies your version of the file back every time you boot. There's a specific parameter (@reboot) for boot-time-only cron jobs.
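Something like the following would do it. The pool path here is hypothetical, and @reboot is the standard cron keyword for run-once-at-boot jobs, assuming FreeNAS's cron honors it:

```shell
# Save a master copy of the file somewhere persistent on the pool
# (/mnt/vmds01/sysconfig is a made-up example path).
mkdir -p /mnt/vmds01/sysconfig
cp /root/.ssh/known_hosts /mnt/vmds01/sysconfig/known_hosts

# Then add a boot-time entry to root's crontab (crontab -e):
# @reboot mkdir -p /root/.ssh && cp /mnt/vmds01/sysconfig/known_hosts /root/.ssh/known_hosts
```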
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Everything is working great now; however, I do think /root/.ssh/known_hosts will cease to exist upon reboot. Is there any way to inject that into the boot image or automatically re-create it on reboot?

That file *should* persist across reboots. It probably won't persist across FreeNAS upgrades, though.

Each time I upgrade my FreeNAS machine, I scp certain files back to the newly upgraded machine. I imagine you probably don't upgrade often enough for this to be a problem. On upgrade, either confirm the remote host's identity manually or scp the known_hosts file back.
 

electricd7

Explorer
Joined
Jul 16, 2012
Messages
81
SOOOO close! I am now running my script for the second time, as the first run had to sync 2.4 TB, which took a while. This time, when trying to send the latest snapshot, I get an error on the zfs send:

[root@freenas01] ~# zfs send vmds01@nightly.0 | ssh -i /data/ssh/replication 192.168.201.102 zfs receive -F rn_vmds01@nightly.0
cannot receive new filesystem stream: destination has snapshots (eg. rn_vmds01@nightly.1)
must destroy them to overwrite it
warning: cannot send 'vmds01@nightly.0': Broken pipe

Obviously the far end has a snapshot which I sent yesterday, called nightly.1 (actually it was called nightly.0, but I renamed it to nightly.1 before trying to send the latest nightly.0 snap). What am I doing wrong?
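For what it's worth, that error arises because a full (non-incremental) `zfs send` cannot be received into a filesystem that already has snapshots. After the first full send, subsequent runs need an incremental send. A sketch, assuming the previous snapshot still exists on PUSH as vmds01@nightly.1 and that it is the same snapshot (ZFS matches by GUID, not name) as rn_vmds01@nightly.1 on PULL:

```shell
# Send only the delta between yesterday's snapshot and today's.
# -i makes the send incremental; the receiving side must already
# hold the @nightly.1 snapshot for the delta to apply.
zfs send -i vmds01@nightly.1 vmds01@nightly.0 | \
    ssh -i /data/ssh/replication 192.168.201.102 \
    zfs receive rn_vmds01
```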
 