Replication's SSH key used to wipe remote FN system? (temp auth token also)

SMnasMAN

Contributor
Joined
Dec 2, 2018
Messages
177
As FreeNAS-to-FreeNAS replication uses SSH and SSH keys (I think even when using "semi-auto" with a temporary auth token), couldn't a hacker who gains root access to your main FN server (the replication source) then use that same SSH key to remote into your replication target and wipe your replicated data?

Do I have the above correct?

(And if so, could using the "Dedicated User" option of FN's replication mitigate this scenario? Or does any user used for replication require the privileges to read/write/wipe the remote datastores?)
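[Editor's note: one partial mitigation, independent of FreeNAS itself and assuming a plain OpenSSH setup on the target, is to lock the replication key down in the target's authorized_keys with a forced command and option restrictions, so a stolen key cannot open an interactive shell. The dataset name and key below are invented for illustration, and FreeNAS's own replication tooling may expect to run more than one command over the session, so treat this strictly as a sketch:]

```
# /root/.ssh/authorized_keys on the replication TARGET (hypothetical entry)
# The key may only run the forced command; no shell, pty, or forwarding.
command="zfs receive -F backup/data",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... replication@main-fn
```

[Note this is only partial protection: an attacker holding the key could still push a destructive stream into `zfs receive -F`, which rolls back the target dataset. It stops arbitrary remote commands, not abuse of the replication path itself.]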

thanks
 

SMnasMAN

Contributor
Joined
Dec 2, 2018
Messages
177
Anyone have any input on this? I would think this is a pretty important concern, since protection against ransomware / RAT-type attacks is often a use case for FN replication.
tks
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
If that is your concern, set up the replication system to pull from the source and secure it adequately.
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
I seem to remember talk about this several months ago, and it was suggested to use pull vs push.
 

SMnasMAN

Contributor
Joined
Dec 2, 2018
Messages
177
Thanks for the two replies. I did a few searches on this prior to posting but could not find anything related (I must have missed the other threads).

So in terms of using ZFS/FN replication in a PULL setup vs a push: is this something I would have to script and/or do via the CLI (and cron)? I'm asking because I don't see a way to do a PULL setup via the FN GUI.

(Also, I'm guessing I am correct about the security concern, i.e. the sync source having root access (via its replication pubkey) to the sync destination box, right? That's not a criticism of FN, as I know this is just how public/private key SSH auth works.)

(For almost all of my non-ZFS Linux systems, an rsync PULL is exactly what I use, for these same security reasons regarding a PUSH. However, with this FN box and its large pools, using rsync is not an option.)

thanks
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Yes, rsync is a pain for a large number of files/data.

The way I see it, you are not concerned about security; after all, in your example the attacker would already have elevated access on your main NAS instance, owning all your (unencrypted) data. Your concern is with the way you are designing your backup process and how to mitigate the permanent loss of all your data.

There is really only one good way to prevent this, and that is with off-site backups on separate (immutable?) media. Of course that requires $$, planning, and logistics. If that is not possible, the next best thing would be scripting a pull, maybe even via another non-root user with sudo access to certain commands. This setup will really only differ from your already-implemented rsync scripts by using SSH and zfs commands instead of rsync.
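[Editor's note: a minimal sketch of the pull approach described above. The hostnames, dataset names (`tank/data` on the source, `backup/data` on the backup box), and the `repl` user are invented for illustration. ZFS's own `zfs allow` delegation can stand in for the sudo arrangement. The backup box initiates the connection, so the source never holds a key that can log in to the backup box:]

```
# On the SOURCE, one-time: delegate only the rights the pull needs,
# to a non-root user (instead of sudo rules):
#   zfs allow -u repl send,snapshot,hold tank/data

# On the BACKUP box (e.g. run from cron): snapshot remotely, then pull.
SNAP="pull-$(date +%Y%m%d%H%M)"
ssh repl@source-fn "zfs snapshot tank/data@${SNAP}"
ssh repl@source-fn "zfs send tank/data@${SNAP}" | zfs receive -F backup/data
```

[This sketch does a full send every run; a real script would track the last snapshot common to both sides and use `zfs send -i` for incrementals.]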
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
So in terms of using ZFS/FN replication in a PULL setup vs a push: is this something I would have to script and/or do via the CLI (and cron)? I'm asking because I don't see a way to do a PULL setup via the FN GUI.
It is in the 11.3 tree
[attached screenshot of the 11.3 replication GUI]
 

SMnasMAN

Contributor
Joined
Dec 2, 2018
Messages
177
It is in the 11.3 tree

Holy @$&%Y!! That is awesome! Thanks blueether (and iXsystems!). I don't mess with the RC / test builds much, so I was unaware this is an upcoming new feature in 11.3. (I did read through all the new items for 11.3 on Jira, but I must have missed this one.)


The way I see it you are not concerned about security, after all, in your example, the attacker would already have elevated access on your main nas instance, owning all your (unencrypted) data. ... This setup will really only diff from your already implemented rsync scripts by using ssh&zfs commands instead of rsync.
Thanks styno, although I'm not clear on why you keep bringing up rsync. I'm not using rsync, and can't in my setup; this thread is about native ZFS replication. Also, just because an attacker would have elevated access on my main NAS/FN instance does not mean they should then also be able to access my second box (the replication destination). This is one of the most important reasons I'm setting up/using replication.
 