icyy · Cadet · Joined: Aug 27, 2020 · Messages: 4
Hello,
I have noticed some strange things about this replication process.
Let's say I have the following child dataset nested inside a dataset:
backup/company/databases 928G 630M 928G 0% /mnt/backup/company/databases
backup is the base volume. I have noticed that the datasets and directories are correctly created on the target if I do not pre-create them on the replication target and specify only the root (not any deeper level) as the replication destination. So replicating the dataset above would be:
backup/company/databases => destination backup
This works fine: it creates company/databases on the target, and I can list all the snapshots underneath with "zfs list -t snapshot".
However, when I run "df -h" on the replication target I don't see anything next to the backup mount point, and only 88 KB appears used overall, which is not true. Can't this replicate in a way that everything is exactly the same on both machines: the mount points, the datasets, the volumes?
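For what it's worth, with "zfs receive -F -d" the data usually is there, but the received datasets often arrive unmounted, so "df -h" shows almost nothing while ZFS itself knows the real usage. A sketch of the checks I would run on the target (dataset names taken from the example above; adjust to your pool):

```shell
# Space accounting lives in ZFS, not in df; this shows the real usage per dataset:
zfs list -o name,used,avail,mountpoint -r backup

# Check whether the replicated child dataset is actually mounted, and where:
zfs get mounted,mountpoint backup/company/databases

# If it shows mounted=no, mounting everything should make df -h match the source:
zfs mount -a
```

If the mountpoint property came across wrong, "zfs set mountpoint=/mnt/backup/company/databases backup/company/databases" can fix it before mounting.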
My other question is about how tolerant this replication is of slow links that have occasional service interruptions, like DSL connections.
As I noticed, it uses a built-in ZFS feature to do this:
csh -c /usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'backup'
My experience with rsync is that you have to wrap it in for and while loops in this case, and even that is not enough, because sometimes it just gets stuck and hangs forever.
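For what it's worth, newer ZFS versions support resumable send/receive, which is aimed at exactly this flaky-link situation; whether your FreeNAS release exposes it in the GUI is another matter. A hedged sketch of the manual mechanism (the snapshot name @snap1 and the "target" host are placeholders from the example above):

```shell
# Receive with -s so an interrupted stream leaves a resume token on the target:
zfs send backup/company/databases@snap1 | ssh target zfs receive -s -F backup/company/databases

# After an interruption, read the resume token stored on the target dataset...
ssh target zfs get -H -o value receive_resume_token backup/company/databases

# ...and restart the send from where it stopped, using that token:
zfs send -t <token-from-previous-step> | ssh target zfs receive -s -F backup/company/databases
```

With this, a dropped DSL connection costs you only the unsent remainder of the stream rather than the whole transfer.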
My third question is related to security. As far as I can tell, there is no way for FreeNAS to pass the GELI key to the target and store it only in the target's memory before running the replication. This is a security risk: if the target server is compromised, that is basically no better than having no encryption at all on the drives, because the target holds the GELI key it uses to mount the main volume at boot. Has anyone found a workaround for this?