Richard Durso
Explorer
- Joined: Jan 30, 2014
- Messages: 70
Running TrueNAS-12.0-U1.1
I'm using a Replication Task to replicate my encrypted periodic snapshots to a remote Ubuntu 20.04.1 system via SSH. This all works wonderfully... until I need to reboot the Ubuntu server. Upon reboot it prompts me for the password of every replicated dataset (which I don't want). The backups are in the "rpool" pool, which I think is part of the problem.
As a workaround, I set all the replicated child datasets of "rpool/backups" to "canmount=noauto". After a reboot the password issue was fixed: it only asks me for the ZFS-on-root password and boots. Then, to inspect the data when needed, I just run "zfs load-key rpool/backups/<dataset>" and "zfs mount rpool/backups/<dataset>" and I can view the backups.
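In case it helps anyone else, the workaround looks roughly like this (a sketch; the example dataset name "docker" is taken from the replication log below, and note that "canmount" is not an inherited property, so it has to be set on each child dataset individually rather than once on the parent):

```shell
# Set canmount=noauto on every child dataset under rpool/backups.
# tail -n +2 skips the first line, i.e. rpool/backups itself.
for ds in $(zfs list -H -r -o name rpool/backups | tail -n +2); do
    zfs set canmount=noauto "$ds"
done

# Later, to inspect one of the backups (example dataset from the log):
zfs load-key rpool/backups/docker   # prompts for the passphrase
zfs mount rpool/backups/docker
```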
This seemed great until I started getting system alerts from TrueNAS that the replications failed with "Error 'noauto'" (see below). For now I set "canmount=off" on the replicated datasets; all the replication tasks cleared and seem to have no problem with "off". I assume this is a bug and it just doesn't expect "noauto".
Code:
Error 'noauto'.

Logs
[2021/01/27 12:00:00] INFO [Thread-2739] [zettarepl.paramiko.replication_task__task_3] Connected (version 2.0, client OpenSSH_8.2p1)
[2021/01/27 12:00:00] INFO [Thread-2739] [zettarepl.paramiko.replication_task__task_3] Authentication (publickey) successful!
[2021/01/27 12:00:02] INFO [replication_task__task_3] [zettarepl.replication.run] For replication task 'task_3': doing push from 'main/docker' to 'rpool/backups/docker' of snapshot='auto-20210127.1200-1h' incremental_base='auto-20210127.1100-1h' receive_resume_token=None encryption=False
[2021/01/27 12:00:04] INFO [replication_task__task_3] [zettarepl.replication.run] For replication task 'task_3': doing push from 'main/docker/projects' to 'rpool/backups/docker/projects' of snapshot='auto-20210127.1200-1h' incremental_base='auto-20210127.1100-1h' receive_resume_token=None encryption=False
[2021/01/27 12:00:05] INFO [replication_task__task_3] [zettarepl.replication.run] For replication task 'task_3': doing push from 'main/docker/projects/gitea_data' to 'rpool/backups/docker/projects/gitea_data' of snapshot='auto-20210127.1200-1h' incremental_base='auto-20210127.1100-1h' receive_resume_token=None encryption=False
[2021/01/27 12:00:07] INFO [replication_task__task_3] [zettarepl.replication.run] For replication task 'task_3': doing push from 'main/docker/projects/nzb_data' to 'rpool/backups/docker/projects/nzb_data' of snapshot='auto-20210127.1200-1h' incremental_base='auto-20210127.1100-1h' receive_resume_token=None encryption=False
[2021/01/27 12:00:08] INFO [replication_task__task_3] [zettarepl.replication.run] For replication task 'task_3': doing push from 'main/docker/projects/trilium_data' to 'rpool/backups/docker/projects/trilium_data' of snapshot='auto-20210127.1200-1h' incremental_base='auto-20210127.1100-1h' receive_resume_token=None encryption=False
[2021/01/27 12:00:10] ERROR [replication_task__task_3] [zettarepl.replication.run] For task 'task_3' unhandled replication error
KeyError('noauto')
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/zettarepl/replication/run.py", line 154, in run_replication_tasks
  ... 6 more lines ...
    return [
  File "/usr/local/lib/python3.8/site-packages/zettarepl/dataset/list.py", line 33, in
    {
  File "/usr/local/lib/python3.8/site-packages/zettarepl/dataset/list.py", line 34, in
    property: parse_property(value, properties[property])
  File "/usr/local/lib/python3.8/site-packages/zettarepl/transport/zfscli/__init__.py", line 105, in parse_property
    return type(value)
  File "/usr/local/lib/python3.8/site-packages/zettarepl/transport/zfscli/parse.py", line 10, in zfs_bool
    return {
KeyError: 'noauto'
Now I have to do three steps instead of two to inspect the replicated copy:
Code:
zfs set canmount=on rpool/backups/<dataset>
zfs load-key rpool/backups/<dataset>
zfs mount rpool/backups/<dataset>
Then I can view the backups... and of course the 3 equivalent steps to undo the above so really 6 steps per dataset.
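For reference, the undo half looks like this (a sketch; setting "canmount" back to "off" rather than "noauto", since "noauto" is what trips up the replication task, and again using the "docker" dataset from the log as an example):

```shell
zfs unmount rpool/backups/docker      # unmount the dataset
zfs unload-key rpool/backups/docker   # discard the loaded encryption key
zfs set canmount=off rpool/backups/docker
```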