problem with setting up replication tasks?

nktech1135

Dabbler
Joined
Feb 26, 2021
Messages
18
Hi.
I'm trying to set up a replication task for some ZFS pools on another machine. I set up the SSH keypair and connection successfully.
I think my naming schema is correct as well, but when I run the task I get the following. Note: data1 is the destination dataset.
Code:
[2021/03/09 03:08:49] INFO     [Thread-4] [zettarepl.paramiko.replication_task__task_2] Connected (version 2.0, client OpenSSH_7.9p1)
[2021/03/09 03:08:49] INFO     [Thread-4] [zettarepl.paramiko.replication_task__task_2] Authentication (publickey) successful!
[2021/03/09 03:08:49] INFO     [replication_task__task_2] [zettarepl.replication.run] No snapshots to send for replication task 'task_2' on dataset 'dpool/subvol-100-disk-0'
[2021/03/09 03:08:49] ERROR    [replication_task__task_2] [zettarepl.replication.run] For task 'task_2' unhandled replication error DatasetDoesNotExistException(1, "cannot open 'data1/cdpve/dpool/subvol-100-disk-0': dataset does not exist\n")
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/zettarepl/replication/run.py", line 158, in run_replication_tasks
    run_replication_task_part(replication_task, source_dataset, src_context, dst_context, observer)
  File "/usr/lib/python3/dist-packages/zettarepl/replication/run.py", line 248, in run_replication_task_part
    mount_dst_datasets(dst_context, target_dataset, replication_task.recursive)
  File "/usr/lib/python3/dist-packages/zettarepl/replication/run.py", line 701, in mount_dst_datasets
    dst_datasets = list_datasets_with_properties(dst_context.shell, dst_dataset, recursive, {
  File "/usr/lib/python3/dist-packages/zettarepl/dataset/list.py", line 30, in list_datasets_with_properties
    output = shell.exec(args)
  File "/usr/lib/python3/dist-packages/zettarepl/transport/zfscli/exception.py", line 29, in __exit__
    raise DatasetDoesNotExistException(exc_val.returncode, exc_val.stdout) from None
zettarepl.transport.zfscli.exception.DatasetDoesNotExistException: cannot open 'data1/cdpve/dpool/subvol-100-disk-0': dataset does not exist

Shouldn't TrueNAS auto-create missing destination datasets?
The source does contain the correct snapshots.
Code:
root@cdpve:~# zfs list -t snapshot
NAME                                                USED  AVAIL     REFER  MOUNTPOINT
dpool@cdpve-20210309060149                           56K      -      166G  -
dpool/subvol-100-disk-0@cdpve-20210309060149       2.09M      -     4.11T  -
dpool/subvol-103-disk-0@cdpve-20210309060149       8.81M      -     1.08G  -
dpool/subvol-172-disk-0@cdpve-20210309060149       72.2M      -     3.30G  -
dpool/subvol-179-disk-0@cdpve-20210309060149       58.5M      -      170G  -
dpool/vm-107-disk-0@cdpve-20210309060149           73.1M      -     32.3G  -
dpool/vm-122-disk-0@cdpve-20210309060149           12.1M      -     4.31G  -
mail@mail2-20200921135801                            56K      -       96K  -
mail@mail2-20200922085941                            56K      -       96K  -
mail@mail2-20200923131745                            56K      -       96K  -
mail@mail2-20200925053632                            56K      -       96K  -
mail@cdpve-20210309060149                            56K      -       96K  -
mail/vm-108-disk-0@mail2-20200921135801            1.50G      -     1.34T  -
mail/vm-108-disk-0@mail2-20200921140156            6.09M      -     1.34T  -
mail/vm-108-disk-0@mail2-20200921140530            36.9M      -     1.34T  -
mail/vm-108-disk-0@mail2-20200922051240            1.51G      -     1.34T  -
mail/vm-108-disk-0@mail2-20200922085941            1.70G      -     1.35T  -
mail/vm-108-disk-0@mail2-20200923131745            8.97M      -     1.35T  -
mail/vm-108-disk-0@mail2-20200925053632            24.1M      -     1.35T  -
mail/vm-108-disk-0@cdpve-20210309060149            1.54G      -     1.78T  -
rpool@cdpve-20210309060149                            0B      -      104K  -
rpool/ROOT@cdpve-20210309060149                       0B      -       96K  -
rpool/ROOT/pve-1@cdpve-20210309060149              8.66M      -     24.5G  -
rpool/data@cdpve-20210309060149                       0B      -       96K  -
rpool/data/base-106-disk-0@cdpve-20210309060149       0B      -     1.12G  -
rpool/data/subvol-100-disk-0@before_upgrade         989M      -     2.05G  -
rpool/data/subvol-100-disk-0@cdpve-20210309060149  25.1M      -     2.33G  -
rpool/data/subvol-113-disk-0@cdpve-20210309060149   472K      -     1.01G  -
rpool/data/vm-106-cloudinit@cdpve-20210309060149      0B      -       76K  -
rpool/data/vm-107-cloudinit@cdpve-20210309060149      0B      -       76K  -
rpool/data/vm-108-cloudinit@cdpve-20210309060149      0B      -       76K  -
rpool/data/vm-108-disk-0@cdpve-20210309060149       613M      -     25.4G  -
rpool/data/vm-110-disk-0@cdpve-20210309060149      33.7M      -     44.2G  -

Is this a bug, or am I missing something?
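For what it's worth, the path named in the error can be confirmed directly on the destination box. This is just a sanity check, and the destination hostname below is only a placeholder:
Code:
# does the parent dataset hierarchy exist on the destination?
root@truenas:~# zfs list -o name -r data1
# the full target path from the error message:
root@truenas:~# zfs list data1/cdpve/dpool/subvol-100-disk-0
cannot open 'data1/cdpve/dpool/subvol-100-disk-0': dataset does not exist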
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You haven't specified your version, so this is just a guess, but there was at least one bug (I think it's now fixed) that would produce the issue you're describing. Perhaps you could update to rule it out... or provide more information so we can help clarify.
 

nktech1135

Dabbler
Joined
Feb 26, 2021
Messages
18
You haven't specified your version, so this is just a guess, but there was at least one bug (I think it's now fixed) that would produce the issue you're describing. Perhaps you could update to rule it out... or provide more information so we can help clarify.
I'm on 21.02, the latest version of SCALE.
Is there a package or something that I should maybe update?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The bug I'm talking about was actually from the FreeNAS/TrueNAS CORE thread, but it may be similar.

No idea if it was carried across or fixed for SCALE. I admit I had missed that this was in the SCALE subforum (I browse everything via RSS, so I mostly look at subject lines and content).
 

nktech1135

Dabbler
Joined
Feb 26, 2021
Messages
18
Well, this appears to be my day for making stupid mistakes. As it turns out, I had a typo in the naming schema.
I'm wondering, though: wouldn't it be possible to produce a more descriptive error when this happens?
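For anyone else who runs into this: the naming schema is a strftime-style pattern and has to reproduce the snapshot names exactly, so a quick way to catch a typo is to render the schema on the source host and compare it against the actual snapshot names. The schema below is only an example of what mine should have looked like:
Code:
# render the configured naming schema with date(1); a typo shows up immediately
root@cdpve:~# date +"cdpve-%Y%m%d%H%M%S"
# list the real snapshot names on the source dataset for comparison
root@cdpve:~# zfs list -t snapshot -o name dpool/subvol-100-disk-0
NAME
dpool/subvol-100-disk-0@cdpve-20210309060149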
 