Replication to wrong target

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
On both source and destination I have a pool, plus 5 datasets. I created the datasets on the destination manually so they would not be read-only (according to previous posts). Also, testing worked fine when I only had the main pool with one dataset on the source and one dataset on the destination.

Source:
  • store1/dataset1
  • store1/host_storage
  • store1/pacs
  • store1/pp_storage
  • store1/vv_storage
Destination:
  • store1/dataset1_snaps
  • store1/host_storage_snaps
  • store1/pacs_snaps
  • store1/pp_storage_snaps
  • store1/vv_storage_snaps
I have various snapshot tasks for each source dataset (working as expected). I have 5 replication tasks, one for each of the source datasets. For each replication task, I have:
Pool/Dataset: store1/dataset1​
Remote ZFS Pool/Dataset: store1/dataset1_snaps​

and so on for each pair - essentially, the snapshots for each source dataset are supposed to go to its corresponding remote "datasetx_snaps" dataset. Only one of the snapshot tasks is currently creating snapshots (which IS correct, given the timing), but its snapshots are going to the other remote datasets as well...

Why are the "store01/host_storage@auto-20190212.xxxx-xx" snapshots going not only to "store01/host_storage_snaps", but also to "store01/pacs_snaps", "store01/pp_storage_snaps", and "store01/vv_storage_snaps"?

Code:
Feb 12 16:55:02 pod-11pri /autorepl.py: [tools.autorepl:291] Checking dataset store01/host_storage
Feb 12 16:55:02 pod-11pri /autorepl.py: [tools.autorepl:336] ds = host_storage_snaps/host_storage, remotefs = store01/host_storage_snaps
Feb 12 16:55:03 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p store01/host_storage@auto-20190212.1552-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/host_storage_snaps' && echo Succeeded"
Feb 12 16:55:05 pod-11pri uwsgi: [middleware.notifier:178] Popen()ing: /bin/ps -a -x -w -w -o pid,command | /usr/bin/grep '^ *43451'
Feb 12 16:55:05 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded
Feb 12 16:55:05 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i store01/host_storage@auto-20190212.1552-1d store01/host_storage@auto-20190212.1557-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/host_storage_snaps' && echo Succeeded"
Feb 12 16:55:06 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded
Feb 12 16:55:06 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i store01/host_storage@auto-20190212.1557-1d store01/host_storage@auto-20190212.1602-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/host_storage_snaps' && echo Succeeded"
Feb 12 16:55:07 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded

<trim>

Feb 12 16:55:16 pod-11pri /autorepl.py: [tools.autorepl:291] Checking dataset store01/pacs
Feb 12 16:55:17 pod-11pri /autorepl.py: [tools.autorepl:336] ds = pacs_snaps/pacs, remotefs = store01/pacs_snaps
Feb 12 16:55:18 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p store01/host_storage@auto-20190212.1552-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/pacs_snaps' && echo Succeeded"
Feb 12 16:55:19 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded
Feb 12 16:55:19 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i store01/host_storage@auto-20190212.1552-1d store01/host_storage@auto-20190212.1557-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/pacs_snaps' && echo Succeeded"
Feb 12 16:55:20 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded

<trim>

Feb 12 16:55:30 pod-11pri /autorepl.py: [tools.autorepl:291] Checking dataset store01/pp_storage
Feb 12 16:55:30 pod-11pri /autorepl.py: [tools.autorepl:336] ds = pix_storage_snaps/pp_storage, remotefs = store01/pp_storage_snaps
Feb 12 16:55:31 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p store01/host_storage@auto-20190212.1552-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/pp_storage_snaps' && echo Succeeded"
Feb 12 16:55:33 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded
Feb 12 16:55:33 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i store01/host_storage@auto-20190212.1552-1d store01/host_storage@auto-20190212.1557-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/pp_storage_snaps' && echo Succeeded"
Feb 12 16:55:33 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded
Feb 12 16:55:33 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i store01/host_storage@auto-20190212.1557-1d store01/host_storage@auto-20190212.1602-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/pp_storage_snaps' && echo Succeeded"
Feb 12 16:55:34 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded

<trim>

Feb 12 16:55:43 pod-11pri /autorepl.py: [tools.autorepl:291] Checking dataset store01/v_storage
Feb 12 16:55:44 pod-11pri /autorepl.py: [tools.autorepl:336] ds = vv_storage_snaps/vv_storage, remotefs = store01/vv_storage_snaps
Feb 12 16:55:45 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p store01/host_storage@auto-20190212.1552-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/vv_storage_snaps' && echo Succeeded"
Feb 12 16:55:46 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded
Feb 12 16:55:46 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i store01/host_storage@auto-20190212.1552-1d store01/host_storage@auto-20190212.1557-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/vv_storage_snaps' && echo Succeeded"
Feb 12 16:55:47 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded
Feb 12 16:55:47 pod-11pri /autorepl.py: [tools.autorepl:131] Sending zfs snapshot: /sbin/zfs send -V -p -i store01/host_storage@auto-20190212.1557-1d store01/host_storage@auto-20190212.1602-1d | /usr/local/bin/lz4c | /usr/local/bin/pipewatcher $$ | /usr/local/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 10.10.179.168 "/usr/bin/env lz4c -d | /sbin/zfs receive -F -d 'store01/vv_storage_snaps' && echo Succeeded"
Feb 12 16:55:48 pod-11pri /autorepl.py: [tools.autorepl:150] Replication result: Succeeded



Again, it seemed to work fine when I was only working with one original dataset and one remote dataset.

Thanks, Rich
 

RichR
Where is the configuration file/script for the replication tasks kept? Maybe that will give me some clues as to why this is not working correctly.
 

RichR
I was not. I tried numerous times with different configurations (I thought maybe a different combination of manual vs. semi-automatic setup, or a different way of implementing the keys, would change the outcome). I had a specific target configured for that group of snapshots, but it never seems to work correctly.

Then I also found that if I had a nested dataset, did not check "include child datasets..", but tried to create its own target on the remote system, that didn't work as I expected either.

So, although it's not exactly how I wanted it, I've made one a child dataset, included it in the replication task, and its snapshots are going to the same target as the parent dataset (as expected), even though I wanted the snapshots to be separated...

Also, I've reverted, unfortunately, to the old FN GUI....

Hope that answers your question.
 
dlavigne

Guest
To clarify, you can configure it in the legacy UI but not the new one?
 