9.3.1 Replication Issue


adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
So everything looked fine after flashing to P20 and updating to 9.3.1, and all my jails restarted without any problems.

I am seeing a flood of replication errors in the console, though, and a red alert message saying that replication between my two FreeNAS boxes has failed. The error refers to a snapshot from back on the 20th August, which I know moved across fine.

I have a single recursive periodic snapshot set up for the zpool on freenas1, and a replication task to copy this across to freenas2. Here are some of the console messages:

Code:
Aug 26 21:02:01 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.65 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2'"
Aug 26 21:03:01 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.165 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2'"
Aug 26 21:04:02 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.65 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2'"


I'm just going to leave it to see what happens when it takes the snapshot tonight, but any ideas what the problem might be?
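In the meantime, here's roughly how I'm comparing the snapshot lists on the two boxes by hand. Just a sketch, using the same key path and IP as the log above; the sed strips everything up to the @ so the snapshot suffixes can be compared:

Code:
# Snapshot suffixes on the sending box (APEpool1)
zfs list -H -t snapshot -o name -r APEpool1 | sed 's/.*@//' | sort -u > /tmp/local.txt

# Same listing from the receiving box, via the replication key
# (same key/IP as the autorepl.py log above)
ssh -i /data/ssh/replication -o BatchMode=yes -p 22 192.168.168.65 \
    "zfs list -H -t snapshot -o name -r 'APEpool2'" | sed 's/.*@//' | sort -u > /tmp/remote.txt

# Suffixes present on freenas1 but missing on freenas2
comm -23 /tmp/local.txt /tmp/remote.txt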
 

dlavigne

Guest
I know that 9.3.1 introduced some back-end changes to the replication code. Let us know if it still fails.
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
No, I'm still getting the message in the OP, and the replication task is showing yesterday's snapshot as the last successful one, not the snapshot that was taken early this morning :(
 

adrianwi

Guru
Joined
Oct 15, 2013
Messages
1,231
Right, so I thought I'd set up my replication from scratch, as I was still having issues taking a recursive snapshot at the volume level and replicating it to the 2nd machine; it still appears to interfere with the system settings (I'm replicating at the volume level on the receiving machine).

I destroyed all the snapshots on the receiving machine, detached the volume and recreated the volume (APEpool2), so I was starting from a blank canvas.

I deleted the periodic snapshot and replication task on the sending machine, and destroyed all the previous snapshots. I took a manual snapshot at the volume level, just so I had something to roll back to.
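For reference, the manual snapshot was something like this (a sketch; the name is the one that shows up in the alert below):

Code:
# One-off recursive snapshot of the whole volume, as a rollback point
# (sketch; exact snapshot name taken from the alert in the next paragraph)
zfs snapshot -r APEpool1@APEpool1-manual-20150827
# each dataset could later be rolled back individually, e.g.:
# zfs rollback APEpool1/media@APEpool1-manual-20150827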

I then created periodic snapshots for each of the main datasets on the volume (APEpool1) and created a replication task for one of the datasets (media). My main machine then reported an error [CRITICAL: Replication APEpool1/media -> 192.168.168.65:APEpool2 failed: Failed: APEpool1/media (APEpool1-manual-20150827)] relating to the manual snapshot, not the automatic one that had just been created by the periodic snapshot task. I thought replication could only be done against periodic snapshots?

I destroyed the manual snapshot and then received the same error, this time relating to the auto snapshot [CRITICAL: Replication APEpool1/media -> 192.168.168.65:APEpool2 failed: Failed: APEpool1/media (auto-20150827.1309-1w)]

So to me, it looks like replication is broken in 9.3.1 :( Here's the output from the console during this:

Code:
Aug 27 13:09:01 freenas1 autosnap.py: [tools.autosnap:71] Popen()ing: /sbin/zfs snapshot -r "APEpool1/media@auto-20150827.1309-1w"
Aug 27 13:11:30 freenas1 notifier: Performing sanity check on sshd configuration.
Aug 27 13:12:01 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.65 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2/media'"
Aug 27 13:13:02 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.65 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2/media'"
Aug 27 13:13:25 freenas1 notifier: Performing sanity check on sshd configuration.
Aug 27 13:13:33 freenas1 manage.py: [py.warnings:206] /usr/local/www/freenasUI/../freenasUI/freeadmin/middleware.py:206: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  else unicode(excp.message)
Aug 27 13:13:38 freenas1 notifier: Error: near line 1: database is locked
Aug 27 13:14:02 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.65 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2/media'"
Aug 27 13:15:02 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.65 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2/media'"
Aug 27 13:15:33 freenas1 notifier: Stopping collectd.
Aug 27 13:15:35 freenas1 notifier: Waiting for PIDS: 47388.
Aug 27 13:15:35 freenas1 notifier: Starting collectd.
Aug 27 13:16:01 freenas1 autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -ononeenabled=yes -ononeswitch=yes -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 192.168.168.65 "zfs list -H -t snapshot -p -o name,creation -r 'APEpool2/media'"
[the same autorepl.py message repeats every minute through 13:30]
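
Before I rebuild anything else, I might try a plain send/receive outside the GUI to see whether the transport itself works. A rough sketch using the failing snapshot name from the alert; the -n makes the receive a dry run, so nothing is written on the far side:

Code:
# Dry-run test of the replication path, bypassing autorepl.py
# (sketch; -n means the stream is checked but not written)
zfs send APEpool1/media@auto-20150827.1309-1w | \
    ssh -i /data/ssh/replication 192.168.168.65 "zfs receive -n -v APEpool2/media"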


UPDATE

As I've set this up slightly differently, I thought I'd try creating the dataset on APEpool2 (so APEpool2/media) and then putting that into the Remote ZFS Volume/Dataset field in the Replication Task. It now appears to be replicating, although on APEpool2 it has created another dataset (so APEpool2/media/media), which isn't really what I'm looking for!
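If I've understood the field correctly, the remote path is treated as the parent dataset and the replicated dataset keeps its own name underneath it, which would explain the nesting:

Code:
# My reading of the Remote ZFS Volume/Dataset field -> where the data lands
# "APEpool2"                                        -> APEpool2/media
# "APEpool2/media"                                  -> APEpool2/media/media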

Here's the Replication Task settings:

[screenshot: replication_task.png]
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
adrianwi said:
[quotes the post above]
"APEpool2/media/media" may not be what you want, but it all seems now to be working as specified! Why not make a dataset on APEpool2 called 'backup' and put 'APEpool2/backup' in the replication destination field. The reason I say this is to disambiguate whether there is anything special about the destination dataset having the same name as the source dataset, and also because my travails with replication were much simplified when I did not try to replicate to the root volume on the destination server. There may be nothing in this, but it would be quite easy to try!
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
If you mean me: from AdrianWillamson's post at 13.31 yesterday, immediately before your previous post.
 

gunnarsson

Cadet
Joined
Sep 22, 2014
Messages
9
I have been getting the "autorepl.py: [common.pipesubr:71] Popen()ing:" messages since I ran the updates on my two boxes, which replicate between each other. I'm happy to see that it's not just me experiencing this.

There appear to be some further problems, as one of my boxes is now re-sending a month-old snapshot that already exists on the receiving end. This is causing heavy, unwanted network traffic, and I'm also concerned that it will derail my replication and force me to start again from scratch.
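If it helps anyone compare notes: the newest snapshot present on both boxes should be the incremental base the next send resumes from. A rough check; the hostname and dataset below are placeholders for my setup:

Code:
# tank/data and otherbox are placeholders; substitute your own names.
# Snapshot suffixes on each side (auto-YYYYMMDD names sort chronologically)
zfs list -H -t snapshot -o name -d 1 tank/data | sed 's/.*@//' | sort > /tmp/src
ssh otherbox "zfs list -H -t snapshot -o name -d 1 tank/data" | sed 's/.*@//' | sort > /tmp/dst

# Newest snapshot common to both sides
comm -12 /tmp/src /tmp/dst | tail -1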
 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
I'm getting this too - every minute an entry in the log from autorepl.py, for example:

Code:
Sep 2 13:47:09 Saturn autorepl.py: [common.pipesubr:71] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs list -H -t snapshot -p -o name,creation -d 1 -r 'puddle/Titan/Archive'"


Looking at the logs, this started following the upgrade to 9.3.1 (FreeNAS-9.3-STABLE-201508250051).

Looking at the Storage -> Replication Tasks tab, "Success" is the reported status against each replication task, but the "last snapshot sent to the remote side" is for the last snapshot taken before the upgrade to 9.3.1.

What's interesting is that the replication tasks *appear* to be working. If I execute the command recorded in the log (it is a zfs list command), I can see that all snapshots on the remote side are up to date. Looking at the remote volume itself, recent files are appearing.
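For reference, this is the check I'm running by hand; it's the exact command from the log entry above (localhost because this particular task replicates to a pool on the same box):

Code:
ssh -i /data/ssh/replication -o BatchMode=yes -p 22 localhost \
    "zfs list -H -t snapshot -p -o name,creation -d 1 -r 'puddle/Titan/Archive'"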

So I guess something is not right with the auto-replication functionality, although it appears to be partially working for me. But it would be good if this were fixed - I get nervous when the GUI is out of sync and log messages are backing up...
 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
For what it's worth, still seeing the same issue in the latest update on the stable train (FreeNAS-9.3-STABLE-201509022158).
 