Replication problem after moving server

Status
Not open for further replies.

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
Hi guys,

Using FreeNAS 9.2.1.9

I set up a replication task but initially, the two servers were on the same local network.

So the replication's target was 10.0.1.100

I wanted to do the initial replications over the local network to avoid unnecessary Internet traffic.

I let the server run for a week and all replications were done perfectly.

I therefore decided to take the PULL server to its new home.

I did not change anything on the PULL.

On the PUSH side, I changed the replication task's target from 10.0.1.100 to the DynDNS hostname, and made sure the port forwarding was set up correctly.

Finally, I rescanned the remote host key, which worked.

Here's the config of the replication task: http://cl.ly/image/1s0023061s1Y

Now the replication will not work, giving me errors such as:

Code:
Dec  1 12:59:21 NAS autorepl.py: [tools.autorepl:414] Remote and local mismatch after replication: Data: local=auto-20141201.1000-2d vs remote=auto-20141128.2000-2d
Dec  1 12:59:21 NAS autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 37951 remoteHOST.com "zfs list -Ho name -t snapshot -d 1 BACKUP | tail -n 1 | cut -d@ -f2"
Dec  1 12:59:23 NAS autorepl.py: [tools.autorepl:431] Replication of Data@auto-20141201.1000-2d failed with cannot receive new filesystem stream: destination has snapshots (eg. BACKUP@auto-20141127.1500-2d) must destroy them to overwrite it   (stdin): plzip: Write error: Broken pipe
Dec  1 13:00:01 NAS autosnap.py: [tools.autosnap:58] Popen()ing: /sbin/zfs snapshot -r -o freenas:state=NEW Data@auto-20141201.1300-2d
Dec  1 13:00:01 NAS autosnap.py: [tools.autosnap:58] Popen()ing: /sbin/zfs hold -r freenas:repl Data@auto-20141201.1300-2d
Dec  1 13:00:01 NAS autosnap.py: [tools.autosnap:58] Popen()ing: /sbin/zfs get -H freenas:state Data@auto-20141124.0800-2d
Dec  1 13:00:01 NAS autosnap.py: [tools.autosnap:58] Popen()ing: /sbin/zfs destroy -r -d Data@auto-20141124.0800-2d
Dec  1 13:00:17 NAS autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 37951 remoteHOST.com "zfs list -Hr -o name -t snapshot -d 1 BACKUP | tail -n 1 | cut -d@ -f2"
Dec  1 13:00:21 NAS autorepl.py: [tools.autorepl:414] Remote and local mismatch after replication: Data: local=auto-20141201.1000-2d vs remote=auto-20141128.2000-2d
Dec  1 13:00:21 NAS autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 37951 remoteHOST.com "zfs list -Ho name -t snapshot -d 1 BACKUP | tail -n 1 | cut -d@ -f2"
Dec  1 13:00:23 NAS autorepl.py: [tools.autorepl:431] Replication of Data@auto-20141201.1000-2d failed with cannot receive new filesystem stream: destination has snapshots (eg. BACKUP@auto-20141127.1500-2d) must destroy them to overwrite it   (stdin): plzip: Write error: Broken pipe
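
For reference, the check that autorepl.py runs here can be reproduced by hand, to see which snapshot PULL considers its latest (hostname, port, and key path taken from the log above):

Code:
# ask PULL for the name of its newest snapshot on the BACKUP dataset
/usr/bin/ssh -i /data/ssh/replication -p 37951 remoteHOST.com "zfs list -Ho name -t snapshot -d 1 BACKUP | tail -n 1 | cut -d@ -f2"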


The way I see it, my config is OK, in the sense that PUSH sees PULL and can communicate with it.

The problem is that it has trouble syncing, since it was not expecting to find data and snapshots already on PULL.

Is there anything I can do to fix that?


Thanks !
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
No one?

Well I guess I'll just have to wipe out the target disk and start from scratch :'(
 

dlavigne

Guest
Does checking the box "initialize remote side for once" resolve the issue?
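
(As I understand it, and this is an illustration rather than FreeNAS's exact code, that option wipes the remote side and re-sends a full stream instead of an incremental one, roughly:)

Code:
# full recursive stream of the latest snapshot; -F forces the remote
# dataset to be overwritten, so everything already on PULL is discarded
zfs send -R Data@auto-20141201.1000-2d | \
  ssh -i /data/ssh/replication -p 37951 remoteHOST.com "zfs receive -F BACKUP"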
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
Yup, that's what I did, but with over 2 TB to transfer, it's gonna take a WHILE ;)

That's why I did some local replications before I changed the backup server's location, but it looks like that didn't help :'(
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
Revisiting this, my replication has been running for about a day.

I wanted to see the progress (percentage of completion) of the replication task.
So, on the PUSH system, I went into

STORAGE > ZFS REPLICATION

To my surprise, both the "Status" and "Last snapshot sent to remote side" boxes are completely empty, while normally I would see the progress in the "Status" box.

I know the replication is still going on simply because I monitor the connection in my pfSense firewall, and I can see it is still steadily sending data to the PULL system.


Should I be worried or just leave it be?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'd leave it be. If it's still doing something then it's not locked up or anything. It could be a bug in the WebGUI.
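
If you want a second opinion besides the firewall graphs, you can also check from a shell on PUSH whether the replication processes are still alive (a quick check, not an official progress meter):

Code:
# any zfs send / autorepl processes still running?
ps auxww | grep -E "zfs send|autorepl" | grep -v grep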
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
So I did leave it be.

Then today there was a one-hour power outage, which made the internet connection drop.

I got an email with REPLICATION FAILED.

I mean that for the last 10 days, it WAS replicating.

By my estimate (data left to transfer vs. average transfer rate), it was about 70% complete.

But now it very much looks like it started from the beginning again.

So I think I should expect about 15 days before completion... damn that's long.

I think I'm going to cry.

I will for sure, if anything happens to the internet connection again.

Why can't the replication resume where it left off :'(

 

dlavigne

Guest
Wow, that really sucks... We've all been waiting impatiently for resumable zfs send/receive which is supposed to show up in OpenZFS real soon now.
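
(For anyone reading this later: when that feature did land in OpenZFS, usage looks roughly like the sketch below; it is not available in 9.2.1.9, and "pull-host" stands in for the real hostname.)

Code:
# on PULL: -s keeps partial state if the stream is interrupted
zfs send Data@snap | ssh pull-host "zfs receive -s BACKUP"
# after an interruption, read the resume token on PULL...
zfs get -H -o value receive_resume_token BACKUP
# ...then on PUSH, restart the send from that token
zfs send -t <resume_token> | ssh pull-host "zfs receive -s BACKUP"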
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
Hi, so the first replication worked! It took 21 days and was successful.

I had tears of joy on January 1st, 2015, seeing that it had completed successfully.

Then, on the next scheduled replication, I got the following error:
Code:
Replication Data -> backup.remote.server failed: cannot receive incremental stream: most recent snapshot of BACKUP/Shares does not match incremental source Error 33 : Write error : cannot write compressed block


Please don't tell me I've got to start over again?

What's up with that :'(
 

dlavigne

Guest
Please create a bug report at bugs.freenas.org that includes that error and post the issue number here.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How long do you keep snapshots? If you keep them for 7 days, well, you might have to send it all again because the initial replication took more than 7 days.
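
(That's the crux of incremental replication: the stream is computed against a base snapshot that must still exist on both sides. A minimal illustration, using this thread's dataset and snapshot names:)

Code:
# sends only the blocks changed between the two snapshots; it can only
# be received if PULL still holds the older (base) snapshot
zfs send -i Data@auto-20141122.0600-1y Data@auto-20150101.2300-2w | \
  ssh -i /data/ssh/replication -p 37951 remoteHOST.com "zfs receive BACKUP"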
 
Joined
Sep 12, 2013
Messages
37
From your push system please run the following:

ssh -vv -i /data/ssh/replication hostname_or_ip

As the FreeNAS Guide states, this command should not ask for a password. If it asks for a password, SSH authentication is not working.

I am thinking the keys are fine, and most likely your snapshots may be out of sync.

On your Push system can you please run the following:

zfs list -Ht snapshot -o name,freenas:state

On your pull system can you please run

zfs list -t snapshot

What we want to do is look at the snapshots for both systems.


If there is a common snapshot on both the push system A and the pull system B, we could do the following on system A:

zfs rollback -r -R -f dataset@common_snapshot

(Note: Any data that has been added to the system after the common snapshot will be lost during the rollback.)

On system B, we mark the common snapshot as the latest and start replication (note that if the dataset has child datasets, this may have to be done for ALL children).

That is: change system B's database to make common_snapshot the latest snapshot, set freenas:state, and start the replication.

To check the state of the snapshots on your push system, run:

zfs list -Ht snapshot -o name,freenas:state

To change the state of the common snapshot to LATEST, run the following (substituting your dataset and common snapshot name):

zfs set freenas:state=LATEST <dataset@common_snapshot>

For example:

zfs set freenas:state=LATEST data/tank@auto-20141113.1900-2w

You can then run this command to check that the state has changed:

[root@pepper] ~# zfs list -Ht snapshot -o name,freenas:state
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Whoa! Clear out of N-O-W-H-E-R-E someone with only 30 posts drops a bombshell like that! Good work bro! I am IMPRESSED! That was what I was gonna write up after he wrote back about his expiration dates. :P
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
How long do you keep snapshots? If you keep them for 7 days, well, you might have to send it all again because the initial replication took more than 7 days.

Yes

My periodic snapshot tasks are set as follows:

On weekdays, take a snapshot every 4 hours => keep for 2 days
Every weekday, take a snapshot at 10 PM => keep for 2 weeks
Every Saturday, take a snapshot => keep for 1 year

But there's something I really don't understand...

While the initial replication was running, I would see all the snapshots remaining there because they had a hold flag, so I had an incredible number of snapshots on PUSH.

Now it seems that they all got destroyed even though they were not replicated... I'm not sure what's up with that.
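
(Side note: you can list the hold tags, such as freenas:repl, that are still pinning a snapshot with zfs holds; the snapshot name below is just one from my list:)

Code:
# show hold tags on a snapshot and, with -r, on its descendants
zfs holds -r Data@auto-20150101.2300-2w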

From your push system please run the following:

ssh -vv -i /data/ssh/replication hostname_or_ip

As the FreeNAS Guide states, this command should not ask for a password. If it asks for a password, SSH authentication is not working.

First of all, thanks for the post, it helps very much!

Yup this works fine.

I am thinking the keys are fine, and most likely your snapshots may be out of sync.

On your Push system can you please run the following:

zfs list -Ht snapshot -o name,freenas:state

On your pull system can you please run

zfs list -t snapshot

What we want to do is look at the snapshots for both systems.

The root snapshots match:

On PUSH:
Code:
zfs list -t snapshot
NAME                                                                          USED  AVAIL  REFER  MOUNTPOINT
Data@auto-20141122.0600-1y                                                     88K      -   224K  -
Data@auto-20141129.2000-12m                                                    88K      -   224K  -
Data@auto-20141206.2000-12m                                                    88K      -   224K  -
Data@auto-20141213.2000-12m                                                    88K      -   224K  -
Data@auto-20141219.2300-2w                                                     88K      -   224K  -
Data@auto-20141220.2000-12m                                                    88K      -   224K  -
Data@auto-20141222.2300-2w                                                     88K      -   224K  -
Data@auto-20141223.2300-2w                                                     88K      -   224K  -
Data@auto-20141224.2300-2w                                                     88K      -   224K  -
Data@auto-20141225.2300-2w                                                     88K      -   224K  -
Data@auto-20141226.2300-2w                                                     88K      -   224K  -
Data@auto-20141227.2000-12m                                                    88K      -   224K  -
Data@auto-20141229.2300-2w                                                     88K      -   224K  -
Data@auto-20141230.2300-2w                                                     88K      -   224K  -
Data@auto-20141231.1700-2d                                                       0      -   224K  -
Data@auto-20141231.1800-2d                                                       0      -   224K  -
Data@auto-20141231.1900-2d                                                       0      -   224K  -
Data@auto-20141231.2000-2d                                                       0      -   224K  -
Data@auto-20141231.2300-2w                                                       0      -   224K  -
Data@auto-20150101.0800-2d                                                       0      -   224K  -
Data@auto-20150101.0900-2d                                                       0      -   224K  -
Data@auto-20150101.1000-2d                                                       0      -   224K  -
Data@auto-20150101.1100-2d                                                       0      -   224K  -
Data@auto-20150101.1200-2d                                                       0      -   224K  -
Data@auto-20150101.1300-2d                                                       0      -   224K  -
Data@auto-20150101.1400-2d                                                       0      -   224K  -
Data@auto-20150101.1500-2d                                                       0      -   224K  -
Data@auto-20150101.1600-2d                                                       0      -   224K  -
Data@auto-20150101.1700-2d                                                       0      -   224K  -
Data@auto-20150101.1800-2d                                                       0      -   224K  -
Data@auto-20150101.1900-2d                                                       0      -   224K  -
Data@auto-20150101.2000-2d                                                       0      -   224K  -
Data@auto-20150101.2300-2w                                                       0      -   224K  -
Data@auto-20150102.0800-2d                                                       0      -   224K  -
Data@auto-20150102.0900-2d                                                       0      -   224K  -
Data@auto-20150102.1000-2d                                                       0      -   224K  -
Data@auto-20150102.1100-2d                                                       0      -   224K  -
Data@auto-20150102.1200-2d                                                       0      -   224K  -
Data@auto-20150102.1300-2d                                                       0      -   224K  -
Data@auto-20150102.1400-2d                                                       0      -   224K  -
Data@auto-20150102.1500-2d                                                       0      -   224K  -
Data@auto-20150102.1600-2d                                                       0      -   224K  -


On PULL:
Code:
 zfs list -t snapshot
NAME                                                                                    USED  AVAIL  REFER  MOUNTPOINT
BACKUP@auto-20141122.0600-1y                                                    88K      -   224K  -
BACKUP@auto-20141129.2000-12m                                                   88K      -   224K  -
BACKUP@auto-20141206.2000-12m                                                   88K      -   224K  -
BACKUP@auto-20141213.2000-12m                                                   88K      -   224K  -
BACKUP@auto-20141219.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141220.2000-12m                                                   88K      -   224K  -
BACKUP@auto-20141222.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141223.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141224.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141225.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141226.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141227.2000-12m                                                   88K      -   224K  -
BACKUP@auto-20141229.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141230.2300-2w                                                    88K      -   224K  -
BACKUP@auto-20141231.1700-2d                                                     8K      -   224K  -
BACKUP@auto-20141231.1800-2d                                                     8K      -   224K  -
BACKUP@auto-20141231.1900-2d                                                     8K      -   224K  -
BACKUP@auto-20141231.2000-2d                                                     8K      -   224K  -
BACKUP@auto-20141231.2300-2w                                                     8K      -   224K  -
BACKUP@auto-20150101.0800-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.0900-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1000-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1100-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1200-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1300-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1400-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1500-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1600-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1700-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1800-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.1900-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.2000-2d                                                     8K      -   224K  -
BACKUP@auto-20150101.2300-2w                                                     8K      -   224K  -
BACKUP@auto-20150102.0800-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.0900-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.1000-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.1100-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.1200-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.1300-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.1400-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.1500-2d                                                     8K      -   224K  -
BACKUP@auto-20150102.1600-2d                                                      0      -   224K  -


Yes, they do match!

But none of the children match :(

For example:

On PUSH:
Code:
zfs list -t snapshot
NAME                                                                          USED  AVAIL  REFER  MOUNTPOINT
Data/.system@auto-20141122.0600-1y                                             80K      -   180K  -
Data/.system@auto-20141129.2000-12m                                            80K      -   180K  -
Data/.system@auto-20141206.2000-12m                                              0      -   180K  -
Data/.system@auto-20141213.2000-12m                                              0      -   180K  -
Data/.system@auto-20141219.2300-2w                                               0      -   180K  -
Data/.system@auto-20141220.2000-12m                                              0      -   180K  -
Data/.system@auto-20141222.2300-2w                                               0      -   180K  -
Data/.system@auto-20141223.2300-2w                                               0      -   180K  -
Data/.system@auto-20141224.2300-2w                                               0      -   180K  -
Data/.system@auto-20141225.2300-2w                                               0      -   180K  -
Data/.system@auto-20141226.2300-2w                                               0      -   180K  -
Data/.system@auto-20141227.2000-12m                                              0      -   180K  -
Data/.system@auto-20141229.2300-2w                                               0      -   180K  -
Data/.system@auto-20141230.2300-2w                                               0      -   180K  -
Data/.system@auto-20141231.1700-2d                                               0      -   180K  -
Data/.system@auto-20141231.1800-2d                                               0      -   180K  -
Data/.system@auto-20141231.1900-2d                                               0      -   180K  -
Data/.system@auto-20141231.2000-2d                                               0      -   180K  -
Data/.system@auto-20141231.2300-2w                                               0      -   180K  -
Data/.system@auto-20150101.0800-2d                                               0      -   180K  -
Data/.system@auto-20150101.0900-2d                                               0      -   180K  -
Data/.system@auto-20150101.1000-2d                                               0      -   180K  -
Data/.system@auto-20150101.1100-2d                                               0      -   180K  -
Data/.system@auto-20150101.1200-2d                                               0      -   180K  -
Data/.system@auto-20150101.1300-2d                                               0      -   180K  -
Data/.system@auto-20150101.1400-2d                                               0      -   180K  -
Data/.system@auto-20150101.1500-2d                                               0      -   180K  -
Data/.system@auto-20150101.1600-2d                                               0      -   180K  -
Data/.system@auto-20150101.1700-2d                                               0      -   180K  -
Data/.system@auto-20150101.1800-2d                                               0      -   180K  -
Data/.system@auto-20150101.1900-2d                                               0      -   180K  -
Data/.system@auto-20150101.2000-2d                                               0      -   180K  -
Data/.system@auto-20150101.2300-2w                                               0      -   180K  -
Data/.system@auto-20150102.0800-2d                                               0      -   180K  -
Data/.system@auto-20150102.0900-2d                                               0      -   180K  -
Data/.system@auto-20150102.1000-2d                                               0      -   180K  -
Data/.system@auto-20150102.1100-2d                                               0      -   180K  -
Data/.system@auto-20150102.1200-2d                                               0      -   180K  -
Data/.system@auto-20150102.1300-2d                                               0      -   180K  -
Data/.system@auto-20150102.1400-2d                                               0      -   180K  -
Data/.system@auto-20150102.1500-2d                                               0      -   180K  -
Data/.system@auto-20150102.1600-2d                                               0      -   180K  -


On PULL:
Code:
 zfs list -t snapshot
NAME                                                                                    USED  AVAIL  REFER  MOUNTPOINT
BACKUP/.system@auto-20141122.0600-1y                                              0      -   180K  -


So the only common snapshot on both systems is @auto-20141122.0600-1y.

If there is a common snapshot on both the push system A and the pull system B, we could do the following on system A:

zfs rollback -r -R -f dataset@common_snapshot

(Note: Any data that has been added to the system after the common snapshot will be lost during the rollback.)

That's the thing! I can't afford to lose all the data that's been added since 2014-11-22!!

On system B, we mark the common snapshot as the latest and start replication (note that if the dataset has child datasets, this may have to be done for ALL children).

That is: change system B's database to make common_snapshot the latest snapshot, set freenas:state, and start the replication.


To check the state of the snapshots on your push system, run:

zfs list -Ht snapshot -o name,freenas:state

To change the state of the common snapshot to LATEST, run the following (substituting your dataset and common snapshot name):

zfs set freenas:state=LATEST <dataset@common_snapshot>

For example:

zfs set freenas:state=LATEST data/tank@auto-20141113.1900-2w

You can then run this command to check that the state has changed:

[root@pepper] ~# zfs list -Ht snapshot -o name,freenas:state


So in the end, I have a common snapshot on both systems: @auto-20141122.0600-1y

But I really can't afford to rollback to this state on PUSH.

Is there any chance I can clear all snapshots except this one on PULL and run the replication again, or will that not work?
 
Joined
Sep 12, 2013
Messages
37
Honey,

I told you wrong: you only have to roll back on your target system, the pull system.
You would roll that back to the common snapshot.
You do not do any rollbacks on the push at all.
So push system A keeps all her data, and pull system B rolls back to her common snapshot.
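
(In other words, a compact sketch of the corrected procedure, with this thread's names; the freenas:state step is the FreeNAS-specific part described earlier:)

Code:
# on PULL only: destroy everything newer than the common snapshot
zfs rollback -r BACKUP@auto-20141122.0600-1y
# on PUSH: mark the common snapshot as the replication baseline
zfs set freenas:state=LATEST Data@auto-20141122.0600-1y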
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
So I disabled replication.

Then I ran commands such as:

Code:
zfs rollback -r -R -f BACKUP@auto-20141122.0600-1y


to roll back everything to auto-20141122.0600-1y

then:

Code:
zfs list -t snapshot


And made sure all snapshots (children, etc.) were at auto-20141122.0600-1y.

Then I started the replication again... at 6 PM there was a periodic snapshot, so the replication started:

Code:
Jan  2 18:00:02 NAS autorepl.py: [common.pipesubr:58] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 37951 remoteHOST.com "zfs list -Hr -o name -t snapshot -d 1 BACKUP | tail -n 1 | cut -d@ -f2" || true
Jan  2 18:00:02 NAS autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs inherit freenas:state Data@auto-20150102.1700-2d
Jan  2 18:00:02 NAS autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs release -r freenas:repl Data@auto-20141122.0600-1y
Jan  2 18:00:02 NAS common.pipesubr: cannot release hold from snapshot 'Data@auto-20141122.0600-1y': no such tag on this dataset
Jan  2 18:00:02 NAS autorepl.py: [common.pipesubr:72] Executing: /sbin/zfs set freenas:state=LATEST Data@auto-20141122.0600-1y


Then nothing else... and the GUI reports the same error:

Replication Data -> remoteHOST.com failed: cannot receive incremental stream: most recent snapshot of BACKUP/Shares does not match incremental source Error 33 : Write error : cannot write compressed block


UPDATE: I noticed something going on in the network traffic (in pfSense).

I went on PULL and it seems that the snapshot list is getting populated now!!

So I think it IS working...!!
The GUI is not always reliable on such matters, and I don't know how else to monitor the progress of replications...

Therefore, I'll just let it run for a while before doing anything else... there are several GB of data to transfer.

Man I can't wait to have the replication catch up to today!!
 
Joined
Sep 12, 2013
Messages
37
Well Honey, ya sorta got to wait, be patient with the girls. You could run systat -if 1 on both sides to watch the data going through their NICs.
You can run the following command on both the push and the pull system to check for running zfs instances:
ps aux | grep zfs
 

BlazeStar

Patron
Joined
Apr 6, 2014
Messages
383
The girls are working hard, I tell you!

ps gives me replication processes and I see the snapshot list on PULL getting populated.

The GUI still reports an error, so that's tricky; just gotta be patient, as you said!
 
Joined
Sep 12, 2013
Messages
37
Da girls just require a lil patience when commencing replication; systat -if 1 will show the activity on their NICs.
 