Yeah, at least in testing. The destination site currently doesn't have enough space available to contain the source data so I haven't been able to send it up. But I did test using a 500MB test dataset and it worked fine.
I found that the snapshot referenced in the error that I quoted in step #3 is usually the snapshot that was manually imported to the destination. Not sure why that wouldn't be consistent. But looking in the /var/log/messages log at the time of that error also shows a message naming the actual snapshot that the source system was trying to send. By manually sending an incremental stream between the snapshot that was manually transferred and the one that the source failed to replicate (zfs send -i <manually transferred snapshot> <failed snapshot> | ssh root@<destination server> zfs receive <target dataset>), the scheduled replication then took over successfully.
The other thing to note, at least in replications between the 9.2.1-7 source and 9.2.1-5 destination that I have to support here, is that the target is actually a subdirectory of the specified target dataset, named after the source dataset. For example, if the dataset "Test-data" is replicated from the source and the replication task is configured with the target dataset "ZPOOL2/Test-target", the actual target is "ZPOOL2/Test-target/Test-data". This is important to note since the manual snapshot import and incremental snapshot syncs have to use that path.
Here is my current documentation so far:
On the source FreeNAS server:
1. Disable atime on the source dataset to avoid a mismatch error when the data arrives at the destination:
Code:
zfs set atime=off ZPOOL1/Test-data
2. zfs send the initial snapshot to a file:
Code:
zfs send <zpool>/<dataset>@<snapshot> | openssl enc -aes-256-cbc -a -salt -pass pass:<password> > <destination file>
For example, the following command saves the first snapshot (auto-20141126.1228-1w) of the "Test-data" dataset on the "ZPOOL1" zpool to the file "/data/mount/externaldrive1/testfile" using the encryption password "testpass":
Code:
zfs send ZPOOL1/Test-data@auto-20141126.1228-1w | openssl enc -aes-256-cbc -a -salt -pass pass:testpass > /data/mount/externaldrive1/testfile
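The openssl options can be sanity-checked without touching ZFS at all. This is a minimal round-trip sketch using a throwaway file (the file names and password here are arbitrary, not part of the procedure):

```shell
# Encrypt and decrypt a small test file with the same openssl options
# used for the snapshot stream above, then verify the round trip.
printf 'fake snapshot stream' > /tmp/plain
openssl enc -aes-256-cbc -a -salt -pass pass:testpass -in /tmp/plain -out /tmp/enc
openssl enc -d -aes-256-cbc -a -pass pass:testpass -in /tmp/enc -out /tmp/dec
cmp -s /tmp/plain /tmp/dec && echo "round-trip OK"
```

If the options on the decrypt side don't match the encrypt side exactly, the compare fails, which is much cheaper to find out here than after shipping a multi-GB stream.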
On the destination FreeNAS server:
3. zfs receive the file (after transferring it to the destination, "/data/testfile" in this example) into the target dataset:
Code:
openssl enc -d -aes-256-cbc -a -in /data/testfile | zfs receive -F ZPOOL2/Test-target/Test-data
Note that the path specified in the destination is actually a subdirectory of the target dataset. This is because FreeNAS configures replication tasks to use a subdirectory named after the source dataset within the dataset specified in the replication task. So although the replication task has "ZPOOL2/Test-target" defined as the target dataset, it sends the snapshot of the "Test-data" dataset to "ZPOOL2/Test-target/Test-data".
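The naming rule amounts to "configured target + last component of the source dataset". A small sketch using the dataset names from this example:

```shell
# FreeNAS appends the last component of the source dataset name to the
# configured target dataset to form the actual destination path.
SOURCE_DATASET="ZPOOL1/Test-data"
TARGET_DATASET="ZPOOL2/Test-target"
DEST_PATH="${TARGET_DATASET}/${SOURCE_DATASET##*/}"
echo "$DEST_PATH"   # ZPOOL2/Test-target/Test-data
```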
Back on the source FreeNAS server:
4. Create the replication task.
5. Wait for the following error to appear in the Status column of the applicable entry in the ZFS Replication tab of the web interface:
Code:
"Replication of <snapshot> failed with cannot receive new filesystem stream: destination has snapshots (eg. <snapshot name>) must destroy them to overwrite it."
6. Open the /var/log/messages log and locate the above error message.
7. Near that error, there will be the following message:
Code:
Remote and local mismatch after replication: ZPOOL1/Test-data: local=auto-20141210.1756-1w vs remote=auto-20141126.1228-1w
8. Copy the name of the snapshot next to "local=". In the above example, the snapshot name would be "auto-20141210.1756-1w".
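If you'd rather pull that name out of the log line programmatically, something like this works (the message text is copied from the example above):

```shell
# Extract the snapshot name following "local=" from the mismatch message.
MSG='Remote and local mismatch after replication: ZPOOL1/Test-data: local=auto-20141210.1756-1w vs remote=auto-20141126.1228-1w'
LOCAL_SNAP=$(printf '%s\n' "$MSG" | sed -n 's/.*local=\([^ ]*\).*/\1/p')
echo "$LOCAL_SNAP"   # auto-20141210.1756-1w
```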
9. Run a manual incremental sync from the source to the destination server:
Code:
zfs send -i ZPOOL1/Test-data@auto-20141126.1228-1w ZPOOL1/Test-data@auto-20141210.1756-1w | ssh root@test-nas02 zfs receive ZPOOL2/Test-target/Test-data
This sends the incremental stream from the initial snapshot that was manually imported into the destination (Test-data@auto-20141126.1228-1w in this example) to the snapshot that the replication task attempted and failed to send (Test-data@auto-20141210.1756-1w in this example). Again, note that the destination is the subdirectory within the target dataset named after the source, not the target dataset itself.
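If you end up doing this for several datasets, it may help to assemble the pipeline from variables so the snapshot names only need editing in one place. This sketch just builds the command string from the example values above (the host and dataset names are from this walkthrough, not universal):

```shell
# Assemble the incremental send/receive pipeline from its parts.
SRC_DS="ZPOOL1/Test-data"
BASE_SNAP="auto-20141126.1228-1w"    # snapshot already on the destination
FAIL_SNAP="auto-20141210.1756-1w"    # snapshot the task failed to send
DEST="root@test-nas02"
DEST_DS="ZPOOL2/Test-target/Test-data"
CMD="zfs send -i ${SRC_DS}@${BASE_SNAP} ${SRC_DS}@${FAIL_SNAP} | ssh ${DEST} zfs receive ${DEST_DS}"
echo "$CMD"   # review the command, then run it yourself
```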
10. Ensure that replication succeeds after the next scheduled snapshot on the source.