Using Replicated ZVol


Jul 16, 2012
Hello Everyone,

I am the systems administrator for a medium-sized manufacturing company. We currently have two FreeNAS 2U certified systems in different physical locations to protect against disaster scenarios. Each system hosts two ZVols over iSCSI to one VM host. Let's call these sites Primary and Secondary; each site has one VM host and one FreeNAS. Both systems are in production and the two FreeNAS systems replicate to each other. Total replicated data is up to 3TB, so an initial replication takes a few weeks to complete over a 100Mb connection.

Currently I am trying to write a procedure for a critical hardware failure at either location. For this scenario, let's say Secondary has failed. The end goal is to take the last replication at the Primary site, clone it, and bring up the VMs along with the data ZVol on the Primary VM host. This part is pretty straightforward and shouldn't cause too many issues.
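For reference, the clone step on the Primary FreeNAS would look something like this from the shell (the pool and dataset names here are made up; substitute your own):

```shell
# List replicated snapshots of the Secondary ZVol, oldest first:
zfs list -t snapshot -o name -s creation -r tank/secondary-vmstore

# Clone the most recent one so the replicated snapshot chain stays untouched:
zfs clone tank/secondary-vmstore@auto-20120715.0600 tank/recovery-vmstore

# Then share tank/recovery-vmstore out as a new iSCSI extent in the
# FreeNAS GUI and point the Primary VM host at it.
```

Working from a clone rather than the dataset itself keeps the original snapshots intact, which matters later when replicating back.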

The part I am having trouble with is replicating the changes back over to Secondary without re-replicating all the data. Ideally I could replicate over only the changes made on the Primary FreeNAS since the latest snapshot on the Secondary FreeNAS, but I am not sure whether that functionality is supported. I would also like to keep previous snapshots, as we have a 60-day retention policy.
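For what it's worth, ZFS itself supports exactly this through incremental sends, as long as both machines still share a common snapshot. A rough sketch from the shell (dataset and snapshot names are hypothetical):

```shell
# 1. On the repaired Secondary, roll back to the newest snapshot that both
#    sides still have in common (-r also destroys any snapshots newer than it):
zfs rollback -r tank/vols@auto-20120701.0600

# 2. On Primary, send everything created since that common snapshot.
#    -I (capital i) sends all the intermediate snapshots too, which keeps
#    the snapshot history intact for a 60-day retention policy:
zfs send -I tank/vols@auto-20120701.0600 tank/vols@auto-20120716.0600 | \
    ssh secondary-freenas zfs receive -F tank/vols
```

Only the blocks changed since the common snapshot cross the wire, so this should take minutes or hours rather than weeks.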

Does anyone have any experience with this type of scenario?

All suggestions are welcome.

Jul 3, 2015
I've done something similar before in practice, but I'm not entirely sure it's something I would do in production, as I haven't done it enough, and I'm sure many people will think this is a bad idea. That being said, here we go.

In this example, let's keep things simple and say I have two systems: A (primary) and B (secondary/backup). I have one snapshot schedule set up on system A that snapshots every day at 6am, keeps each snapshot for a month, and replicates to system B.

System A fails, so I send users over to system B (by whatever method I have, e.g. DFS, DNS, etc.). By default, system B's replicated datasets are read-only, so my first job is to run `zfs set readonly=off tank` on the parent dataset. Now users can start writing and modifying their data. The next thing I would do is disable SSH on system B, as the last thing we want is system A jumping back to life and wiping out any changes. Then set up the exact same snapshot schedule on system B that was on system A. Once system A is fixed, and assuming its zpool is still in good shape, power it up and delete its replication task and snapshot schedule. Finally, re-enable SSH on system B and set up the reverse replication; in theory you are back in business without having to send all the data back again.
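The shell-level equivalent of the steps above, with hypothetical pool and snapshot names (on FreeNAS you would normally do the schedule and replication parts through the GUI):

```shell
# On system B, at failover time: make the replicated data writable.
zfs set readonly=off tank

# (Disable the SSH service on B in the GUI so a revived system A
#  cannot push a replication stream over the now-live data.)

# Once system A is repaired: remove its replication task and snapshot
# schedule in the GUI, then roll A back to the last snapshot it still
# has in common with B, so the reverse replication can be incremental:
zfs rollback -r tank@auto-20150702.0600

# Re-enable SSH and configure replication in the opposite direction
# (B -> A); only the changes made while B was live get sent.
```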

I have done this a few times in practice and it works, but it does rely on getting every step correct. Feel free to experiment; I would be interested to hear your feedback and whether you get the same results as me, as it has been a while since I played with this.

All the best.