Duplicating a dataset to a new server

j.lanham

Explorer
Joined
Aug 25, 2021
Messages
68
We got a server to act as a LIVE backup for our main TrueNAS Core server, and I've been through the documentation multiple times. We need it to be a live server, i.e. able to take over for the main server if a failure happens. According to the documentation, a replication task sets the destination up as read only because it's a snapshot being sent to the new server, so I decided to set up an rsync task to copy the data over instead. The first time I ran it, it worked, but of course it created all of the sub-datasets on the new server as plain sub-folders, not datasets. I then meticulously rebuilt all of the datasets on the new server with the same authorities as the existing datasets on the old server. Now the task fails with "failed: Operation not permitted (1)" for every file sent. I thought maybe it was the Delay Updates flag, but after unchecking that option and re-running the sync, it failed with the same errors.

This brings up another problem I'm having with Active Directory authorities. Both servers are joined to our Active Directory domain. The old server's datasets are owned by an Active Directory user and group. When running rsync with preserve permissions, it created the sub-folders within the target datasets, but they're still owned by root and the group doesn't match the Active Directory group. The rid idmap configuration on both systems is identical, but the uids don't match across systems.
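(For comparison, this is roughly how I've been checking the mappings on each box; 'DOMAIN\juser' is a placeholder name:)

```shell
# Run on both servers; the uid/gid lines should match if the idmap
# configuration really is identical ("DOMAIN\juser" is a placeholder).
id 'DOMAIN\juser'
# Sanity-check that winbind resolves the name to the same SID on both:
wbinfo --name-to-sid 'DOMAIN\juser'
```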

I fought with it all day yesterday, and it's frustrating that something that should be relatively simple is so complex. Can anyone point me in the right direction or give me some advice on how I can do this and keep the new server in sync with the old one?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Because, according to the documentation, a replication task sets it up as read only because it's a snapshot being sent to the new server,
Did you also see the bit in the documentation that points at the setting to not have it do that?

Destination Dataset Read-only Policy: SET changes all destination datasets to readonly=on after finishing the replication. REQUIRE stops replication unless all existing destination datasets have the property readonly=on. IGNORE disables checking the readonly property during replication.

Seems to me like you would set that value to IGNORE, then set the destination datasets to readonly=off

CAVEAT: if you do that, you must not actually change the destination datasets' contents at all or future replications will fail.
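From the shell, that would look roughly like this (a sketch; the pool/dataset name tank/backup is assumed, adjust to your layout):

```shell
# Make the replication target writable ("tank/backup" is a placeholder).
zfs set readonly=off tank/backup

# Verify the property took effect:
zfs get readonly tank/backup
```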
 

j.lanham

Explorer
Joined
Aug 25, 2021
Messages
68
Did you also see the bit in the documentation that points at the setting to not have it do that?

Destination Dataset Read-only Policy: SET changes all destination datasets to readonly=on after finishing the replication. REQUIRE stops replication unless all existing destination datasets have the property readonly=on. IGNORE disables checking the readonly property during replication.

Seems to me like you would set that value to IGNORE, then set the destination datasets to readonly=off

CAVEAT: if you do that, you must not actually change the destination datasets' contents at all or future replications will fail.
Yes, I did see that. The documentation on setting up a replication task to another TrueNAS server is fairly misleading about where you set up the SSH keypairs. Should the keys be set up on the receiver and the public key copied to the user on the sender? The documentation is vague about which system gets set up and when. It refers to "local" and "remote", but both servers are on the same network in this case, and it seems to show setting up the SSH keys on the "local", which I gather is the sending server, and copying them to the user on the remote.

Will the active directory problem affect the authorities on the snapshotted dataset? I assume it will.

What will need to be done to turn it into an active dataset on the new server if the need arises?
 

j.lanham

Explorer
Joined
Aug 25, 2021
Messages
68
After hacking around on the new server, I figured out it actually is the same uid on both servers. The rsync set the owner to some nonsensical number with no correlation to an Active Directory group. After I blew away the ACLs in the equivalent dataset directory, ls -aln shows the correct uid. Why did rsync with preserve permissions set not set the owner and group? Is that not part of the permission set?
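For reference, my understanding is that rsync splits these into separate flags: -p preserves only the permission bits, while -o and -g (owner/group) are separate options that require root on the receiver, and rsync maps users and groups by name by default; --numeric-ids copies the raw uid/gid instead. A sketch (the paths and hostname are placeholders):

```shell
# -a implies -rlptgoD, i.e. permissions (-p) AND owner/group (-o/-g).
# --numeric-ids skips name-based mapping and copies raw uid/gid values,
# which only works cleanly when the ids match on both systems.
rsync -aH --numeric-ids /mnt/tank/data/ root@newserver:/mnt/tank/data/
```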
 
Joined
Jul 3, 2015
Messages
926
I do the same thing but use snapshots and replication. In the event of a disaster on the primary, I would flip to the secondary and mark it read/write with zfs set readonly=off pool/dataset (just make sure the primary doesn't jump back to life and replicate, blowing away all your changes on the secondary).
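As a sketch, the failover would look roughly like this (pool/dataset names are assumed):

```shell
# 1. Make the replica writable ("tank/data" is a placeholder name):
zfs set readonly=off tank/data

# 2. Disable the replication task on the primary (via the TrueNAS UI),
#    so a recovering primary can't replicate over your changes.

# 3. Point clients/shares at the secondary server.
```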
 
Joined
Jul 3, 2015
Messages
926
CAVEAT: if you do that, you must not actually change the destination datasets' contents at all or future replications will fail.
I've never seen that before in practice. I often test our DR by accessing the replica system and writing data, and replication is fine; I just lose all my changes at the next scheduled run, as expected.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I often test our DR by accessing the replica system and writing data, and replication is fine; I just lose all my changes at the next scheduled run, as expected.
OK; good point... it's not detrimental to the replication (as I had anticipated), but rather detrimental to whatever is written to the replication target outside of replication.

Either way, don't write anything to the destination datasets that you aren't prepared to lose.
 

samarium

Contributor
Joined
Apr 8, 2023
Messages
192
Replication is probably running zfs recv -F to force a rollback.
If I want to keep the modified data for a while, I clone the dataset snapshot and work on the clone.
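e.g. something like this (snapshot and clone names are placeholders):

```shell
# Clone the latest replicated snapshot and do DR testing on the clone,
# leaving the replication stream itself untouched.
zfs clone tank/data@auto-2024-01-01_00-00 tank/data-drtest

# ... test against tank/data-drtest ...

# Discard the clone when done:
zfs destroy tank/data-drtest
```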
 