Process to encrypt dataset?

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
I've got a dataset that I want to encrypt. I've come up with what I believe would be the correct and most efficient procedure. Please let me know if I'm wrong here or if there's a better way. My path to the dataset is tank/data/appdata/immich. My plan is the following:

  1. Run command: zfs rename tank/data/appdata/immich tank/data/appdata/immich_old
  2. Create new immich dataset in GUI that's encrypted. Recreate existing ACL permissions
  3. Run command: mv /mnt/tank/data/appdata/immich_old/ /mnt/tank/data/appdata/immich/
  4. Delete immich_old in GUI
Any reason this is wrong? Is my syntax good? I believe the move command is correct to ensure I get the child directories and files without the parent folder so it all ends up exactly the same.

EDIT: Fixed up the formatting to make my intentions more explicit. Fixed the paths in my commands as well, thanks to my oversight being pointed out to me.
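One detail of step 3 that is easy to sandbox-test first: with plain mv, a trailing slash on the source does not mean "contents of" (that's rsync behavior). When the destination directory already exists, mv nests the source directory inside it. A quick demo with throwaway temp directories (stand-in names, not the real pool):

```shell
#!/bin/sh
# Sandbox demo (temp dirs only): plain mv nests a source directory
# inside an existing destination; the trailing slash does not mean
# "contents of" the way it does for rsync.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/immich_old" "$tmp/immich"     # stand-ins for the datasets
touch "$tmp/immich_old/photo.jpg"

mv "$tmp/immich_old/" "$tmp/immich/"
ls "$tmp/immich"               # -> immich_old  (nested, not merged)
ls "$tmp/immich/immich_old"    # -> photo.jpg

rm -rf "$tmp"
```

To merge the contents into the existing directory you would need to move `immich_old/*` (plus any dotfiles) explicitly, or use a tool like rsync whose trailing-slash semantics do mean "contents of".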
 
Last edited:
Joined
Oct 22, 2019
Messages
3,641
Any reason this is wrong?
It can't work. I'll highlight the parts that don't make sense / cannot work:
  1. zfs rename tank/data/appdata/immich tank/data/appdata/immich_old
  2. Create new immich dataset in GUI that's encrypted. Recreate existing ACL permissions
  3. mv tank/data/appdata/immich_old/ tank/data/appdata/immich/
  4. Delete immich_old in GUI
The highlighted parts don't make sense. What do you mean by "mv"? Are you implying a zfs send/recv? A zfs rename? A file-based "copy-move" of everything that exists in the old dataset?

As for ACLs, no need to mess with them if you indeed are going to replicate the old, non-encrypted dataset into a newly created encrypted one.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
mv

The Unix command. Move.

Should work well.
 

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
It can't work. I'll highlight the parts that don't make sense / cannot work:
  1. zfs rename tank/data/appdata/immich tank/data/appdata/immich_old
  2. Create new immich dataset in GUI that's encrypted. Recreate existing ACL permissions
  3. mv tank/data/appdata/immich_old/ tank/data/appdata/immich/
  4. Delete immich_old in GUI
The highlighted parts don't make sense. What do you mean by "mv"? Are you implying a zfs send/recv? A zfs rename? A file-based "copy-move" of everything that exists in the old dataset?

As for ACLs, no need to mess with them if you indeed are going to replicate the old, non-encrypted dataset into a newly created encrypted one.

I assumed (perhaps incorrectly) that the standard cp and mv commands in linux would work in TrueNAS as well. So I was going to use the mv command to transfer the data from one location to another. Would it work with the cp command and then just leave a copy in both places for me to delete the old one once done?

I haven't actually dealt with snapshot replication before. I do have regular snapshots taken, and I understand the instructions for using replication in the GUI to transfer to another TrueNAS system but haven't looked into it enough to otherwise do it locally for this purpose.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I assumed (perhaps incorrectly) that the standard cp and mv commands in linux would work in TrueNAS as well. So I was going to use the mv command to transfer the data from one location to another. Would it work with the cp command and then just leave a copy in both places for me to delete the old one once done?

I haven't actually dealt with snapshot replication before. I do have regular snapshots taken, and I understand the instructions for using replication in the GUI to transfer to another TrueNAS system but haven't looked into it enough to otherwise do it locally for this purpose.
If you use cp, it’s better to use rsync, which can correctly copy the ACLs etc.


A benefit of rsync is that you can minimize downtime by doing the rsync live, then shutting down the service, doing the rsync again, renaming, and re-enabling.

EDIT: Speaking from the future, OpenZFS/TrueNAS now support block cloning. The benefit of `cp` may be that you can insta-copy the files across datasets via block-cloning.
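The live-then-final rsync sequence described above could look something like the following sketch. Paths and the service stop/start steps are placeholders for this specific setup, and the flags are the ones proposed later in the thread:

```shell
#!/bin/sh
# Hypothetical sketch of a minimal-downtime rsync migration.
# Paths and service-control steps are placeholders; adjust for your setup.
SRC=/mnt/tank/data/appdata/immich_old/
DST=/mnt/tank/data/appdata/immich/

# Pass 1: bulk copy while the app is still running.
rsync -axHAWXS --numeric-ids --info=progress2 "$SRC" "$DST"

# Stop the Immich app here (GUI or CLI) so nothing changes under us.

# Pass 2: quick delta sync of whatever changed during pass 1;
# --delete mirrors any files removed since the first pass.
rsync -axHAWXS --numeric-ids --delete --info=progress2 "$SRC" "$DST"

# Re-enable the app pointing at the new encrypted dataset.
```

The second pass only touches files that changed, so the actual downtime window is usually a small fraction of the full copy time.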
 
Last edited:

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
mv

The Unix command. Move.

Should work well.
Yes, perhaps I should have been more explicit in my OP that steps 1 and 3 were me writing out my exact intended commands. Do you believe my intended steps listed should work for what I'm wanting to do?
 
Joined
Oct 22, 2019
Messages
3,641
I assumed (perhaps incorrectly) that the standard cp and mv commands in linux would work in TrueNAS as well. So I was going to use the mv command to transfer the data from one location to another.
Your command suggests "zfs datasets", since you left out the /mnt/ portion.


Would it work with the cp command and then just leave a copy in both places for me to delete the old one once done?
If you have enough spare space in your pool, this would seem to be the safer option.

EDIT: Or better yet, as @Stux suggests the "rsync" command.
 

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
Your command suggests "zfs datasets", since you left out the /mnt/ portion.



If you have enough spare space in your pool, this would seem to be the safer option.

EDIT: Or better yet, as @Stux suggests the "rsync" command.
Shit, you are right, I did forget the /mnt. Thank you for reminding me of that, else it certainly wouldn't have worked. I will fix that.
 

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
If you use cp, it’s better to use rsync, which can correctly copy the ACLs etc.


A benefit of rsync is that you can minimize downtime by doing the rsync live, then shutting down the service, doing the rsync again, renaming, and re-enabling.
Thank you for the link. Reading through it now. Would you suggest doing my initial suggestion but doing the following as my step 3?:

rsync -axHAWXS --numeric-ids --info=progress2 /mnt/tank/data/appdata/immich_old/ /mnt/tank/data/appdata/immich/
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yes.

One of the nice things with rsync is its well-defined behavior with a trailing / on paths: that will copy the contents of the first directory into the second.
 
Joined
Oct 22, 2019
Messages
3,641
Would you suggest doing my initial suggestion but doing the following as my step 3?:
Make sure that the root path (in the newly created dataset) has sufficient ownership/permissions. Or better yet, just run the rsync command as the root user (or use "sudo".)

Do you really need the "-S" parameter? Does the old dataset really contain many large "sparse" files? Regardless, ZFS's inline compression will make long sequences of null data in a file essentially "zero size" anyways.
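For context on -S: a "sparse" file is one whose apparent size is larger than the disk blocks actually allocated, because runs of zeros are stored as holes. Generic Linux commands (nothing TrueNAS-specific) show the difference:

```shell
#!/bin/sh
# Demo: a sparse file's apparent size vs. its actual allocation.
set -e
tmp=$(mktemp -d)
truncate -s 100M "$tmp/sparse.img"   # 100 MiB of holes, no data written

stat -c 'apparent: %s bytes' "$tmp/sparse.img"  # apparent: 104857600 bytes
du -k "$tmp/sparse.img"                         # near zero actually allocated

rm -rf "$tmp"
```

rsync's -S recreates such holes on the destination instead of writing literal zeros; as noted above, ZFS inline compression makes long zero runs nearly free either way.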
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Excellent. Thank you for taking the time. I'm guessing I either need to log in as root or su in the terminal? No sudo for the rsync command to work properly?

Yes.

Look into running the command inside a tmux session if it’s going to run for some time.

That prevents network issues from causing a failure.

And if you do use tmux you could just use the web GUI shell. In SCALE it’s in System Settings.
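A minimal tmux workflow for this (the session name is arbitrary):

```shell
# Start a named session and run the long job inside it.
tmux new -s migrate

# ... run the rsync command inside the session ...
# Detach with Ctrl-b then d; the job keeps running on the server
# even if your SSH connection or browser shell drops.

# Reattach later, even from a fresh login:
tmux attach -t migrate
```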
 

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
Make sure that the root path (in the newly created dataset) has sufficient ownership/permissions. Or better yet, just run the rsync command as the root user (or use "sudo".)

Do you really need the "-S" parameter? Does the old dataset really contain many large "sparse" files? Regardless, ZFS's inline compression will make long sequences of null data in a file essentially "zero size" anyways.
I'll be honest, the parameters in the command I wrote were a direct copy from the link provided. I did read what each argument does, but didn't fully understand -S; I just assumed it wasn't a problem to leave it in, so I did.

Also, I plan to give the new path the exact same ACL permissions so they match. But I do intend to run it with sudo. I'm assuming that even then the files will keep their original ownership and not take on that of the root user?
 

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
Yes.

Look into running the command inside a tmux session if it’s going to run for sometime.

That prevents network issues causing a failure

And if you do use tmux you could just use the web gui shell. In scale it’s in system settings.
I appreciate the tips. Thank you!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16

PS: tmux and rsync are pre-installed on TrueNAS
Excellent, thanks again for providing the link. I do currently use rsync in the GUI in TrueNAS, but I'm using it to back up my data to an unRAID machine. Honestly I should make that machine a second TrueNAS, but it's a homelab so I enjoy playing with the different environments, and at least part of me thinks that having my data in a traditional ZFS RAIDZ2, and then in unRAID with its different approach to parity, is good for data resiliency.

Later this year I'll be setting up a remote TrueNAS server which is part of why I'm converting this dataset to an encrypted one.
 
Joined
Oct 22, 2019
Messages
3,641
I'm assuming that even still the files will maintain their original ownership and not take on that of the root user?
You're invoking the "-a" parameter, so yes, all ownership and permissions will be preserved. It's best to run these one-time migrations as the "root user" or with "sudo", since it will bypass any issues with multi-user/multi-group granular permissions during the rsync process.

(Imagine if the rsync process informed you of a bunch of "errors" at the end because you ran it as a regular user and it could not properly read or set the permissions for files/folders owned by other users and groups. You'd rightfully feel that it was not a successful "migration". Running rsync as "root" bypasses this issue.)
 

IroesStrongarm

Dabbler
Joined
Mar 9, 2024
Messages
16
You're invoking the "-a" parameter, so yes, all ownership and permissions will be preserved. It's best to run these one-time migrations as the "root user" or with "sudo", since it will bypass any issues with multi-user, multi-group granular permissions during the rsync process.
Awesome. I really appreciate you taking the time to respond and help.
 