Migrating legacy encryption to OpenZFS encryption

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
I don't want my pools to auto-decrypt; I want to enter the password manually. While moving my data to the new pool type, I noticed that when both pools are decrypted and I do a recursive send/receive, "zfs send -R pool@now | zfs recv pool2", the encryption doesn't inherit, whereas when I send each snapshot individually it does.

Is there a way to solve this?
 
Joined
Oct 22, 2019
Messages
3,641
When you created pool2 (which I'm assuming you created under TrueNAS Core 12.x) did you select encryption?

If so, did you make a backup of pool2's keys? The file name looks something like dataset_pool2_keys.json

If so, what about sending each dataset (one branch directly under pool) to live under pool2?

The command might look something like this (start with your smallest dataset as a test):
zfs send -R pool/data@now | zfs recv -v -d -x encryption pool2

Don't worry about passphrases just yet. That can be done later. However, keep in mind that the root dataset (i.e., pool2) cannot be passphrase-locked if you're using encryption and want the System Dataset to also live in pool2. Where is your System Dataset going to reside?
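
If you have several top-level datasets, you could loop over them; a minimal sketch, where data, media, and backups stand in for whatever actually lives under pool:
Code:
# send each top-level dataset separately so it inherits pool2's encryption
# (dataset names below are placeholders)
for ds in data media backups; do
    zfs send -R "pool/${ds}@now" | zfs recv -v -d -x encryption pool2
done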
 
Last edited:

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
Hmm, indeed that worked. Thank you.
The System Dataset has its own set of SSDs. The storage pool is encrypted. With legacy encryption you could enter a password, which I guess actually just encrypted the key file, .. not sure if it works the same with the new encryption, since it doesn't even hide the names of the ZFS datasets.

Yes, I created the pool through the web interface. I'm just planning to create a new pool and move the data over from legacy, but the pool needs to be encrypted, and the key/password can't stay on the system.

Edit: the key can stay on the system as long as it's encrypted, of course.
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
What about incremental sends, .. will this work?

zfs send -R -i pool/data@now pool/data@now2 | zfs recv -v -d -x encryption pool2

Something really strange is going on even when not sending it recursively, ..

zfs send -i lake/private/home@auto-2021-04-14_09-20 lake/private/home@2021041701 | zfs recv -duvF lake2/private/home

Then the target is bigger than the original?

[screenshot: dataset size on lake (source)] vs [screenshot: dataset size on lake2 (target)]
 

Last edited:
Joined
Oct 22, 2019
Messages
3,641
With legacy encryption you could enter a password, which I guess actually just encrypted the key file, .. not sure if it works the same with the new encryption, since it doesn't even hide the names of the ZFS datasets.

It's a completely different method of encrypting your data. You can look at it from a different angle if it helps illustrate it better:
  • GELI (legacy) encrypts the underlying partitions. Those partitions are decrypted, then combined into vdevs, which in turn make up your new (or imported) pool. If you never decrypt the partitions, you cannot continue, nor is any information about your pool, datasets, snapshots, metadata, structure, etc., available. Your ZFS pool (and datasets) running on top of GELI is, for all intents and purposes, non-encrypted; it is the lower level where encryption takes place. This is the equivalent, in the Linux world, of encrypting a partition with dm-crypt/LUKS and then creating an EXT4 file-system on top of the unlocked container. The EXT4 file-system is not encrypting anything; LUKS is. Just like in FreeNAS 11.3 and earlier, ZFS is not encrypting anything; GELI is.
  • Native ZFS encryption happens at the per-dataset level. The underlying partitions are not encrypted. This means you're not limited to "all-or-nothing" as you would be with GELI: you don't need to encrypt all datasets, and technically the "pool" is not encrypted (even if you encrypt the top-level "root" dataset, you can still create new child datasets without encryption). A bit old, yet still relevant, I jotted down an overview of what is hidden from outsiders when your system is powered off. The reason so much is visible with native ZFS encryption is that metadata is needed to create snapshots, run send/recv replications, run scrubs, etc., even while the dataset is "locked". It's quite neat and gives you more flexibility; see the sketch below this list.
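
To make the per-dataset nature concrete, here's a minimal sketch (dataset names invented for illustration):
Code:
# encryption is a per-dataset property, not a pool property
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase pool2/secrets
zfs create pool2/public                  # sibling dataset with no encryption at all
zfs get -r encryption,keystatus pool2    # shows the mix side by side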

But the pool needs to be encrypted. And the key/password can't stay on the system.

Edit: the key can stay on the system as long as it's encrypted of course.

If you use a "Key" rather than a "Passphrase", it generates a random 64-character string. This is kept on the boot device so that the datasets automatically unlock upon reboot. (It can be exported as a .json file when you "Export" your dataset keys.) At any time you can change a dataset's encryption property from "Key" to "Passphrase". (When you create a pool and select "Encryption", all it is doing is encrypting the root dataset, defaulting to a "Key". You can change this to a passphrase immediately afterwards.) If you use a passphrase, nothing is stored on the boot drive, nor will the dataset unlock automatically; it's up to you to memorize the passphrase.

Regardless of which method you choose, you can always switch at any time. Unlike LUKS, it's "one or the other"; you can't choose both for the same dataset. The 64-character string or the passphrase is used to encrypt/decrypt the master key, as you guessed earlier.

If you export a pool, however, then the keyfile is removed from the boot device, and you must re-upload it when you try to re-import the pool. If you don't, it will still import your pool, but all your datasets will be "locked" and the data inaccessible until you upload the keyfile or manually enter the 64-character string.
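
For what it's worth, the switch from key to passphrase can also be done with plain zfs (shown against pool2 as an example; on TrueNAS the web UI is the supported way, so the middleware stays in sync):
Code:
# re-wraps the master key with a passphrase prompted at the terminal;
# the data itself is not re-encrypted, only the wrapping key changes
zfs change-key -o keyformat=passphrase pool2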


But the pool needs to be encrypted.
Just remember that the entire "pool" is not encrypted. If you encrypt the "root" dataset, then by default all new child datasets created underneath it will inherit the encryption properties of the root/parent dataset(s). You can be as hybrid as you want, nesting encrypted datasets underneath non-encrypted ones and vice versa. Up to you!
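
For instance, both directions of nesting are legal (names invented for the example):
Code:
# plaintext child directly under the encrypted root dataset
zfs create -o encryption=off pool2/scratch
# and an encrypted child underneath that plaintext one
zfs create -o encryption=on -o keyformat=passphrase pool2/scratch/vault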
 
Joined
Oct 22, 2019
Messages
3,641
Then the target is bigger than the original?
I don't know why, but the same thing happens to me. Even though the source and target are almost the same, they vary by a tiny bit. I'm not sure if it's because the size is quickly being "estimated" or for some other technical reason?
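
If you want to dig into where the difference comes from, comparing logical sizes and compression ratios on both sides can help. Something like this, using the dataset names from your command:
Code:
# logicalused ignores compression and pool-geometry padding,
# so it's a fairer apples-to-apples comparison than "used"
zfs get used,logicalused,compressratio lake/private/home lake2/private/home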
 
Joined
Oct 22, 2019
Messages
3,641
zfs send -R -i pool/data@now pool/data@now2 | zfs recv -v -d -x encryption pool2

This depends on pool2/data already having the base snapshot (pool/data@now, received as pool2/data@now) to reference. How could the incremental send work if pool2 is empty? There's nothing to start from.
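
In other words, the destination has to be seeded with a full send first, something like:
Code:
# one-time full send: creates pool2/data along with the @now snapshot
zfs send -R pool/data@now | zfs recv -v -d -x encryption pool2
# later incrementals can then reference the snapshot both sides share
zfs send -R -i pool/data@now pool/data@now2 | zfs recv -v -d -x encryption pool2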

Going forward, you don't even need to use "-x encryption" or check the "Encryption" box for Replication Tasks from pool2. If you use the "-w" flag or check "Include Properties", it can send the dataset as a raw stream, "as is", meaning the dataset can remain locked, will arrive locked at the destination, and will require the same key/passphrase in order to decrypt it. :smile:
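
A raw-stream sketch (backuppool here is a hypothetical destination):
Code:
# -w sends the already-encrypted blocks verbatim; the dataset can stay locked
# on the source and arrives locked at the destination
zfs send -w pool2/data@now | zfs recv -v -d backuppool
# incrementals work the same way
zfs send -w -i pool2/data@now pool2/data@now2 | zfs recv -v -d backuppool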
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
No, I already sent an earlier snapshot:
Code:
# zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
freenas-boot   216G  8.22G   208G        -         -     1%     3%  1.00x    ONLINE  -
lake          34.4T  19.7T  14.7T        -         -     2%    57%  1.00x    ONLINE  /mnt
lake2         25.5T  19.7T  5.79T        -         -     0%    77%  1.00x    ONLINE  /mnt


And now I want to sync the last bits. After I've synced everything to lake2, I want to recreate lake, copy everything back, and then add the remaining disks from lake2 to the lake pool.
 
Joined
Oct 22, 2019
Messages
3,641
Do NOT destroy lake until you are 100% sure you can lock/unlock and access your data on lake2. Even "export" lake2, re-import it, and then see if you are able to access your data.

EDIT: Actually, I'm not sure what lake and lake2 are, so maybe it's feasible for your situation to keep lake as an emergency backup. All up to you!
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
Of course I'm going to reboot and see that all the data is there. But thank you for reminding me. Export/Import? That's new. Why would I need to be able to do that? I mean, what difference would it make?
 
Joined
Oct 22, 2019
Messages
3,641
Export/Import? that's new why would I need to able to do that?
Simulate changing to a new system, or popping the drives into an upgraded system. If you're able to Export the pool, then Re-Import it, and are able to unlock and access all the data again, it implies that you can do so in the future if you ever need to switch/upgrade systems or export the pool for whatever reason. (No need to touch any cables or unplug anything. It's just a quick "one-two" test run.) Better to do it now than to wait and be surprised. :wink:

You don't need to check the box for "delete shares that use this pool" when you Export/Disconnect the pool. Uncheck it.
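
For reference, the same round trip from the command line looks roughly like this (on TrueNAS the web UI is the supported path, since the middleware also manages the stored keys):
Code:
zpool export lake2
zpool import -R /mnt lake2
zfs load-key -r lake2    # prompts for passphrases; key-based datasets need their key file supplied
zfs mount -a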
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
Apparently something went wrong with the command:
zfs send -i lake/private/home@auto-2021-04-14_09-20 lake/private/home@2021041701 | zfs recv -duvF lake2/private/home

Suddenly all my directories are empty.. just going to destroy everything, restart, and set lake to read-only using:

zfs set readonly=on lake && zfs mount -a
 
Joined
Oct 22, 2019
Messages
3,641
Suddenly all my directories are empty
I noticed you're using the -u flag, which prevents the received file system from being mounted.

Using the -d flag on the receiving side should already take lake/private/home and receive it as lake2/private/home if you only specify lake2 on the recv command.
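
The name mapping with -d works like this (a sketch reusing your dataset names; note you'd still need to mount afterwards because of -u):
Code:
# -d strips the leading pool name ("lake") from the sent path and grafts
# the remainder onto the target, so lake/private/home lands at lake2/private/home
zfs send lake/private/home@2021041701 | zfs recv -duv lake2
zfs mount lake2/private/home    # -u skipped mounting the received file system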
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
Would that be an accurate reading?
Code:
# zpool iostat
                capacity     operations     bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
freenas-boot  8.22G   208G      3      4  68.8K  51.9K
lake          19.7T  14.7T    879      0   406M  9.51K
lake2         1.36T  24.1T      4    490  53.7K   396M
------------  -----  -----  -----  -----  -----  -----
 
Joined
Oct 22, 2019
Messages
3,641
I'm not sure. Is that after sending one dataset?
 
Joined
Oct 22, 2019
Messages
3,641
Don't see anything wrong, guess you'll find out. :wink: Do you think it was the -u flag that yielded "empty" directories? I wish I had been able to reply sooner, to save you the trouble.
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
I think so, .. not sure. I also ran zfs mount -a, which should have fixed it? However, the backups dataset still had all its data, while I applied the same commands to it. But that's OK, I'll get another shot soon when I copy back to the new lake layout. I'll surely test it.
 