Migrating legacy encryption to OpenZFS encryption

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
Done the import/export, checked that the data was present, and all seems fine; however, when booting I've noticed a lot of errors:

Code:
Importing lake2
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/3fd8269c-c854-11ea-9f0d-0025904754df.eli': vdev_geom_open: failed to open [error=2]
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/cefedfe1-c907-11ea-9f0d-0025904754df.eli': vdev_geom_open: failed to open [error=2]
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/64a71cb3-f8ee-11ea-a508-0025904754df.eli': vdev_geom_open: failed to open [error=2]
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/1d395e48-f833-11ea-a508-0025904754df.eli': vdev_geom_open: failed to open [error=2]
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/cee5d79c-c907-11ea-9f0d-0025904754df.eli': vdev_geom_open: failed to open [error=2]
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/0f302130-1d48-11eb-bd85-0025904754df.eli': vdev_geom_open: failed to open [error=2]
spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): vdev tree has 3 missing top-level vdevs.
spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): current settings allow for maximum 2 missing top-level vdevs at this stage.
spa_misc.c:396:spa_load_failed(): spa_load($import, config untrusted): FAILED: unable to open vdev tree [error=2]
vdev.c:183:vdev_dbgmsg_print_tree():   vdev 0: root, guid: 915620538472722083, path: N/A, can't open
vdev.c:183:vdev_dbgmsg_print_tree():     vdev 0: mirror, guid: 14581729691160629520, path: N/A, can't open
vdev.c:183:vdev_dbgmsg_print_tree():       vdev 0: disk, guid: 16116389798819435027, path: /dev/gptid/64a71cb3-f8ee-11ea-a508-0025904754df.eli, can't open
vdev.c:183:vdev_dbgmsg_print_tree():       vdev 1: disk, guid: 14794953279794194630, path: /dev/gptid/1d395e48-f833-11ea-a508-0025904754df.eli, can't open
vdev.c:183:vdev_dbgmsg_print_tree():     vdev 1: mirror, guid: 4420925059830701253, path: N/A, can't open
vdev.c:183:vdev_dbgmsg_print_tree():       vdev 0: disk, guid: 8626768977962905497, path: /dev/gptid/3fd8269c-c854-11ea-9f0d-0025904754df.eli, can't open
vdev.c:183:vdev_dbgmsg_print_tree():       vdev 1: disk, guid: 6330681829550882045, path: /dev/gptid/0f302130-1d48-11eb-bd85-0025904754df.eli, can't open
vdev.c:183:vdev_dbgmsg_print_tree():     vdev 2: mirror, guid: 11180815544697389582, path: N/A, can't open
vdev.c:183:vdev_dbgmsg_print_tree():       vdev 0: disk, guid: 5385716001711711582, path: /dev/gptid/cee5d79c-c907-11ea-9f0d-0025904754df.eli, can't open
vdev.c:183:vdev_dbgmsg_print_tree():       vdev 1: disk, guid: 17431068620542736537, path: /dev/gptid/cefedfe1-c907-11ea-9f0d-0025904754df.eli, can't open
spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): UNLOADING
spa.c:6138:spa_tryimport(): spa_tryimport: importing lake2
spa.c:6143:spa_tryimport(): spa_tryimport: using cachefile '/data/zfs/zpool.cache.saved'
spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADING
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/9b30d724-4c44-11eb-aac2-0025904754df': best uberblock found for spa $import. txg 996882
spa_misc.c:411:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=996882
spa.c:8187:spa_async_request(): spa=$import async request task=2048
spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): LOADED
spa_misc.c:411:spa_load_note(): spa_load($import, config trusted): UNLOADING
spa.c:5990:spa_import(): spa_import: importing lake2
spa_misc.c:411:spa_load_note(): spa_load(lake2, config trusted): LOADING
vdev.c:129:vdev_dbgmsg(): disk vdev '/dev/gptid/9a94062e-4c44-11eb-aac2-0025904754df': best uberblock found for spa lake2. txg 996882
spa_misc.c:411:spa_load_note(): spa_load(lake2, config untrusted): using uberblock with txg=996882
spa_misc.c:411:spa_load_note(): spa_load(lake2, config trusted): read 51 log space maps (51 total blocks - blksz = 131072 bytes) in 154 ms
mmp.c:241:mmp_thread_start(): MMP thread started pool 'lake2' gethrtime 30268331687
spa.c:8187:spa_async_request(): spa=lake2 async request task=1
spa.c:8187:spa_async_request(): spa=lake2 async request task=2048
spa_misc.c:411:spa_load_note(): spa_load(lake2, config trusted): LOADED
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 6, smp_length 680, unflushed_allocs 0, unflushed_frees 4096, freed 0, defer 0 + 0, unloaded time 30278 ms, loading_time 8 ms, ms_max_size 17179869184, max size err
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 18, smp_length 1016, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30287 ms, loading_time 0 ms, ms_max_size 17179869184, max size erro
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 20, smp_length 992, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30288 ms, loading_time 0 ms, ms_max_size 17179869184, max size error
GEOM_MIRROR: Device mirror/swap0 launched (2/2).
GEOM_ELI: Device mirror/swap0.eli created.
GEOM_ELI: Encryption: AES-XTS 128
GEOM_ELI:     Crypto: hardware
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 25, smp_length 169608, unflushed_allocs 212992, unflushed_frees 94208, freed 0, defer 0 + 0, unloaded time 30288 ms, loading_time 31 ms, ms_max_size 15741296640, m
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 28, smp_length 29760, unflushed_allocs 204800, unflushed_frees 172032, freed 0, defer 0 + 0, unloaded time 30319 ms, loading_time 15 ms, ms_max_size 16792944640, m
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 29, smp_length 42624, unflushed_allocs 331776, unflushed_frees 143360, freed 0, defer 0 + 0, unloaded time 30335 ms, loading_time 11 ms, ms_max_size 16569024512, m
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 30, smp_length 78048, unflushed_allocs 344064, unflushed_frees 143360, freed 0, defer 0 + 0, unloaded time 30347 ms, loading_time 3 ms, ms_max_size 16836927488, ma
spa_history.c:309:spa_history_log_sync(): txg 996884 open pool version 5000; software version unknown; uts  12.2-RELEASE-p6 1202000 amd64
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 519, smp_length 1952, unflushed_allocs 1585152, unflushed_frees 122880, freed 4096, defer 0 + 0, unloaded time 30353 ms, loading_time 10 ms, ms_max_size 5337391104
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 526, smp_length 1728, unflushed_allocs 2879488, unflushed_frees 569344, freed 4096, defer 0 + 0, unloaded time 30363 ms, loading_time 10 ms, ms_max_size 6921289728
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 528, smp_length 1816, unflushed_allocs 1409024, unflushed_frees 303104, freed 0, defer 8192 + 0, unloaded time 30555 ms, loading_time 10 ms, ms_max_size 8581771264
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 530, smp_length 664, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30555 ms, loading_time 20 ms, ms_max_size 8589844480, max size erro
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996884, spa lake2, vdev_id 0, ms_id 529, smp_length 1672, unflushed_allocs 7114752, unflushed_frees 561152, freed 0, defer 106496 + 0, unloaded time 30555 ms, loading_time 31 ms, ms_max_size 85781463
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996885, spa lake2, vdev_id 0, ms_id 527, smp_length 1496, unflushed_allocs 1409024, unflushed_frees 57344, freed 0, defer 8192 + 0, unloaded time 30555 ms, loading_time 34 ms, ms_max_size 5609684992,
spa.c:8187:spa_async_request(): spa=lake2 async request task=32
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996885, spa lake2, vdev_id 0, ms_id 534, smp_length 928, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30590 ms, loading_time 17 ms, ms_max_size 8589848576, max size erro
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996885, spa lake2, vdev_id 0, ms_id 531, smp_length 1280, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30566 ms, loading_time 49 ms, ms_max_size 8589828096, max size err
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996885, spa lake2, vdev_id 0, ms_id 533, smp_length 720, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30587 ms, loading_time 34 ms, ms_max_size 8589905920, max size erro
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996886, spa lake2, vdev_id 0, ms_id 532, smp_length 1040, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30576 ms, loading_time 47 ms, ms_max_size 8589828096, max size err
spa_history.c:309:spa_history_log_sync(): txg 996886 import pool version 5000; software version unknown; uts  12.2-RELEASE-p6 1202000 amd64
metaslab.c:2422:metaslab_load_impl(): metaslab_load: txg 996886, spa lake2, vdev_id 0, ms_id 535, smp_length 768, unflushed_allocs 0, unflushed_frees 0, freed 0, defer 0 + 0, unloaded time 30607 ms, loading_time 26 ms, ms_max_size 8589869056, max size erro
Pools import completed
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
I finished copying all the data, but now I've run into another problem. When I tried to copy the data back, I got this error:

cannot send <pool/dataset@snapshot>: encrypted dataset <pool/dataset> may not be sent with properties without the raw flag.

So I looked through the forum and found this command string:
zfs send -R -w backup_pool/server-backup/services@migrattion | zfs recv -F -e tank

The issue now is that I have to decrypt both the dataset and the pool...?
 
Joined
Oct 22, 2019
Messages
3,641
Going forward, you don't even need to use "-x encryption" or check the "encryption" box for Replication Tasks from pool2. If you use the "-w" flag or check "Include Properties", it can send the dataset as a raw stream, "as is", meaning the dataset can remain locked, and will be locked at the destination, requiring the same key/passphrase in order to decrypt it.

It's to do with the above. Your natively encrypted datasets can be sent as raw streams (think of it "as is"), meaning the data is not encrypted ---> decrypted ---> re-encrypted, but rather the records/blocks are sent "as is". You needn't unlock before sending, and the receiving side will receive it locked. (As a mental aid, imagine you send as a raw stream an encrypted dataset to a friend's TrueNAS server. They'll receive it "as is", but will be unable to unlock/decrypt it, unless you tell them your passphrase or share your 64-character keystring.)
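As a sketch of what that looks like on the command line (the pool, dataset, and snapshot names here are hypothetical, not from the thread):

```shell
# Snapshot a natively encrypted dataset. It does NOT need to be unlocked.
zfs snapshot pool2/secret@xfer

# -w sends the raw, still-encrypted records "as is"; -R includes child
# datasets and their snapshots. No decrypt/re-encrypt happens in transit.
zfs send -R -w pool2/secret@xfer | zfs recv tank/secret

# The dataset arrives locked. It only becomes readable on the destination
# once the original key/passphrase is loaded:
zfs load-key tank/secret
zfs mount tank/secret
```

Without `zfs load-key`, the received dataset sits on the destination exactly like it would on the friend's server in the example above: present, but undecryptable.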



The issue now is that I have to decrypt both the dataset and the pool...?
The "pool" is not encrypted, as mentioned earlier. When you select the "Encryption" option during pool creation, it encrypts the top-level root dataset. How you proceed from there is up to you. You can leave the default "Inherit encryption" option upon creation of new datasets, or you can uncheck "Inherit encryption" and use a different passphrase/key or even disable encryption for the new child dataset.

If you keep it "simple" and never uncheck "Inherit encryption", it works and behaves as if the "entire" pool is encrypted, and will only require the root dataset to be unlocked. Doing so will automatically unlock/decrypt the inherited datasets underneath, as they share the inherited encryption properties (keystring/passphrase) and their "encryptionroot" is the root dataset.
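A quick sketch of those three choices (dataset names are hypothetical; "tank" stands for a pool created with the Encryption box checked, so the root dataset tank is the encryptionroot):

```shell
# Default "Inherit encryption": unlocking tank also unlocks this child,
# because its encryptionroot is tank.
zfs create tank/photos

# Uncheck "Inherit encryption" / use a different passphrase: this child
# becomes its own encryptionroot and must be unlocked separately.
zfs create -o encryption=on -o keyformat=passphrase tank/private

# Or disable encryption entirely for a new child dataset:
zfs create -o encryption=off tank/public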

You can check with this command, though the GUI also implies it if you look at the layout (when there is no icon next to a dataset's name, it's inheriting the encryption properties from its "encryptionroot"):
zfs get -r -t filesystem encryptionroot poolname
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
I figured it out: you can set it to inherit in the web interface, under the encryption options.
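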

[screenshot: dataset encryption options in the web interface]


I've used "zfs send -R -w lake2/backups@2021041801 | zfs recv -F -v -d lake"

The second time, though, when I read the man page to see what it exactly means, it doesn't make much sense to me.

They both do the same thing in a way, but one uses the first element and the other uses the last; what that exactly means, though, ..
-d Discard the first element of the sent snapshot's file system name,
using the remaining elements to determine the name of the target
file system for the new snapshot as described in the paragraph
above.

-e Discard all but the last element of the sent snapshot's file system
name, using that element to determine the name of the target file
system for the new snapshot as described in the paragraph above.
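The name surgery the two flags describe can be mimicked with plain shell parameter expansion, which may make the man page wording clearer (the names below are hypothetical examples, not from the thread):

```shell
# The sent snapshot's file system name, and the destination pool
# given to "zfs recv" -- both hypothetical.
sent="sourcepool/media/videos"
dest="destpool"

# -d: discard the FIRST element (the source pool name), keep the rest.
d_result="$dest/${sent#*/}"       # destpool/media/videos

# -e: discard all but the LAST element.
e_result="$dest/${sent##*/}"      # destpool/videos

echo "$d_result"
echo "$e_result"
```

So `-d` preserves the dataset hierarchy under the new pool, while `-e` flattens everything to a single dataset name directly under the receive target.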
 
Joined
Oct 22, 2019
Messages
3,641
When using "-d destpool" on the recv side, it slices off the part in brackets:
[sourcepool]/media/videos@migrate

and saves it like so:
destpool/media/videos@migrate
 

Ofloo

Explorer
Joined
Jun 11, 2020
Messages
60
And what would the -e do then? I mean, if that takes the last part on the recv side, I would understand it to mean that it would slice off "videos" in this case? That's why it's confusing for me, because why would anyone want that?
 
Joined
Oct 22, 2019
Messages
3,641
And what would the -e do then? I mean, if that takes the last part on the recv side, I would understand it to mean that it would slice off "videos" in this case? That's why it's confusing for me, because why would anyone want that?

-e Discard all but the last element

It would slice out the part in brackets:
[sourcepool/media]/videos@migrate
 