Migrating Legacy encrypted pool - potential side effects

moelassus

Dabbler
Joined
May 15, 2018
Messages
34
I have a TrueNAS Core system with a legacy encrypted pool (PrimaryPool) which I would like to change to a non-encrypted pool. I don't have enough capacity to create a new pool to migrate to.

I understand that the only way to do this is to export/destroy the PrimaryPool and recreate it unencrypted. That is of course the easy part.

Question: When I destroy my PrimaryPool, what will become of the SMB shares, Replication Tasks, and Snapshot Tasks associated with that pool? Will TrueNAS just pick up replicating new snapshots? Once the pool is recreated and the data is restored, will all of those configurations remain intact? Note: I have no jails or packages installed.

Multiple backups of my data exist both onsite and offsite, so I'm not worried about losing my data, just about how much reconfiguration this exercise is going to require.

Thank you!
 
Joined
Oct 22, 2019
Messages
3,641
Will TrueNAS just pick up replicating new snapshots?
Your replication tasks will be unable to support an incremental transfer if you start with a new pool. All your snapshots (which were used for each incremental replication) will be gone.

* I would also exercise great caution so that you don't accidentally destroy your backups with an ill-configured task.

What would be ideal is to do a full filesystem replication from your backup pool to your new primary pool, which should populate your primary pool with the same exact snapshots for every dataset involved. (Pay close attention that you retain the same exact hierarchy. The top-level root dataset throws people off when replicating/restoring from a backup.)
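A full filesystem replication along these lines can be sketched from the command line. This is only an illustration, not the exact procedure: the pool and dataset names (`BackupPool`, `PrimaryPool`) and the snapshot name are hypothetical, and in practice you would typically drive this through a TrueNAS Replication Task rather than raw `zfs` commands.

```shell
# Hypothetical layout: BackupPool/PrimaryPool holds the replica,
# PrimaryPool is the freshly created (unencrypted) pool.

# Take (or identify) a recursive snapshot on the backup side:
zfs snapshot -r BackupPool/PrimaryPool@migrate

# Recursively send every child dataset and its full snapshot
# history into the new pool, preserving the hierarchy:
zfs send -R BackupPool/PrimaryPool@migrate | zfs recv -F PrimaryPool
```

The `-R` flag on `zfs send` is what carries the whole dataset tree and its snapshots; `-F` on `zfs recv` lets the receive roll back/overwrite the target, so double-check you are pointing it at the new, empty pool and not at your backup.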


As far as SMB shares go, technically, as long as the paths and permissions are the same, you should be able to continue using the same shares you've already configured. (I would stop the SMB service in the meantime, while you're doing your migration from the backup pool to the new primary pool.)


* In fact, while you're at it, you should create a "checkpoint" for your backup pool(s), and then later remove the checkpoint after you confirm the migration went smoothly.

To create a checkpoint:
Code:
zpool checkpoint nameofpool

To view if a checkpoint exists (it will have a "size"):
Code:
zpool get checkpoint nameofpool

To delete a checkpoint:
Code:
zpool checkpoint -d nameofpool
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I understand that the only way to do this is to export/destroy the PrimaryPool and recreate it unencrypted. That is of course the easy part.
Wrong. :smile:

 

moelassus

Dabbler
Joined
May 15, 2018
Messages
34
* In fact, while you're at it, you should create a "checkpoint" for your backup pool(s), and then later remove the checkpoint after you confirm the migration went smoothly.

To create a checkpoint:
Code:
zpool checkpoint nameofpool

To view if a checkpoint exists (it will have a "size"):
Code:
zpool get checkpoint nameofpool

To delete a checkpoint:
Code:
zpool checkpoint -d nameofpool
Thanks for this. What exactly does the checkpoint do?
 

moelassus

Dabbler
Joined
May 15, 2018
Messages
34
Wrong. :smile:

Yeah, I read that thread, but this would be more time-consuming than just rebuilding the whole thing from scratch! Thanks for the suggestion though.
 
Joined
Oct 22, 2019
Messages
3,641
Thanks for this. What exactly does the checkpoint do?
It marks a state of your pool that you can "rewind" to when you import it again, just in case something really bad happens. (Yes, even if you destroy snapshots, upgrade features, destroy entire datasets, etc.)

(Good for a "just in case, one-time-use" emergency safeguard.)
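The "rewind" happens at import time. As a sketch (pool name hypothetical, and note that rewinding discards everything written to the pool since the checkpoint, so it really is a last-resort move):

```shell
# Hypothetical pool name. Export the pool, then re-import it
# rewound to the state captured by the checkpoint:
zpool export BackupPool
zpool import --rewind-to-checkpoint BackupPool
```

If the migration went fine, you instead discard the checkpoint with `zpool checkpoint -d` as shown above, which frees the space it was holding.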
 

moelassus

Dabbler
Joined
May 15, 2018
Messages
34
I have successfully destroyed the legacy encrypted pool, recreated a non-encrypted pool of the same name and restored my data.

Two minor issues were encountered. The first was that both the BackupPool and the newly restored PrimaryPool had iocage active on them; the failed task provided instructions on how to activate iocage on PrimaryPool, which resolved that issue. The second was that PrimaryPool and its datasets were marked read-only after restoration. That was my fault for leaving the Read Only flag set in my restore replication job, and it was easy to resolve.

All shares, permissions, and tasks remained fully intact. The tasks were marked disabled, which was actually helpful.

All snapshots on PrimaryPool were deleted, which I hated to lose, but I still have the replicated snapshots on my backup NAS. I've got enough storage on the backup NAS to back up to a different location, so I can keep those now-orphaned snapshots around for a few months.

Thanks so much for the help @winnielinnie
 