How to encrypt an existing raidz (or mirror)

clinta

Cadet
Joined
Dec 18, 2013
Messages
7
Your pool must be able to survive losing a disk (raidz1, raidz2, raidz3, or mirror), and you must be willing to accept the risk of reducing this redundancy by one disk during the conversion.

1. Start with your unencrypted pool:
Code:
[root@freenas1] ~# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231  ONLINE       0     0     0
            gptid/58664317-6857-11e3-8b4f-000c296ed231  ONLINE       0     0     0
            gptid/58abaa4a-6857-11e3-8b4f-000c296ed231  ONLINE       0     0     0

errors: No known data errors

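Before you take anything offline, it helps to know which physical disk each gptid label sits on. A quick sanity check (not part of the original steps):
Code:
# map each gptid label to its underlying partition (e.g. da1p2) so you know
# which physical disk you are about to take offline
glabel status | grep gptid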

2. Create your geli encryption key (in /tmp since it's one of the few writable directories):
Code:
[root@freenas1] ~# dd if=/dev/random of=/tmp/geli.key bs=64 count=1
1+0 records in
1+0 records out
64 bytes transferred in 0.000033 secs (1945184 bytes/sec)


3. Copy /tmp/geli.key to your computer; on Windows you can use WinSCP to do this.
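
If you're on a Mac or Linux machine, plain scp works just as well; roughly like this, run from your own computer (the host name and destination path are placeholders):
Code:
# pull the key off the NAS; adjust the host/IP and destination to your setup
scp root@freenas1:/tmp/geli.key ~/freenas-backup/geli.key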

4. Take your first disk offline:
Code:
[root@freenas1] ~# zpool offline tank gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231
[root@freenas1] ~# zpool status
  pool: tank
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested
config:
 
        NAME                                            STATE     READ WRITE CKSUM
        tank                                            DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            7958761323130265714                         OFFLINE      0     0     0  was /dev/gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231
            gptid/58664317-6857-11e3-8b4f-000c296ed231  ONLINE       0     0     0
            gptid/58abaa4a-6857-11e3-8b4f-000c296ed231  ONLINE       0     0     0
 
errors: No known data errors


5. Encrypt the partition on the disk you just took offline, using your generated key. Choose a strong passphrase. You will do this for each disk; use the same passphrase every time.
Code:
[root@freenas1] ~# geli init -s 4096 -K /tmp/geli.key gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231
Enter new passphrase:
Reenter new passphrase:
 
Metadata backup can be found in /var/backups/gptid_57ef2eb6-6857-11e3-8b4f-000c296ed231.eli and
can be restored with the following command:
 
        # geli restore /var/backups/gptid_57ef2eb6-6857-11e3-8b4f-000c296ed231.eli gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231

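It's also worth copying that metadata backup off the box along with the key. On FreeNAS 9.x, /var is (as far as I know) a memory-backed filesystem, so the backup in /var/backups won't survive a reboot. Something like this, with the destination being a placeholder:
Code:
# assumption: /var/backups is RAM-backed and lost on reboot, so keep a copy of
# the geli metadata somewhere safe together with geli.key
scp /var/backups/gptid_57ef2eb6-6857-11e3-8b4f-000c296ed231.eli user@your-pc:~/freenas-backup/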

6. Attach the newly encrypted disk:
Code:
[root@freenas1] ~# geli attach -k /tmp/geli.key gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231
Enter passphrase:

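Before moving on you can confirm the encrypted provider actually showed up; a quick check:
Code:
# the .eli device should now exist and geli should report it as attached
geli status | grep 57ef2eb6
ls /dev/gptid/ | grep '\.eli$'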

7. Replace the unencrypted disk in the zpool with the encrypted one. Wait for resilvering to complete:
Code:
[root@freenas1] ~# zpool replace tank gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231 gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231.eli
[root@freenas1] ~# zpool status
  pool: tank
state: ONLINE
  scan: resilvered 356K in 0h0m with 0 errors on Wed Dec 18 18:53:58 2013
config:
 
        NAME                                                STATE     READ WRITE CKSUM
        tank                                                ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231.eli  ONLINE       0     0     0
            gptid/58664317-6857-11e3-8b4f-000c296ed231      ONLINE       0     0     0
            gptid/58abaa4a-6857-11e3-8b4f-000c296ed231      ONLINE       0     0     0
 
errors: No known data errors

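On an empty test pool the resilver finishes in seconds, as above; with real data it can take many hours. A rough way to wait on it from the shell (just a sketch):
Code:
# poll until the scan: line no longer says "resilver in progress"
while zpool status tank | grep -q "resilver in progress"; do
        sleep 300
done
zpool status tank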

8. Repeat steps 4-7 for the remaining disks:
Code:
[root@freenas1] ~# zpool offline tank gptid/58664317-6857-11e3-8b4f-000c296ed231
[root@freenas1] ~# geli init -s 4096 -K /tmp/geli.key gptid/58664317-6857-11e3-8b4f-000c296ed231
Enter new passphrase:
Reenter new passphrase:
 
Metadata backup can be found in /var/backups/gptid_58664317-6857-11e3-8b4f-000c296ed231.eli and
can be restored with the following command:
 
        # geli restore /var/backups/gptid_58664317-6857-11e3-8b4f-000c296ed231.eli gptid/58664317-6857-11e3-8b4f-000c296ed231
 
[root@freenas1] ~# geli attach -k /tmp/geli.key gptid/58664317-6857-11e3-8b4f-000c296ed231
Enter passphrase:
[root@freenas1] ~# zpool replace tank gptid/58664317-6857-11e3-8b4f-000c296ed231 gptid/58664317-6857-11e3-8b4f-000c296ed231.eli
[root@freenas1] ~# zpool status
  pool: tank
state: ONLINE
  scan: resilvered 364K in 0h0m with 0 errors on Wed Dec 18 18:56:38 2013
config:
 
        NAME                                                STATE     READ WRITE CKSUM
        tank                                                ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231.eli  ONLINE       0     0     0
            gptid/58664317-6857-11e3-8b4f-000c296ed231.eli  ONLINE       0     0     0
            gptid/58abaa4a-6857-11e3-8b4f-000c296ed231      ONLINE       0     0     0
 
errors: No known data errors
[root@freenas1] ~# zpool offline tank gptid/58abaa4a-6857-11e3-8b4f-000c296ed231
[root@freenas1] ~# geli init -s 4096 -K /tmp/geli.key gptid/58abaa4a-6857-11e3-8b4f-000c296ed231
Enter new passphrase:
Reenter new passphrase:
 
Metadata backup can be found in /var/backups/gptid_58abaa4a-6857-11e3-8b4f-000c296ed231.eli and
can be restored with the following command:
 
        # geli restore /var/backups/gptid_58abaa4a-6857-11e3-8b4f-000c296ed231.eli gptid/58abaa4a-6857-11e3-8b4f-000c296ed231
 
[root@freenas1] ~# geli attach -k /tmp/geli.key gptid/58abaa4a-6857-11e3-8b4f-000c296ed231
Enter passphrase:
[root@freenas1] ~# zpool replace tank gptid/58abaa4a-6857-11e3-8b4f-000c296ed231 gptid/58abaa4a-6857-11e3-8b4f-000c296ed231.eli
[root@freenas1] ~# zpool status
  pool: tank
state: ONLINE
  scan: resilvered 344K in 0h0m with 0 errors on Wed Dec 18 18:58:21 2013
config:
 
        NAME                                                STATE     READ WRITE CKSUM
        tank                                                ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/57ef2eb6-6857-11e3-8b4f-000c296ed231.eli  ONLINE       0     0     0
            gptid/58664317-6857-11e3-8b4f-000c296ed231.eli  ONLINE       0     0     0
            gptid/58abaa4a-6857-11e3-8b4f-000c296ed231.eli  ONLINE       0     0     0
 
errors: No known data errors

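If you'd rather not retype the same four commands for every disk, the repetition can be scripted. This is only a sketch of the loop described above, not something I've run as-is; adjust the gptid list to your own pool and never touch more than one disk at a time:
Code:
#!/bin/sh
# untested sketch: encrypt the remaining raidz members one at a time
KEY=/tmp/geli.key
for GPTID in gptid/58664317-6857-11e3-8b4f-000c296ed231 \
             gptid/58abaa4a-6857-11e3-8b4f-000c296ed231; do
        zpool offline tank ${GPTID}
        geli init -s 4096 -K ${KEY} ${GPTID}       # prompts for the passphrase
        geli attach -k ${KEY} ${GPTID}              # prompts again
        zpool replace tank ${GPTID} ${GPTID}.eli
        # do not start the next disk until the resilver is finished
        while zpool status tank | grep -q "resilver in progress"; do
                sleep 300
        done
done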

9. Detach the pool from FreeNAS using the web GUI. DO NOT CHECK "Mark the disks as new"!

10. Detach all of your geli-encrypted disks:
Code:
[root@freenas1] ~# geli detach /dev/gptid/*.eli

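geli status is a quick way to confirm none of the pool's providers are still attached (FreeNAS's encrypted swap devices may still be listed, which is fine):
Code:
# the pool's gptid .eli providers should no longer appear here
geli status | grep gptid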

11. Use Auto-Import to import your encrypted ZFS volume in the FreeNAS GUI. Choose "Yes: decrypt the disks".

12. Highlight the disks you encrypted. Click browse and upload the geli.key file you backed up to your computer in step 3. Type your passphrase.

13. Select your volume to import and click OK.

Warning: I have only tested this on FreeNAS-9.2.0-RC2, and I have not tested it with production data. Use at your own risk.
 

sully

Explorer
Joined
Aug 23, 2012
Messages
60
Interesting... Anyone else give this a go yet? I'm loathing the fact that I need to dump my current array to external drives and rebuild an encrypted RAID.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Looking through it I don't see why it wouldn't work. But there are always edge cases that seem to bite people. My advice is to use external drives if you have that option. The other option could permanently kill your pool or leave you in a position where you can't mount it.
 

clinta

Cadet
Joined
Dec 18, 2013
Messages
7
It worked fine for me in my testing, but I'd never advise something like this without having backups to restore from if something goes wrong. I'd never advise having storage at all without having backups.

The biggest danger is having a drive fail while your redundancy is reduced.
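
One way to shrink that window is to scrub immediately before you start, so any latent read errors on the other disks show up while full redundancy is still in place:
Code:
# run a scrub and make sure it completes with 0 errors before offlining anything
zpool scrub tank
zpool status tank     # check the scan: line until the scrub has finished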
 

sully

Explorer
Joined
Aug 23, 2012
Messages
60
Looking through it I don't see why it wouldn't work. But there are always edge cases that seem to bite people. My advice is to use external drives if you have that option. The other option could permanently kill your pool or leave you in a position where you can't mount it.



Agreed. I'm running a RAIDZ2 (6 drives) so I think I may give this a go...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'd absolutely back up your data to be safe!
 

xioustic

Dabbler
Joined
Sep 4, 2014
Messages
23
I'd also like to know if someone has tried this recently.

Once I get a full mirror of the stuff I can't afford to lose on my FreeNAS array, I think I'm going to have a go at it. Would feel a bit better if someone could chime in and let me know if they've done this or believe the steps given will work for FreeNAS 9.3. Not sure if the encryption mechanism has changed under the hood in the past 2 years.
 

Myron

Cadet
Joined
Jan 23, 2016
Messages
5
Greetings!

I am currently encrypting 4 disks in a raidz1 array on my FreeNAS system (FreeNAS-9.3-STABLE-201512121950) running on 4x3TB hard disks. So far a resilver takes somewhere between 16 and 24 hours to complete, depending on the utilization of the box. I am logging my success (or the lack of.. ;-) ) in case someone is interested and will provide an update here.
 

Myron

Cadet
Joined
Jan 23, 2016
Messages
5
Hello again,

I have now encrypted all the disks in the pool and I am able to mount the pool with the proper geli.key and passphrase. However, there are some oddities after rebooting the system:

- After rebooting, the volume is already known to the system and listed as FAULTED

- When trying to use the "Import Volume" dialog in the GUI (Uploading the key and entering the passphrase) the disks are attached via the geli attach command (I can see that in the debug.log)
- The snapshot configuration was gone (Assumption: It was removed when detaching the pool?)
-- This works properly if I have NOT rebooted the FreeNAS system after encrypting all the disks; the problem only appears after rebooting the system
-- When the FreeNAS system has been rebooted, the last dialog (Step 3 of 3), which asks which volume to import, shows an empty list of volumes
-- The pool is available after pressing Cancel in step 3. However, it is not mounted, and when trying to list the jails, FreeNAS seems to try to create the jails directory/dataset, which fails because it already exists
-- This is still the case when I create a new key and passphrase via the GUI for the volume (Encryption Re-key, Change Passphrase, Add recovery key)

When I explicitly detach the pool from the GUI first, importing the volume works and the last step shows the proper volume; the jails are also available again. However, the snapshot tasks are gone again, which is rather unfortunate. So that seems to be related to detaching the pool from the GUI.


Is that expected behaviour when using the above-mentioned steps?
Is there a way to restore the jails configuration, as I would like to have them back?

Thanks, Regards,
Andreas
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
- The snapshot configuration was gone (Assumption: It was removed when detaching the pool?)
Correct. My experience shows that detaching a pool removes all associated items (shares, replication, snapshots, etc.).

Sorry, I don't know the encryption stuff.
 

Myron

Cadet
Joined
Jan 23, 2016
Messages
5
Allow me to answer myself:

-- This is still the case when i create a new key and passphrase via the GUI for the volume (Encryption Re-key, Change Passphrase, Add recovery key)

The solution to my issue is the following: reboot after re-keying, setting a passphrase, and adding a recovery key for the volume. After the reboot the encryption key is saved in /data/geli and you are then able to simply unlock the volume with the passphrase. All services can be restarted afterwards and everything is back to normal.
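
For example, after the reboot you can check that the key really landed there (assuming the default /data/geli location mentioned above):
Code:
# the GUI-managed encryption key should show up here after re-keying and rebooting
ls -l /data/geli/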

Regards,
Andreas
 

Revolution

Dabbler
Joined
Sep 8, 2015
Messages
39
I have a little question to this topic:

Successfully transferred my entire pool over to another set of HDDs in a mirror setup. I made a backup of the config beforehand. If I encrypt my main pool now via the GUI and then reload the config, will all of my settings, shares, and plugins survive that?

Greetings
 

xioustic

Dabbler
Joined
Sep 4, 2014
Messages
23
Allow me to answer myself:



The solution to my issue is the following: reboot after re-keying, setting a passphrase, and adding a recovery key for the volume. After the reboot the encryption key is saved in /data/geli and you are then able to simply unlock the volume with the passphrase. All services can be restarted afterwards and everything is back to normal.

Regards,
Andreas

Andreas,

I actually just finished this process on all 6 of my 4TB+ drives. It took a week or two to properly resilver; I was lucky enough that no unplanned reboots happened after I passed the 2-drive mark (it's raidz2).

So for the original guide I've finished up to step 8, with this output:
Code:
[root@freenas] ~# zpool status
  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Thu Jan  7 03:48:33 2016
config:

        NAME            STATE     READ WRITE CKSUM
        freenas-boot    ONLINE       0     0     0
          da0p2         ONLINE       0     0     0

errors: No known data errors

  pool: tank
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 10h39m with 0 errors on Mon Feb  1 14:39:33 2016
config:

        NAME                                                STATE     READ WRITE CKSUM
        tank                                                ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/bb673d72-5111-11e4-8b24-d050991b63e9.eli  ONLINE       0     0     0
            gptid/a0be8957-4da6-11e4-8598-d050991b63e9.eli  ONLINE       0     0     0
            gptid/88d0082b-63a9-11e4-b415-d050991b63e9.eli  ONLINE       0     0     0
            gptid/34b1a1c1-9618-11e5-af98-d050991b63e9.eli  ONLINE       0     0     0
            gptid/a625d53a-31a6-11e4-b5b6-d050991b63e9.eli  ONLINE       0     0     0
            gptid/3e6ab1ea-31a6-11e4-b5b6-d050991b63e9.eli  ONLINE       0     0     0

errors: No known data errors


Since you have some modern experience with this process now (including what goes wrong), what is the appropriate process from step #8 forward to keep me from losing my jails, shares, etc., and getting everything to play nice on boot? My box is FreeNAS-9.3-STABLE-201601181840.

Thanks!
 

Myron

Cadet
Joined
Jan 23, 2016
Messages
5
I have a little question to this topic:

Successfully transferred my entire pool over to another set of HDDs in a mirror setup. I made a backup of the config beforehand. If I encrypt my main pool now via the GUI and then reload the config, will all of my settings, shares, and plugins survive that?

Greetings

Not sure what exactly you are trying to do. Assumption: you DETACH the existing pool (the main pool which has been transferred over to another set of mirrored HDDs), mark the disks as new (which destroys the data), and create a new encrypted pool on the disks just released via detach. You will then lose the DATA and the snapshot configuration. Everything else will stay intact - at least the "obvious" stuff that I have observed while conducting the encryption as described above.
 

Myron

Cadet
Joined
Jan 23, 2016
Messages
5
Since you have some modern experience with this process now (including what goes wrong), what is the appropriate process from step #8 forward to keep me from losing my jails, shares, etc., and getting everything to play nice on boot? My box is FreeNAS-9.3-STABLE-201601181840.

You would then continue with steps 9-13 AND, as step 14, immediately re-key, re-passphrase, and add a recovery key (save all of them in a safe place), then reboot. Afterwards your pool should be shown as locked in the GUI. Unlock it with the passphrase and you're done. If you don't re-key the pool, you will not be able to "simply unlock" the pool from the GUI and would have to detach and re-import again (which will cause your snapshot config to be gone).

Regards,
Andreas
 

Revolution

Dabbler
Joined
Sep 8, 2015
Messages
39
Not sure what exactly you are trying to do. Assumption: you DETACH the existing pool (the main pool which has been transferred over to another set of mirrored HDDs), mark the disks as new (which destroys the data), and create a new encrypted pool on the disks just released via detach. You will then lose the DATA and the snapshot configuration. Everything else will stay intact - at least the "obvious" stuff that I have observed while conducting the encryption as described above.

I thought you would lose the CIFS mounting points, jail positions, and stuff...
 

Tharp94895

Cadet
Joined
Jan 5, 2015
Messages
9
Thanks for the post. I was able to encrypt my 12TB pool with no issues. All jails and Samba shares are intact and functioning properly. For me it took just over 5 days to fully encrypt all 8 HDDs. I was able to maximize resilvering performance using this post I found via Google. Again, thanks for the post!

EDIT
Running 9.10 Stable
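
The linked post isn't reproduced here, but for what it's worth, the tunables usually pointed at for resilver speed on FreeNAS 9.x/9.10 are the ZFS scrub/resilver delay sysctls; roughly along these lines (example values, my assumption rather than the contents of that post, and these knobs changed in later OpenZFS releases):
Code:
# make the resilver more aggressive at the expense of foreground I/O
sysctl vfs.zfs.resilver_delay=0
sysctl vfs.zfs.scrub_delay=0
sysctl vfs.zfs.resilver_min_time_ms=5000
sysctl vfs.zfs.top_maxinflight=128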
 

Zofoor

Patron
Joined
Aug 16, 2016
Messages
219
I have followed this how-to, and it worked great! Thanks a lot for your post!

Only one note: I had the main system dataset on the same volume as the one that I encrypted. When I clicked the button to detach the pool, the system crashed because the pool held the system dataset. After a few reboots it fixed itself, and I then successfully detached the volume and finished the process without trouble.

I think that if I had moved the system dataset before detaching the pool, I would not have encountered this little problem.

PS: and of course, once the pool is up, remember to move the System Dataset there!!!
 