Move from Legacy Encryption?

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
How do I safely move from legacy encryption?

Edit: see post #9 for an automated script to remove the encryption on a RAIDZ2 setup; credit goes to @Patrick M. Hausen


Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
Create encrypted dataset, copy data, destroy old dataset. Repeat as necessary.

Native encryption is per dataset, not per disk drive.
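As a sketch of that sequence - the pool and dataset names here ("tank", "media") are placeholders, and the encryption options are common choices rather than necessarily what the TrueNAS UI sets:

```shell
# Hypothetical names throughout; adapt pool/dataset paths to your system.
# 1. Create the new natively encrypted dataset (the UI can do this, too).
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/media-enc

# 2. Copy the data across; rsync -a preserves permissions and timestamps.
rsync -a /mnt/tank/media/ /mnt/tank/media-enc/

# 3. After verifying the copy, destroy the old unencrypted dataset.
zfs destroy -r tank/media
```

Then repeat for each remaining unencrypted dataset.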
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
@Patrick M. Hausen I started following the tutorial, but I do not see the disk under its new name?
Code:
root@nas[~]# zpool offline default gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli
root@nas[~]# geli detach gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli
root@nas[~]# zpool status default
  pool: default
state: DEGRADED
status: One or more devices has been taken offline by the administrator.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Online the device using 'zpool online' or replace the device with
    'zpool replace'.
  scan: scrub repaired 0B in 01:18:59 with 0 errors on Sun Oct  4 01:19:01 2020
config:

    NAME                                                STATE     READ WRITE CKSUM
    default                                             DEGRADED     0     0     0
      raidz2-0                                          DEGRADED     0     0     0
        gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli  OFFLINE      0     0     0
        gptid/496e1d69-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/4f684965-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/526bcc91-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/5380dfbd-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/4f7e5bea-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/435c778b-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/5b2e7414-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/592e5f92-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/5a2e8157-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0
        gptid/56c5b521-68bf-11ea-8287-047d7bd5d6a2.eli  ONLINE       0     0     0

errors: No known data errors

If I then force a detach, the device is no longer there:
Code:
root@nas[~]# geli detach -f gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli
geli: No such device: gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli.

So I presume the naming convention changed? I was expecting something like:
Code:
5939868321408276145 OFFLINE 0 0 0 was /dev/gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli

In other words, I run:
Code:
# zpool replace default gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli \
    gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
Your first three commands went exactly as documented. Then you should have run:
zpool replace default gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2

Note: the first device is the encrypted one, which you successfully offlined; the second is the new device without encryption - same gptid, but without the ".eli".

I would have expected a "5939868321408276145 OFFLINE ... was ..." line just as you did, but possibly that's a change in the new OpenZFS.
Anyway, we replace the device IN the pool, on the left of the zpool status output, with the NEW device without encryption. Names don't mean a thing ;)
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
@Patrick M. Hausen , yes, as you mentioned. Look at the resilvering time - that's what I meant by a long process. :)
Code:
# zpool status default
  pool: default
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Oct 24 13:28:51 2020
    7.81T scanned at 25.1G/s, 237G issued at 762M/s, 7.81T total
    18.9G resilvered, 2.97% done, 02:53:50 to go
config:

    NAME                                                  STATE     READ WRITE CKSUM
    default                                               DEGRADED     0     0     0
      raidz2-0                                            DEGRADED     0     0     0
        replacing-0                                       DEGRADED     0     0     0
          gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2.eli  OFFLINE      0     0     0
          gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2      ONLINE       0     0     0
        gptid/496e1d69-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/4f684965-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/526bcc91-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/5380dfbd-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/4f7e5bea-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/435c778b-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/5b2e7414-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/592e5f92-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/5a2e8157-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/56c5b521-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0

errors: No known data errors
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
3 hours per disk? Come on ... :wink:

Don't forget to wipe the encryption entries from your config database, then export and re-import the pool, just to be on the safe side. Most ZFS manipulations can be done on the command line, because TrueNAS does not do extra bookkeeping but just uses the data in the pool/filesystem - with the exception of e.g. "hey, this drive is encrypted" ...
I don't know yet whether native encryption differs between CLI and UI. I would create the new encrypted datasets in the UI, copy the data, then after double-checking do a quick zfs destroy in the CLI. I prefer the CLI for doing things whenever possible, yet like a UI for looking at the state of things.
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
@Patrick M. Hausen I'm working on my 5th disk; I have 12 total.

I made a quick script to avoid human error; the pool name and device UUIDs are set at the top:
Code:
# cat geli.sh
#!/bin/bash

# GPTIDs of the encrypted providers to migrate, one entry per disk
args[10]='47702b65-68bf-11ea-8287-047d7bd5d6a2'
args[11]='496e1d69-68bf-11ea-8287-047d7bd5d6a2'
args[12]='4f684965-68bf-11ea-8287-047d7bd5d6a2'
args[13]='526bcc91-68bf-11ea-8287-047d7bd5d6a2'
args[14]='5380dfbd-68bf-11ea-8287-047d7bd5d6a2'
args[15]='4f7e5bea-68bf-11ea-8287-047d7bd5d6a2'
args[16]='435c778b-68bf-11ea-8287-047d7bd5d6a2'
args[17]='5b43ebbf-68bf-11ea-8287-047d7bd5d6a2'
args[18]='5b2e7414-68bf-11ea-8287-047d7bd5d6a2'
args[19]='592e5f92-68bf-11ea-8287-047d7bd5d6a2'
args[20]='5a2e8157-68bf-11ea-8287-047d7bd5d6a2'
args[21]='56c5b521-68bf-11ea-8287-047d7bd5d6a2'
pool='default'

for uuid in "${args[@]}"; do
    # Stop immediately if any device in the pool has faulted
    if zpool status "$pool" | grep -q FAULTED; then
        echo 'Faulty device found.'
        break
    fi

    echo -n 'Taking disk offline... '
    zpool offline "$pool" "gptid/$uuid.eli" && echo 'OK' || break

    echo -n 'Removing geli encryption... '
    geli detach "gptid/$uuid.eli" && echo 'OK' || break

    echo -n 'Replacing disk... '
    zpool replace "$pool" "gptid/$uuid.eli" "gptid/$uuid" && echo 'OK' || break

    echo -n 'Deleting disk from database... '
    sqlite3 /data/freenas-v1.db "delete from storage_encrypteddisk where encrypted_provider = 'gptid/$uuid';" && echo 'OK' || break

    # Print a dot per minute until the resilver completes
    echo -n "Resilvering $uuid "
    while [ "$(zpool list -Ho health "$pool")" = "DEGRADED" ]; do
        echo -n '.'
        sleep 60
    done
    echo ' OK'
done


I will run it overnight in screen:
Code:
root@nas[~]# screen
root@nas[~]# bash geli.sh
Taking disk offline... OK
Removing geli encryption... OK
Replacing disk... OK
Deleting disk from database... OK
Resilvering 435c778b-68bf-11ea-8287-047d7bd5d6a2 .................................................................................................................. OK
Taking disk offline... OK
Removing geli encryption... OK
Replacing disk... OK
Deleting disk from database... OK
Resilvering 5b43ebbf-68bf-11ea-8287-047d7bd5d6a2 ..................................................

The dots help you tell how long it took to resilver a disk - in the example above, about 115 minutes for 435c778b-68bf-11ea-8287-047d7bd5d6a2. Once all disks are resilvered, I'll disable all related services and disconnect the pool through the GUI, unless there is a CLI command for this?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
zpool export default

The important point - e.g. after this procedure, or after exporting and re-importing to rename a pool - is to do the final import from the UI, because FreeNAS does some magic so the pools end up under /mnt, which importing from the CLI doesn't do. I think it involves the altroot option, but I have come to just use the UI and not care how precisely it does that.
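For the record, a CLI import along the lines of that altroot guess might look like the following - purely a sketch of what the UI is assumed to do, and the middleware certainly does extra bookkeeping on top:

```shell
# Speculative equivalent of the UI import: mount datasets under /mnt
# via altroot. The cachefile path is a FreeNAS convention and may differ.
zpool import -o altroot=/mnt -o cachefile=/data/zfs/zpool.cache default
```

Even so, doing the final import from the UI remains the safer route.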
 

orjan-

Dabbler
Joined
Apr 17, 2018
Messages
20
If the pool is raidz2 or raidz3, is it possible to replace and resilver 2 or 3 disks at the same time?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
I would not dare do that with a RAIDZ2, and I am not sure it would speed things up at all. The point of RAIDZ(n>1) is to keep redundancy even while replacing a single disk.

Technically, if you want to play fast and loose - sure, why not? :wink:
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,906
Just out of curiosity: what is the rationale for moving away from legacy encryption right now? In other words: I would have waited at least a few weeks given the "freshness" of the 12.0 release.

Thanks!
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
@Patrick M. Hausen I just had a huge power spike, the APC protected the server and shut it down. After reboot, I get this:
Code:
# zpool status default
  pool: default
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Oct 25 12:05:28 2020
    5.27T scanned at 44.6G/s, 1.26G issued at 10.7M/s, 7.81T total
    0B resilvered, 0.02% done, 8 days 20:49:20 to go
config:

    NAME                                                  STATE     READ WRITE CKSUM
    default                                               DEGRADED     0     0     0
      raidz2-0                                            DEGRADED     0     0     0
        gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/496e1d69-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/4f684965-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/526bcc91-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/5380dfbd-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/4f7e5bea-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/435c778b-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        replacing-7                                       UNAVAIL      0     0     0  insufficient replicas
          gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2.eli  OFFLINE      0     0     0
          805706285987834672                              UNAVAIL      0     0     0  was /dev/gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
        gptid/5b2e7414-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/592e5f92-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/5a2e8157-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/56c5b521-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0

errors: No known data errors

How do I fix the issue?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
So while in the process of replacing gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2.eli with gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2 you had that power surge and shutdown, obviously.

I would

1. check if the disk is still there and the partition gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2 still available
2. zpool online default gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
3. zpool replace default 805706285987834672 gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
4. zpool replace -f default 805706285987834672 gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
5. delete partition from disk, create new one (will get new UUID)
6. zpool replace default 805706285987834672 gptid/<new UUID>

in order of increasingly severe measures.

If it complains that the disk is already a member of pool "default" and stubbornly refuses to online or replace it, use zpool labelclear to wipe any ZFS information from that partition, delete the partitions, destroy the partition table, create a new table and partition, and go to 6.
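Sketched out, that stubborn-disk path might look like the following; the device name daX and partition index 2 are placeholders you would confirm with gpart before running anything destructive:

```shell
# Placeholders: daX is the physical disk, daXp2 its freebsd-zfs partition.
# Confirm both with `gpart show daX` first - these commands destroy data.
zpool labelclear -f /dev/daXp2     # wipe stale ZFS labels from the partition
gpart delete -i 2 daX              # delete the data partition
gpart add -t freebsd-zfs -i 2 daX  # recreate it; a new rawuuid is generated
gpart list daX | grep rawuuid      # read off the new UUID
zpool replace default 805706285987834672 gptid/<new UUID>
```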
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
@Patrick M. Hausen thank you for the reply. I'm not really familiar with ZFS, so here is what I have done so far:
Code:
root@nas[~]# zpool online default gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
warning: device 'gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
root@nas[~]# zpool replace default 805706285987834672 gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
invalid vdev specification
use '-f' to override the following errors:
/dev/gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2 is part of active pool 'default'
root@nas[~]# zpool replace -f default 805706285987834672 gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
invalid vdev specification
the following errors must be manually repaired:
/dev/gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2 is part of active pool 'default'
root@nas[~]# zpool status default
  pool: default
state: DEGRADED
  scan: resilvered 0B in 02:38:52 with 0 errors on Sun Oct 25 14:44:20 2020
config:

    NAME                                                  STATE     READ WRITE CKSUM
    default                                               DEGRADED     0     0     0
      raidz2-0                                            DEGRADED     0     0     0
        gptid/47702b65-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/496e1d69-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/4f684965-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/526bcc91-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/5380dfbd-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/4f7e5bea-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        gptid/435c778b-68bf-11ea-8287-047d7bd5d6a2        ONLINE       0     0     0
        replacing-7                                       UNAVAIL      0     0     0  insufficient replicas
          gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2.eli  OFFLINE      0     0     0
          805706285987834672                              UNAVAIL      0     0     0  was /dev/gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
        gptid/5b2e7414-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/592e5f92-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/5a2e8157-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0
        gptid/56c5b521-68bf-11ea-8287-047d7bd5d6a2.eli    ONLINE       0     0     0

Can you please let me know what I need to do next? I don't know how to delete the partition from the disk and create a new one, or which partition to target with zpool labelclear.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
Please run gpart list and search in your terminal window to find which disk the partition gptid/5b43ebbf-68bf-11ea-8287-047d7bd5d6a2 is on. You still have one redundant drive, so we should be good to go. I am looking for something like da6p2 or ada6p2 ...
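To save scrolling, that rawuuid-to-device lookup can be scripted; here it runs against a canned two-entry sample of gpart list output (the pairing of da9p2 with the other UUID is made up for illustration - on the NAS you would pipe the real `gpart list` into the same awk one-liner):

```shell
# Canned sample of the Name:/rawuuid: pairs that `gpart list` prints.
sample='1. Name: da9p2
   rawuuid: 5b2e7414-68bf-11ea-8287-047d7bd5d6a2
2. Name: da10p2
   rawuuid: 5b43ebbf-68bf-11ea-8287-047d7bd5d6a2'

uuid='5b43ebbf-68bf-11ea-8287-047d7bd5d6a2'

# Remember the partition name from each "Name:" line; print it when the
# following rawuuid matches the one we are looking for.
part=$(printf '%s\n' "$sample" \
  | awk -v uuid="$uuid" '/Name:/ {name=$NF} $1=="rawuuid:" && $2==uuid {print name}')
echo "$part"
```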
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
@Patrick M. Hausen I found it:
Code:
2. Name: da10p2
   Mediasize: 1998251364352 (1.8T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,5b43ebbf-68bf-11ea-8287-047d7bd5d6a2,0x400080,0xe8a08808)
   rawuuid: 5b43ebbf-68bf-11ea-8287-047d7bd5d6a2
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 1998251364352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 3907029127
   start: 4194432


So I presume I will run zpool labelclear -f /dev/da10p2 ?
 