How to remove encryption from a ZFS volume (while keeping the data)

ovizii

Patron
Joined
Jun 30, 2014
Messages
435
I'm on 11.1 and need to decrypt an entire pool. Can anyone confirm that this procedure is still the most up to date?
 

sotiris.bos

Explorer
Joined
Jun 12, 2018
Messages
56
I'm on 11.1 and need to decrypt an entire pool. Can anyone confirm that this procedure is still the most up to date?

Did you proceed with removing the encryption on your drives?

I am on 11.2-U6 and want to do the same.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
Looks like the structure in the sqlite database is still the same:
Code:
root@freenas-pmh[~]# /usr/local/bin/sqlite3 /data/freenas-v1.db ".schema storage_encrypteddisk"
CREATE TABLE IF NOT EXISTS "storage_encrypteddisk" ("id" integer PRIMARY KEY, "encrypted_volume_id" integer NOT NULL, "encrypted_disk_id" varchar(100) NULL, "encrypted_provider" varchar(120) NOT NULL UNIQUE);

So yes, it should work.
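If you want to see which rows belong to your pool before deleting anything, something along these lines should do it. A minimal sketch using the columns from the schema above; the volume id 1 is a placeholder, substitute your own:
Code:
# List the encrypted-disk rows and note which belong to your pool
/usr/local/bin/sqlite3 /data/freenas-v1.db "SELECT id, encrypted_volume_id, encrypted_provider FROM storage_encrypteddisk;"

# Once every disk has been replaced with its unencrypted provider,
# remove the rows for that pool (1 is a placeholder volume id)
/usr/local/bin/sqlite3 /data/freenas-v1.db "DELETE FROM storage_encrypteddisk WHERE encrypted_volume_id = 1;"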

Proceed with caution ;)
Patrick
 

sotiris.bos

Explorer
Joined
Jun 12, 2018
Messages
56
Looks like the structure in the sqlite database is still the same:
Code:
root@freenas-pmh[~]# /usr/local/bin/sqlite3 /data/freenas-v1.db ".schema storage_encrypteddisk"
CREATE TABLE IF NOT EXISTS "storage_encrypteddisk" ("id" integer PRIMARY KEY, "encrypted_volume_id" integer NOT NULL, "encrypted_disk_id" varchar(100) NULL, "encrypted_provider" varchar(120) NOT NULL UNIQUE);

So yes, it should work.

Proceed with caution ;)
Patrick

Thank you so much for your contribution, Patrick!

No, I did not. I postponed until Christmas, as I'll have plenty of time then to do this properly without any time pressure.

Thank you for your reply! I am in the process of backing up critical datasets on my encrypted pool to a second one and once that is done, I will start the procedure of removing the encryption, probably some time today. I will report back with my results!




Edit: Everything went perfectly! (11.2-U6) The only thing I did differently from the guide was to remove the sqlite database entries at the end, rather than after replacing each disk. It took me two days to resilver all four 10TB drives. My setup is a stripe of two mirrors, each mirror consisting of two 10TB drives, at 62% capacity. I started with disk1 on mirror1; after that completed I started disk1 on mirror2, and after that completed I did disk2 on mirror1 and disk2 on mirror2 almost at the same time to save some time. I could have gotten it done in around 30 hours instead, but decided not to stress the drives that much.
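For anyone repeating this, the per-disk sequence I ran was essentially the one from the guide. A rough sketch only; the pool name "tank" and the gptid are placeholders to be taken from your own zpool status and gpart list output:
Code:
# 1. Take the encrypted provider offline
zpool offline tank gptid/XXXX.eli

# 2. Detach the GELI layer from the partition
geli detach gptid/XXXX.eli

# 3. Replace the encrypted provider with the bare partition
zpool replace tank gptid/XXXX.eli gptid/XXXX

# 4. Wait for the resilver to finish before starting the next disk
zpool status tank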

Again, thank you @ovizii !
 
Last edited:

ovizii

Patron
Joined
Jun 30, 2014
Messages
435
Just followed the same procedure very successfully on 11.1-U7, if I'm not mistaken.
 

asaayo

Dabbler
Joined
Jan 7, 2021
Messages
13
I don't know if it's just me, but this didn't work for me on TrueNAS 12.0.

I made it to the point where I removed encryption from the first disk (geli detach gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9.eli), but it never gave me a new ID for the disk. Here's the zpool status for that pool:


Code:
  pool: Storage2
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0B in 04:22:56 with 0 errors on Sun Dec 20 04:22:56 2020
config:

        NAME                                                STATE     READ WRITE CKSUM
        Storage2                                            DEGRADED     0     0     0
          raidz1-0                                          DEGRADED     0     0     0
            gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9.eli  OFFLINE      0     0     0
            gptid/544b72c5-5f13-11ea-beeb-d43d7e3724e9.eli  ONLINE       0     0     0
            gptid/551dead0-5f13-11ea-beeb-d43d7e3724e9.eli  ONLINE       0     0     0

errors: No known data errors
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
Output of gpart list, please.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
I made it to the point where I removed encryption from the first disk (geli detach gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9.eli), but it never gave me a new ID for the disk (here's the zpool status for that pool).
I see the difference you mean, but you should just be able to use the gptid of the partition instead of the numeric ID shown in the original process (not specifically tested by me, so check that first before using it on a pool with real data you care about).

so like:
Code:
zpool replace Storage2 gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9.eli gptid/some-other-gptid-from-your-partition

I note that preparation of the partition scheme isn't in the instructions, but I assume if you're tinkering with this stuff you're OK to manage that.
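For completeness, recreating the stock layout on a blank replacement disk would look roughly like this. An untested sketch, not part of the original guide; da4 and the 2 GB swap size are taken from the gpart output later in the thread, so adjust for your own disk:
Code:
# Fresh GPT scheme with the usual FreeNAS layout: 2 GB swap + rest for ZFS
gpart create -s gpt da4
gpart add -t freebsd-swap -a 4k -s 2g da4
gpart add -t freebsd-zfs -a 4k da4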

EDIT: I see that Patrick has started to guide you on that.
 

asaayo

Dabbler
Joined
Jan 7, 2021
Messages
13
To save space, I ran it on the three disks that are part of the pool instead of on all disks. I'll include the entire gpart output in a hidden section just in case. I believe the disk that's currently down is da4.

Code:
root@freenas[~]# gpart list da3
Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,543a65ff-5f13-11ea-beeb-d43d7e3724e9,0x80,0x400000)
   rawuuid: 543a65ff-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da3p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,544b72c5-5f13-11ea-beeb-d43d7e3724e9,0x400080,0x1d180be08)
   rawuuid: 544b72c5-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da3
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

root@freenas[~]# gpart list da4
Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da4p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,536c3218-5f13-11ea-beeb-d43d7e3724e9,0x80,0x400000)
   rawuuid: 536c3218-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da4p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,537d9fec-5f13-11ea-beeb-d43d7e3724e9,0x400080,0x1d180be08)
   rawuuid: 537d9fec-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da4
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

root@freenas[~]# gpart list da5
Geom name: da5
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da5p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,550c025b-5f13-11ea-beeb-d43d7e3724e9,0x80,0x400000)
   rawuuid: 550c025b-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da5p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,551dead0-5f13-11ea-beeb-d43d7e3724e9,0x400080,0x1d180be08)
   rawuuid: 551dead0-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da5
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3


Code:
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 234441607
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   efimedia: HD(1,GPT,6e0865cd-87be-11e8-b421-d43d7e3724e9,0x28,0x400)
   rawuuid: 6e0865cd-87be-11e8-b421-d43d7e3724e9
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: ada0p2
   Mediasize: 120033558528 (112G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 544768
   Mode: r1w1e1
   efimedia: HD(2,GPT,6e09a49e-87be-11e8-b421-d43d7e3724e9,0x428,0xdf94760)
   rawuuid: 6e09a49e-87be-11e8-b421-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 120033558528
   offset: 544768
   type: freebsd-zfs
   index: 2
   end: 234441607
   start: 1064
Consumers:
1. Name: ada0
   Mediasize: 120034123776 (112G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,dba85989-7fa7-11e2-ba31-d43d7e3724e9,0x80,0x400000)
   rawuuid: dba85989-7fa7-11e2-ba31-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da0p2
   Mediasize: 2998445412352 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,dbb4efdb-7fa7-11e2-ba31-d43d7e3724e9,0x400080,0x15d10a308)
   rawuuid: dbb4efdb-7fa7-11e2-ba31-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2998445412352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 4194432
Consumers:
1. Name: da0
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,dbfdcd6a-7fa7-11e2-ba31-d43d7e3724e9,0x80,0x400000)
   rawuuid: dbfdcd6a-7fa7-11e2-ba31-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da1p2
   Mediasize: 2998445412352 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,dc0a50bb-7fa7-11e2-ba31-d43d7e3724e9,0x400080,0x15d10a308)
   rawuuid: dc0a50bb-7fa7-11e2-ba31-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2998445412352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 4194432
Consumers:
1. Name: da1
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

Geom name: da2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 5860533134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,db534bed-7fa7-11e2-ba31-d43d7e3724e9,0x80,0x400000)
   rawuuid: db534bed-7fa7-11e2-ba31-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da2p2
   Mediasize: 2998445412352 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,db5fbd23-7fa7-11e2-ba31-d43d7e3724e9,0x400080,0x15d10a308)
   rawuuid: db5fbd23-7fa7-11e2-ba31-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2998445412352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 5860533127
   start: 4194432
Consumers:
1. Name: da2
   Mediasize: 3000592982016 (2.7T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,543a65ff-5f13-11ea-beeb-d43d7e3724e9,0x80,0x400000)
   rawuuid: 543a65ff-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da3p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,544b72c5-5f13-11ea-beeb-d43d7e3724e9,0x400080,0x1d180be08)
   rawuuid: 544b72c5-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da3
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da4p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,536c3218-5f13-11ea-beeb-d43d7e3724e9,0x80,0x400000)
   rawuuid: 536c3218-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da4p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(2,GPT,537d9fec-5f13-11ea-beeb-d43d7e3724e9,0x400080,0x1d180be08)
   rawuuid: 537d9fec-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da4
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0

Geom name: da5
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da5p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,550c025b-5f13-11ea-beeb-d43d7e3724e9,0x80,0x400000)
   rawuuid: 550c025b-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da5p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,551dead0-5f13-11ea-beeb-d43d7e3724e9,0x400080,0x1d180be08)
   rawuuid: 551dead0-5f13-11ea-beeb-d43d7e3724e9
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da5
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
So your current pool looks like this:
Code:
gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9.eli  OFFLINE      0     0     0
gptid/544b72c5-5f13-11ea-beeb-d43d7e3724e9.eli  ONLINE       0     0     0
gptid/551dead0-5f13-11ea-beeb-d43d7e3724e9.eli  ONLINE       0     0     0


The two disks that are still online are da3 and da5 - you can find their rawuuid values in your output. The disk that is offline is da4 - as easily found.
So why don't you just do what my procedure says?

Code:
zpool replace Storage2 gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9.eli gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9

First ID with .eli attached, second without ...

Did you try that and get an error?
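If you want to double-check first that the bare gptid is visible as a label, something like this should confirm it (glabel lists the gptid providers; the grep pattern is just the first chunk of the uuid):
Code:
# The plain gptid label should point at da4p2
glabel status | grep 537d9fec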
 

asaayo

Dabbler
Joined
Jan 7, 2021
Messages
13
Alright, this was totally my fault. I misunderstood that step and thought you needed that numeric ID once you do the geli detach in order to reattach. The replace worked perfectly and the resilver has started. Thanks very much, Patrick; I owe you a beer/beverage of your choice.

Code:
root@freenas[/mnt/Storage2/iocage/jails/plex/root]# zpool status Storage2
  pool: Storage2
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jan  7 13:04:34 2021
        2.52T scanned at 7.76G/s, 69.2G issued at 213M/s, 4.80T total
        23.1G resilvered, 1.41% done, 06:27:17 to go
config:

        NAME                                                  STATE     READ WRITE CKSUM
        Storage2                                              DEGRADED     0     0     0
          raidz1-0                                            DEGRADED     0     0     0
            replacing-0                                       DEGRADED     0     0     0
              gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9.eli  OFFLINE      0     0     0
              gptid/537d9fec-5f13-11ea-beeb-d43d7e3724e9      ONLINE       0     0     0  (resilvering)
            gptid/544b72c5-5f13-11ea-beeb-d43d7e3724e9.eli    ONLINE       0     0     0
            gptid/551dead0-5f13-11ea-beeb-d43d7e3724e9.eli    ONLINE       0     0     0

errors: No known data errors
 

yonkoc

Explorer
Joined
Oct 26, 2011
Messages
52
Just wanted to thank Patrick for the amazing write-up. Confirmed still working on FreeNAS-11.3-U5. One addition: during step 9 I chose NOT to delete the drives and NOT to delete the config for the shares, etc. I unchecked both options and just checked Confirm Export/disconnect.

My Z3 is made up of 11 2TB drives, so... pretty nerve-wracking: 11 resilvers over the last 7 days. But it all went smooth as butter. What helped was some work in advance, as I knew I'd be going through those steps at odd hours and might be tired. I prepped 11 sets of four commands in a txt file, labeled all of them, and triple-checked that I had copied each of the .eli and non-.eli gptids correctly. All that was left, once I offlined a drive, was to see what its new numeric ID was and paste it into the respective command. From that point on it was just copy/paste. Ready to move to TrueNAS now.

So, once again, thank you, thank you, thank you, Patrick!!!
 

Muddro

Explorer
Joined
Oct 6, 2014
Messages
59
Doing this now and seems to be working great, just waiting for resilvering.

Have a quick question on perhaps making the resilver faster. I have an encrypted pool with 2 raidz2 vdevs of 4 drives each. They are not mirrored vdevs. Is it possible to run these commands on a drive from each vdev simultaneously, so it's resilvering one drive from each vdev at the same time?
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
Yes, of course. One drive from each RAIDZ2 is still reasonably safe, and given there are no other bottlenecks, you can speed up the process that way. Don't do two disks from one vdev :wink:
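In practice that means issuing one replace per vdev and letting them resilver together. A rough sketch with a placeholder pool name and gptids (take the real ones from your zpool status):
Code:
# One disk out of each RAIDZ2 vdev at the same time
zpool replace tank gptid/AAAA.eli gptid/AAAA   # disk from raidz2-0
zpool replace tank gptid/BBBB.eli gptid/BBBB   # disk from raidz2-1

# zpool status should show both "replacing" vdevs resilvering in one pass
zpool status tank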
 

Muddro

Explorer
Joined
Oct 6, 2014
Messages
59
Yes, of course. One drive from each RAIDZ2 is still reasonably safe and given there are no other bottlenecks you can speed up the process that way. Don't do two disks from one vdev :wink:
Awesome, thanks. To be sure, this is the case even if they aren't mirrored vdevs, right? These expanded the total space in the pool. Sorry, just want to be extra sure!

Edit: went for it and it seems to be working. Had a slight heart attack as the speed was a fraction of what it was when doing one disk, but it has picked up since then.
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,740
To be sure, this is the case even if they aren't mirrored vdevs right?
Sorry, I don't understand. Redundancy (mirror, RAIDZn) is per vdev. You cannot mirror 2 or more vdevs, but a vdev can be a mirror of 2 or more disks. So if (I hope so) you have two vdevs, each a RAIDZ2, then of course you can replace a disk from each vdev and still sleep well, because each vdev keeps one disk of redundancy. Lose one vdev, lose the entire pool ...

HTH,
Patrick
 