Cannot (re-)attach encrypted zfs volume

Status
Not open for further replies.

Goldman2k

Cadet
Joined
Oct 2, 2015
Messages
4
Dear Community,
After reading many posts I could not find a solution to my current problem.
I am running FreeNAS-9.3-STABLE-201509220011 on an Intel(R) Core(TM) i3-4160 CPU @ 3.60GHz, and had the "stupid" idea to add 2x2.0 TB disks to my existing encrypted zfs volume "Volume_A" (2x4.0 TB). The Volume Manager reported success, but the pool size remained at 8.0 TB instead of increasing to 12.0 TB.
I had not formatted the disks beforehand, so I tried to, but it was not possible because they were linked to "Volume_A". I followed the manual and detached "Volume_A" (no deletion, keeping the structure). After successfully formatting the 2x2 TB disks I started to import the encrypted "Volume_A" using the GUI (see below), but was not successful.

What did I miss out here?

P.S. I believe I did not put a passphrase on the geli.key, but I had set one to unlock the disks after a reboot. When I entered this passphrase in step 2, I got a decryption error.
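(For anyone debugging a similar unlock failure: FreeNAS 9.x keeps the geli key files under /data/geli/, and `geli attach` can be used from the shell to test key-only vs. key-plus-passphrase. A rough sketch; the key filename and gptid below are placeholders, not values from this system:)

```shell
glabel status     # list the gptid labels of the disk partitions

# Key only (-p = provider has no passphrase), using the key file saved by FreeNAS:
geli attach -p -k /data/geli/<pool-key>.key /dev/gptid/<member-gptid>

# Key plus passphrase (geli prompts for the passphrase when -p is omitted):
geli attach -k /data/geli/<pool-key>.key /dev/gptid/<member-gptid>
```

Whichever variant attaches cleanly tells you whether a passphrase was actually set on the key.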

Thanks in advance for your feedback!
 

Attachments

  • S_01_Disks.JPG (114.6 KB)
  • S_02_Volumes_empty.JPG (103.1 KB)
  • S_03_Import_Volumes_1_3.JPG (88.3 KB)
  • S_03_Import_Volumes_2_3.JPG (118.6 KB)
  • S_03_Import_Volumes_3_3.JPG (86.4 KB)
  • S_03_Import_Volumes_3_3b.JPG (113.4 KB)
  • S_03_Import_Volumes_4.JPG (67.5 KB)

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I followed the manual and detached "Volume_A" (no deletion, keeping the structure). After successfully formatting the 2x2 TB disks I started to import the encrypted "Volume_A" using the GUI (see below), but was not successful.
It appears that you added a vdev to an existing pool, then removed that vdev by physically removing and wiping its component drives. If so, you destroyed your pool.
 

Goldman2k

Cadet
Joined
Oct 2, 2015
Messages
4
You are absolutely right. Detaching the whole pool was my second mistake that night.
The pool "Volume_A" is still there, but two devices are "missing". I am preparing to fake the two devices in order to bring the pool back as "degraded" and read-only.
I found a high-level description from 2012. Is there a more recent and detailed description anywhere?
Isn't there a zpool option ("-m") to import the pool even with devices missing?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Isn't there a zpool option ("-m") to import the pool even with devices missing?
  • pools are made from one or more vdevs
  • vdevs are made from one or more drives
  • a vdev can survive the loss of a drive if it has sufficient redundancy
  • a pool cannot survive the loss of any vdev
Like I said, it appears that you removed a vdev from the pool and then wiped its constituent drives, in which case you destroyed the pool.
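For context on the "-m" question: `zpool import -m` only tolerates a missing *log* (ZIL) device, not a missing data vdev, and `-F` merely rewinds the last few transactions; neither can recover a pool whose top-level data vdev is gone. A sketch of what those attempts look like:

```shell
# -m: import despite a missing log device (all data vdevs must be present);
# fails here because the missing vdev held data:
zpool import -m Volume_A

# -F: recovery mode, discards the most recent transactions if that helps;
# still refused when a whole data vdev is missing:
zpool import -F Volume_A
```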
had the "stupid" idea to add 2x2.0 TB disks to my existing encrypted zfs volume "Volume_A" (2x4.0 TB)
Exactly what steps did you go through in the volume manager when you did this?
 

Goldman2k

Cadet
Joined
Oct 2, 2015
Messages
4
Volume Manager, Add storage to an existing pool
- Volume to extend : “Volume_A”
- Available disks: Selected the 2 available disks (2.0 TB 2 drives)
- Selected “Stripe”
- Add Volume
Operation was shown as completed successfully.
Observations:
1. The password protection for "Volume_A" went away. I did not create a new password, encryption key, or recovery key.
2. The total "Volume_A" pool size remained at 8.0 TB; it did not increase to 12.0 TB.
Then I decided to detach the whole pool (instead of just the 2 new disks, as I now know I should have done).
Selected "Volume_A",
- Selected Detach "Volume_A" pool, got a warning to save the key. Saved it.
- Unselected "wipe data"
- Unselected "wipe structure"
- Confirmed
Operation was shown as completed successfully and the pool was gone.
See the zpool import output attached.

----------------------------------------------------------------------------------------------
Last login: Tue Oct 6 22:12:16 2015 from 192.168.178.23
FreeBSD 9.3-RELEASE-p25 (FREENAS.amd64) #0 r281084+d3a5bf7: Tue Sep 15 17:52:0
PDT 2015

FreeNAS (c) 2009-2015, The FreeNAS Development Team
All rights reserved.
FreeNAS is released under the modified BSD license.

For more information, documentation, help or support, go here:
http://freenas.org
Welcome to FreeNAS
[root@server] ~# cd /
[root@server] /# zpool import
pool: Volume_A
id: 1898007221997160708
state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
see: http://illumos.org/msg/ZFS-8000-6X
config:

Volume_A                                          UNAVAIL  missing device
  gptid/d0ecbbb2-5d5f-11e5-bfaa-d050996cab76.eli  ONLINE
  gptid/d174f7ed-5d5f-11e5-bfaa-d050996cab76.eli  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.
[root@server] /#
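(Had only a redundant member been missing, a forced read-only import could have been attempted. A minimal sketch, using the pool name from the output above; with a whole top-level vdev reported as `missing device`, this will still be refused:)

```shell
# Attempt a forced, read-only import of the pool:
zpool import -f -o readonly=on Volume_A
```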
 

Attachments

  • zpool import.JPG (84.5 KB)

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Volume Manager, Add storage to an existing pool
- Volume to extend : “Volume_A”
- Available disks: Selected the 2 available disks (2.0 TB 2 drives)
- Selected “Stripe”
- Add Volume
Operation was shown as completed successfully.
It seems you added a 2-drive stripe vdev to your pool, which was an existing 2-drive stripe vdev. Although it's too late now, for future reference, a stripe has no redundancy and is less reliable than a single disk. If this was not what you intended, please read this guide before creating your new pool.
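The difference can be illustrated at the CLI (the FreeNAS GUI drives the same operations; `ada0`..`ada3` are placeholder device names, not the ones from this system):

```shell
# Stripe (no redundancy): losing either disk destroys the pool.
zpool create tank /dev/ada0 /dev/ada1

# Mirror: the pool survives the loss of one disk per mirror vdev.
zpool create tank mirror /dev/ada0 /dev/ada1

# Extending an existing pool by adding another mirror vdev:
zpool add tank mirror /dev/ada2 /dev/ada3
```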
Selected "Volume_A",
- Selected Detach "Volume_A" pool, got a warning to save the key. Saved it.
- Unselected "wipe data"
- Unselected "wipe structure"
- Confirmed
Operation was shown as completed successfully and the pool was gone.
At this point you could have moved your pool to a new machine, or reimported it on your existing machine. That's the purpose of detaching a pool (assuming you don't want to "wipe data").

Unfortunately, at this point you did this:
After successfully formatting the 2x2 TB disks
The effect of this was to destroy the 2nd 2-drive stripe vdev, and thus your entire pool.
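The detach/reimport round trip described above can be sketched from the CLI; `zpool export`/`import` are the operations underneath the GUI's detach and import (for an encrypted pool, the geli providers must be attached before the import will find the pool):

```shell
zpool export Volume_A    # detach: flush writes and mark the pool exported
zpool import             # scan attached disks for importable pools
zpool import Volume_A    # reimport by name (or by the numeric id shown)
```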
 

Goldman2k

Cadet
Joined
Oct 2, 2015
Messages
4
Thanks for the explanation and link.
For my understanding: why did the pool volume not extend from 8 TB to 12 TB after I had added the two 2.0 TB disks?

To recover, I intend to set up FreeNAS in a virtual machine with the same pool layout, read the pool device ids from the 2x2.0 TB disks, and write them onto the physical ones.
I hope to convince the zpool to accept them and bring the pool back as "degraded" and read-only.
Can the pool device id be calculated from the pool configuration/disk settings, or is the virtual simulation necessary?
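(For reading the ids off a disk: `zdb` can print the on-disk ZFS labels of a pool member, which include the pool guid and that device's guid. The provider must be geli-attached first; the gptid below is a placeholder:)

```shell
# Each member device stores four copies of its ZFS label; -l prints them,
# including pool_guid and the guid of this particular device:
zdb -l /dev/gptid/<member-gptid>.eli
```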
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
why did the pool volume not extend from 8TB to 12TB after I had added the two 2.0 TB disks?
Sorry, I have no idea.
I hope to convince the zpool to accept them and bring the pool back as "degraded" and read-only.
I doubt that will work, but please report back either way. Good luck!
Can the pool device id be calculated out of the pool configuration/disk settings or is the virtual simulation necessary?
Again, sorry, I have no idea.

Don't be fooled by the "FreeNAS Guru" label in my profile, that's merely what the forum software inserts based on post count ;)
 

Cheejyg

Dabbler
Joined
Dec 11, 2016
Messages
31
You could've just done
Code:
zpool import Volume_A
or I believe it's
Code:
zpool import -m Volume_A
I think the latter is the correct one (I forget which parameter is needed; just run the first command and the error message will tell you).

Then after you successfully import it, you can do a
Code:
zpool status Volume_A
and it will tell you which devices/drives are missing.

At least that's what happened in my case.
 