Issue unlocking pool

pylotwolf
Cadet
Joined: Mar 22, 2022
Messages: 2
System: 6 × 1.5 TB disks (5+1), GELI-encrypted under 11.x, currently running 12.0-U8
Issue:
The pool went offline while I was on vacation. One disk reported a failed SMART check (though it, and all the other drives, just passed a long SMART test, and none show errors in their on-disk SMART history). I disconnected the pool (saving the encryption key) and tried to reimport it. The drive that reported the failed SMART check won't decrypt, and the pool fails to import. When I try to import the pool with the five other drives using the recovery key, it finds the pool but then also fails to decrypt:

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
self.logger.error(
File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
await self.future
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
return await self._call(
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)


I tried to decrypt each drive on the command line using the exported encryption key, and it doesn't work: it asks for my passphrase, which I do have, but the key is rejected. I also have the recovery key, and that too asks for a passphrase on the command line.

root@freenas:/tmp # geli attach -k pool_store_recovery.key /dev/ada1p2
Enter passphrase:

root@freenas:/tmp # geli attach -k pool_store_encryption.key /dev/ada1p2
Enter passphrase:
geli: Wrong key for ada1p2.
geli: There was an error with at least one provider.

The recovery key worked fine in the UI without a passphrase. How can I use it on the command line to test the individual disks? Help!
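If the recovery key slot was added without a passphrase component (which is how the FreeNAS UI normally creates it), geli can be told to skip the passphrase prompt and rely on the key file alone with -p. A minimal sketch for testing one disk, assuming the key file and device names from above:

# -p: do not use a passphrase as a key component, use only the key file
geli attach -p -k pool_store_recovery.key /dev/ada1p2

# list attached GELI providers to confirm ada1p2.eli appeared
geli status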
 

pylotwolf
Cadet
Joined: Mar 22, 2022
Messages: 2
It turned out the real error was not surfacing in the UI. I ran the following on the command line:

root@freenas:/data/geli # zpool import store
cannot import 'store': I/O error
Recovery is possible, but will result in some data loss.
Returning the pool to its state as of Mon Feb 28 00:00:05 2022
should correct the problem. Approximately 5 seconds of data
must be discarded, irreversibly. After rewind, at least
one persistent user-data error will remain. Recovery can be attempted
by executing 'zpool import -F store'. A scrub of the pool
is strongly recommended after recovery.
root@freenas:/data/geli # zpool import -F store

The pool then imported correctly.
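As the import output recommends, a scrub after the rewind is strongly advised so the remaining persistent user-data error gets surfaced. A minimal follow-up, assuming the pool name from above:

# start a scrub of the recovered pool
zpool scrub store

# check scrub progress and list any files affected by the remaining error
zpool status -v store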
 