After reboot all drives will not decrypt

mydani

Cadet
Joined
Feb 21, 2021
Messages
8
Hello guys,

I have little hope left, so let me tell you what I did and maybe you can tell me whether I screwed up.

I had been running FreeNAS-11.3-RELEASE without issues for a long time. Now my first dataset was running full, so I:
  • added 3 new drives
  • disconnected 1 of the 4 drives of the original pool, because the board does not have enough SATA connectors
  • started FreeNAS and created a 2nd pool with the 3 new drives
  • took a snapshot and started a zfs send | zfs receive to the new pool (a more interruption-safe way to run this is sketched right after this list)
  • unfortunately the transfer was interrupted (I closed the terminal *argh*), so I had to delete the new pool. I did that in the UI, and I only deleted the _new_ pool.
  • restarted my FreeNAS system and wanted to unlock the original pool, and since then the unlock fails
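In hindsight, this is roughly how I should have run the replication so that closing the terminal could not kill it. The pool and snapshot names are made up, and I am assuming that tmux is available and that the ZFS version on 11.3 supports resumable receive:

# run the replication inside tmux so it survives a closed SSH session
tmux new -s replication
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -s -F newpool/backup
# if the stream is interrupted anyway, -s leaves a resume token on the receiving dataset
zfs get -H -o value receive_resume_token newpool/backup
zfs send -t <token-from-previous-command> | zfs receive -s -F newpool/backup

This is the error I get when trying to unlock: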

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 95, in main_worker
res = loop.run_until_complete(coro)
File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
return future.result()
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 51, in _run
return await self._call(name, serviceobj, methodobj, params=args, job=job)
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 43, in _call
return methodobj(*params)
File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 43, in _call
return methodobj(*params)
File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 964, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 382, in import_pool
zfs.import_pool(found, found.name, options, any_host=any_host)
File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 382, in import_pool
zfs.import_pool(found, found.name, options, any_host=any_host)
File "libzfs.pyx", line 870, in libzfs.ZFS.import_pool
libzfs.ZFSException: no such pool or dataset
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1656, in unlock
'cachefile': ZPOOL_CACHE_FILE,
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1127, in call
app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1074, in _call
return await self._call_worker(name, *args)
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1094, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1029, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1003, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('no such pool or dataset',)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 349, in run
await self.future
File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 386, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 960, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1668, in unlock
raise CallError(msg)
middlewared.service_exception.CallError: [EFAULT] Pool could not be imported: 3 devices failed to decrypt.


It says "[EFAULT] Pool could not be imported: 3 devices failed to decrypt. " I reconnected the disconnected 4th drive for the original pool, and this one seems fine.
What could have gone wrong and what can I try to fix it?

Best Regards,
Daniel

PS:

root@datengrab:~ # zpool status
pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0 days 00:05:59 with 0 errors on Mon Feb 22 03:50:59 2021
config:

NAME            STATE     READ WRITE CKSUM
freenas-boot    ONLINE       0     0     0
  mirror-0      ONLINE       0     0     0
    da0p2       ONLINE       0     0     0
    da1p2       ONLINE       0     0     0

errors: No known data errors
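
PPS: This is what I intend to check next. The adaXp2 provider names and the key path under /data/geli are my assumptions about the usual FreeNAS GELI layout, so please correct me if that is wrong:

# which GELI providers are currently attached
geli status
# do the data partitions still exist on each disk?
gpart show ada0 ada1 ada2 ada3
# is the pool's encryption key file still present?
ls -l /data/geli
# is the GELI metadata at the end of a data partition still readable? (example provider)
geli dump /dev/ada1p2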
 

mydani

Some more info for you guys:

root@datengrab:~ # camcontrol devlist
<TOSHIBA HDWD130 MX6OACF0> at scbus0 target 0 lun 0 (pass0,ada0)
<TOSHIBA HDWD130 MX6OACF0> at scbus1 target 0 lun 0 (pass1,ada1)
<Hitachi HDS5C3030BLE630 MZ6OAAB0> at scbus2 target 0 lun 0 (pass2,ada2)
<TOSHIBA HDWD130 MX6OACF0> at scbus3 target 0 lun 0 (pass3,ada3)
<AHCI SGPIO Enclosure 2.00 0001> at scbus6 target 0 lun 0 (pass4,ses0)
<TOSHIBA USB FLASH DRIVE > at scbus8 target 0 lun 0 (pass5,da0)
<TOSHIBA USB FLASH DRIVE > at scbus9 target 0 lun 0 (pass6,da1)

root@datengrab:~ # zpool import
root@datengrab:~ #
 

mydani

What I found: the disks are ada0, ada1, ada2 and ada3, with ada1 being the one I disconnected and reconnected.
gpart list shows "No such geom" for ada0, ada2 and ada3.
It looks like the partition information on those three disks is lost. Any clue whether it would make sense to try restoring it? My rough idea is sketched below.
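Since the data disks are the same size, my idea is to copy the partition table from the intact disk onto the damaged ones and hope that the GELI metadata at the end of the data partition is still there. This is only a sketch of what I have in mind, not something I have tested, and the device names are from my system:

# save the partition table of the intact disk (ada1)
gpart backup ada1 > /root/ada1.gpart
# write the same layout onto one of the damaged disks (-F destroys whatever table is there)
gpart restore -F ada0 < /root/ada1.gpart
# check whether the GELI metadata in the data partition is readable again
geli dump /dev/ada0p2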
 

mydani

My system:

  • Motherboard make and model: Intel genuine server board
  • CPU make and model: Intel Xeon CPU E3-1230 V2 @ 3.30GHz
  • RAM quantity: 32GB
  • Hard drives: 4 x 3TB, Toshiba MX6OACF0
  • Hard disk controllers: Intel Cougar Point AHCI SATA controller
  • Network cards: 2 x Intel(R) PRO/1000 Network Connection 7.6.1-k, 1 connected
 

mydani

Any clue is highly appreciated; I will provide any missing information on the fly... :)
 

mydani

What also confuses me is that the UI still knows about the pool, while on the CLI zpool does not list it.
Even if I manage to redo the partition tables (it is clear to me how to do that), how do I get the UI and the CLI back in sync?
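To at least compare what the middleware (and therefore the UI) believes with what ZFS itself sees, I am thinking of something like the following; midclt is the middleware client that ships with FreeNAS, but whether pool.query is the right call here is my assumption:

# what the FreeNAS middleware thinks exists (this drives the UI)
midclt call pool.query | python3.7 -m json.tool
# what ZFS sees as importable but not imported
zpool import
# what is actually imported right now
zpool list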
 

mydani

Maybe one more update: I cloned one drive, backed up the partition table of the untouched drive, and restored that partition table onto the clone via gpart restore. Then I dumped the GELI metadata from the untouched drive and restored it onto the clone.
What I can do now is attach the clone using geli attach... Not sure what this really tells me, but it seems like a step forward.
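For reference, this is roughly the sequence I used on the clone. The device names are placeholders (ada1 = the untouched drive, ada4 = the temporarily attached clone), and the key file name under /data/geli is an assumption. I am also aware that the restored metadata carries the master key of the untouched drive, so even though the attach succeeds, the decrypted data on the clone may well be garbage:

# restore the partition table of the untouched drive onto the clone
gpart backup ada1 > /root/ada1.gpart
gpart restore -F ada4 < /root/ada1.gpart
# copy the GELI metadata of the untouched drive onto the clone
geli backup /dev/ada1p2 /root/ada1p2.geli
geli restore /root/ada1p2.geli /dev/ada4p2
# attach the clone with the pool's key file (path is an assumption)
geli attach -k /data/geli/<pool-key>.key /dev/ada4p2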
 

mydani

In the end I did what seemed most reasonable: I migrated to a Synology NAS, and now everything is fine. Bye!
 