Hi all,
For a long time I was only a silent reader of this forum, but now I could really use some help fixing my messed-up pool.
System information:
Motherboard make and model
- ASRock Rack E3C236D2I Intel C236
CPU make and model
- Intel(R) Pentium(R) CPU G4560
RAM quantity
- Samsung 16GB 2Rx8 PC4 - 2133P ECC
Hard drives, quantity, model numbers, and RAID configuration, including boot drives
- 4x WD Red 4TB, 1 SSD, 1 USB Stick as boot device
Hard disk controllers
- onboard
Network cards
- onboard
Version:
FreeNAS-11.3-U2
What happened?
In a mood of spring cleaning/corona boredom, I decided to clean up my FreeNAS a little bit.
I had two pools in FreeNAS ("Daten" on the WD Reds and "VMs" on the SSD). "VMs" was only for testing around with some jails/VMs and wasn't needed anymore.
I deleted the pool "VMs" via Pools --> Export/Disconnect Pool and checked "Destroy data on this pool" and "Delete configuration of shares that used this pool".
After that I wasn't able to unlock the other pool "Daten" at all. I tried everything I found searching the forum, but nothing seems to solve my problem.
When trying to unlock with the passphrase, it says: "FAILED - [EFAULT] Pool could not be imported: 4 devices failed to decrypt."
Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 97, in main_worker
    res = loop.run_until_complete(coro)
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 53, in _run
    return await self._call(name, serviceobj, methodobj, params=args, job=job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 965, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 390, in import_pool
    'Failed to mount datasets after importing "%s" pool: %s', name_or_guid, str(e), exc_info=True
  File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 380, in import_pool
    raise CallError(f'Pool {name_or_guid} not found.', errno.ENOENT)
middlewared.service_exception.CallError: [ENOENT] Pool 11159862125174688996 not found.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1661, in unlock
    'cachefile': ZPOOL_CACHE_FILE,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1131, in call
    app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1078, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1098, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1033, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1007, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [ENOENT] Pool 11159862125174688996 not found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 349, in run
    await self.future
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 386, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 961, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1673, in unlock
    raise CallError(msg)
middlewared.service_exception.CallError: [EFAULT] Pool could not be imported: 4 devices failed to decrypt.
Trying the recovery keys (geli.key and geli_recovery.key) gives the same result.
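In case it helps, this is roughly what I would try next from the shell to attach one provider by hand. I haven't run it yet: the gptid comes from the storage_encrypteddisk table further down, and as shown below those device nodes don't currently exist, so I expect it to fail.
Code:
# show which GELI providers are currently attached
geli status

# attach one provider manually with the pool key (prompts for my passphrase);
# gptid taken from the first row of storage_encrypteddisk below
geli attach -k /data/geli/447eff41-ba55-4dcb-8f8d-db5d90c8f654.key /dev/gptid/39f04989-f194-11e7-8cff-d05099c12db9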
It seems there is a mess with the disks and the pool, but I have no idea how to fix it.
Maybe one of you has had a similar problem or knows how to deal with it.
The disks in FreeNAS are showing up as "Unused", but the pool is still there...
I've already tried different things I found in other posts, but nothing worked for me.
What I have already tried:
- Rebooting
- Switching back to an older Boot Environment
- Using geli.key / geli_recovery.key (without passphrase) via the GUI
- Backing up the existing geli.key and copying my backup to /data/geli/ as 447eff41-ba55-4dcb-8f8d-db5d90c8f654.key (checksum comparison sketched below)
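To double-check that the restored key is really identical to my backup, I would compare checksums like this (the backup path is just a placeholder for wherever my copy lives):
Code:
# compare the key FreeNAS expects with my backup copy; the hashes should match
sha256 /data/geli/447eff41-ba55-4dcb-8f8d-db5d90c8f654.key /path/to/backup/geli.key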
Some hopefully useful information:
Code:
root@freenas:~ # zpool status -v
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:02:18 with 0 errors on Mon Apr  6 03:47:18 2020
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2       ONLINE       0     0     0

errors: No known data errors
Code:
root@freenas:~ # sqlite3 /data/freenas-v1.db 'select * from storage_volume;'
1|Daten|11159862125174688996|2|447eff41-ba55-4dcb-8f8d-db5d90c8f654
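The GUID in that row (11159862125174688996) is the same one the traceback above says could not be found. If it helps, I can also post the output of this (without arguments it only lists importable pools and doesn't change anything):
Code:
# list pools that are currently visible for import
zpool import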
Code:
root@freenas:/dev/gptid # ls -al
total 1
dr-xr-xr-x   2 root  wheel      512 Apr 11 18:59 .
dr-xr-xr-x  12 root  wheel      512 Apr 11 18:59 ..
crw-r-----   1 root  operator  0x8a Apr 11 18:59 d10485a4-8d18-11e5-92af-d05099c12db9
Code:
root@freenas:~ # camcontrol devlist
<WDC WD40EFRX-68N32N0 82.00A82>       at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD40EFRX-68N32N0 82.00A82>       at scbus1 target 0 lun 0 (pass1,ada1)
<WDC WD40EFRX-68WT0N0 82.00A82>       at scbus2 target 0 lun 0 (pass2,ada2)
<WDC WD40EFRX-68N32N0 82.00A82>       at scbus3 target 0 lun 0 (pass3,ada3)
<Samsung SSD 850 EVO 250GB EMT03B6Q>  at scbus4 target 0 lun 0 (pass4,ada4)
<AHCI SGPIO Enclosure 2.00 0001>      at scbus6 target 0 lun 0 (pass5,ses0)
<MUSHKIN MKNUFDVS16GB PMAP>           at scbus8 target 0 lun 0 (pass6,da0)
Code:
root@freenas:~ # glabel status
                                      Name  Status  Components
                              label/efibsd     N/A  da0p1
gptid/d10485a4-8d18-11e5-92af-d05099c12db9     N/A  da0p1
Code:
root@freenas:~ # gpart show
=>      40  30949296  da0  GPT  (15G)
        40       504       - free -  (252K)
       544    532480    1  efi  (260M)
    533024  30416304    2  freebsd-zfs  (15G)
  30949328         8       - free -  (4.0K)
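That only shows the boot stick, so the four WD Reds (ada0-ada3) and the SSD apparently have no partition tables anymore. From what I've read, gpart can sometimes restore a GPT from its backup copy at the end of the disk, but I don't dare run the recover step before someone confirms it's safe here:
Code:
# check one of the data disks explicitly (I expect "No such geom" if the table is gone)
gpart show ada0

# possibly restore the GPT from its backup copy - NOT run yet!
gpart recover ada0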
Code:
root@freenas:/ # ls -al /data/geli
total 4
drwxrwxrwx  2 root  www    3 May 21  2018 .
drwxr-xr-x  8 www   www   14 Apr 11 19:01 ..
-rw-rw-rw-  1 root  www   64 May 21  2018 447eff41-ba55-4dcb-8f8d-db5d90c8f654.key
Code:
root@freenas:/ # sqlite3 /data/freenas-v1.db 'select * from storage_encrypteddisk;'
1|1|{serial_lunid}WD-WCC7K5XYT8CP_50014ee263e4342f|gptid/39f04989-f194-11e7-8cff-d05099c12db9
2|1|{serial_lunid}WD-WCC7K0AX8EHC_50014ee2b939cb9c|gptid/3c419561-f194-11e7-8cff-d05099c12db9
3|1|{serial_lunid}WD-WCC4E4LHU01N_50014ee261a10e07|gptid/3e9a33d0-f194-11e7-8cff-d05099c12db9
4|1|{serial_lunid}WD-WCC7K0DA9NLP_50014ee264cf5b58|gptid/404437f9-f194-11e7-8cff-d05099c12db9
root@freenas:/ #
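So the database still knows the four gptid labels it expects, but none of them exist under /dev/gptid (see above). As a last idea, I thought about checking whether any GELI metadata survives on the disks at all; as far as I know this only reads the metadata sector:
Code:
# dump GELI metadata from one disk, if any is left (read-only as far as I know);
# normally the metadata sits on the partition (e.g. ada0p2), but those are gone
geli dump /dev/ada0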
Thanks!