Pool has gone offline after forced reboot

Daxter304

Cadet
Joined
Nov 30, 2021
Messages
4
Hi there,
I am new to using TrueNAS, but not new to using the terminal (Linux and such).
I had a jail running Plex, and Plex seemed to be hung, so I restarted the jail. It got stuck restarting, and when I reloaded the page the jail disappeared. So I told the server to restart, but that hung too. After waiting several minutes I forced the server off, then turned it back on.
When it came back online I discovered this:
[Screenshot: Screen Shot 2021-11-30 at 08.11.31.png]
Clicking EXPORT/DISCONNECT brings this up:
[Screenshot: Screen Shot 2021-11-30 at 08.06.33.png]


So after seeing this I knew I had screwed up; obviously this system does not handle forced restarts too well...
I set out to try to fix it, and after reading many threads all over the place I kept landing back at the same message:
[Screenshot: Screen Shot 2021-11-30 at 08.08.10.png]


But when I run zpool import -F on the pool, my system just hangs for a moment, then reboots itself, and nothing changes.
I did manage to get a photo by connecting directly to the server and running the command again; this is what it says:
[Photo: IMG_20211130_073905.jpg]
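
For reference, the import variants those threads keep suggesting look roughly like this; "tank" is just a stand-in for the actual pool name:

Code:
# List pools that ZFS can see but that are not currently imported
zpool import

# Dry run of the rewind recovery: -n reports whether -F could work without changing anything
zpool import -F -n tank

# Try a read-only import under an alternate root so nothing is written to the damaged pool
zpool import -o readonly=on -f -R /mnt tank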


So that all lands me here... I don't know what to do, but I'd love to recover this pool.
If not, then I'd at least like to recover the data, which I'm also not sure how to do.
Please help, thank you!

Here is some more data that seems helpful/relevant (the pool is on ada1):
[Screenshot: Screen Shot 2021-11-30 at 08.08.35.png]
[Screenshot: Screen Shot 2021-11-30 at 08.09.25.png]
 

Alecmascot

Guru
Joined
Mar 18, 2014
Messages
1,177
I think your single-drive pool is toast.
You could try an export from the GUI, with only the confirm box ticked.
If it works, then import it via the GUI.

Otherwise, destroy and recreate the pool and restore from your backup.
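
If the GUI refuses, the rough shell equivalent of that export/import cycle would be something like this (pool name is a placeholder):

Code:
# Cleanly export the pool, if ZFS will release it
zpool export tank

# Then try a normal import and check the pool/device state afterwards
zpool import tank
zpool status -v tank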
 

Daxter304

Cadet
Joined
Nov 30, 2021
Messages
4
Alecmascot said:
I think your single-drive pool is toast.
You could try an export from the GUI, with only the confirm box ticked.
If it works, then import it via the GUI.

Otherwise, destroy and recreate the pool and restore from your backup.
Yeah, I'll give that a shot.
If that doesn't work, I don't have a backup; I created this pool four days ago and have never used TrueNAS before, so I hadn't gotten around to setting up backups yet.
 

Daxter304

Cadet
Joined
Nov 30, 2021
Messages
4
OK, importing the pool gives this error:
[Screenshot: 1638310992861.png]

Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 94, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
    zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
  File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
    await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)
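
A libzfs "I/O error" at import time usually means reads from the disk itself are failing, so it may be worth checking the drive's SMART data before blaming ZFS. A quick check, assuming the pool's disk really is ada1 as in the earlier screenshots:

Code:
# Full SMART report for the pool's disk
smartctl -a /dev/ada1

# Kick off a short self-test, then read the results a few minutes later
smartctl -t short /dev/ada1
smartctl -l selftest /dev/ada1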
 

Daxter304

Cadet
Joined
Nov 30, 2021
Messages
4
I ended up nuking everything related to that pool and re-creating it all. I don't know why a forced shutdown would screw it up that much, but hopefully it won't happen again.
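
In the meantime I'm at least setting up a basic snapshot and replication routine so the next failure isn't fatal. A minimal sketch of what I have in mind ("tank" and "backuppool" are placeholders):

Code:
# Recursive snapshot of everything in the pool
zfs snapshot -r tank@nightly-2021-11-30

# Confirm the snapshots exist
zfs list -t snapshot -r tank

# Replicate the snapshot to a second pool (or pipe over ssh to another box)
zfs send -R tank@nightly-2021-11-30 | zfs receive -F backuppool/tank-backup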
 