Error importing pool

teldar

Dabbler
Joined
Apr 11, 2018
Messages
18
I got the following error when trying to get my pool back online after I had motherboard issues.

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 91, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
self.logger.error(
File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 361, in run
await self.future
File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 397, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 1399, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
return await self._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1203, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1209, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1136, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)
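
For what it's worth, that is what the import through the web UI throws. From the console I can poke at it directly with something like the following (the pool name below is only a placeholder, not my actual pool name):

zpool import                  # list pools ZFS can see but has not imported, with per-disk state
zpool import -R /mnt mypool   # try the import by hand, mounted under /mnt the way FreeNAS does it

I can post that output if it helps.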
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey @teldar,

What you described here is not logical...

If you had only had a simple motherboard issue, you would have re-used the same boot device and the same data disks. At boot, you would not have needed to import your pool at all; it would have loaded by itself as usual.

So please, tell us exactly what happened: the problem, how you fixed it, ... Also, please give us the complete details of your setup.
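
If you can get to a shell on the box, the output of a few standard commands would help us see what the system itself sees (the device name in the last one is only an example; run it for each of your disks):

zpool status -v         # state of any pool that did import
zpool import            # pools ZFS can see but has not imported, and their per-disk status
camcontrol devlist      # which disks FreeBSD actually detects
glabel status           # the gptid labels FreeNAS uses to address the disks
smartctl -a /dev/ada0   # SMART health for one disk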
 

teldar

Dabbler
Joined
Apr 11, 2018
Messages
18
It's not logical. I have a Ryzen 3600 in an ASRock X370 Taichi. I was using a mirrored pair of ADATA 128 GB NVMe drives as system drives, one of which failed, so I had been running on a single drive for a few weeks. For a controller I was using an LSI 9240 flashed with 9211 firmware, with 4 TB Red drives that I was switching over to 10 TB white-label drives.

My server crashed while I was out of town, and when I came home I swapped in an identical motherboard. I had an issue with the LSI card working in the computer, so I took it out and connected the drives to the motherboard. (The LSI card appears to need to be NOT in one of the metal-reinforced PCIe slots - I think I can get it to work again.) I couldn't get FreeNAS to see my main pool and was getting this string of errors, so I popped in a couple of thumb drives that I had used as boot drives before switching to the NVMe drives, updated to 12, and continued to get errors. So I switched back to the NVMe drive and got errors. So I switched back to the thumb drives and did a clean install. More errors.

I got this alert emailed to me from my server while I was working on it. Not sure which iteration.
New alerts:
* Failed to check for alert VolumeStatus:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/alert.py", line 706, in __run_source
alerts = (await alert_source.check()) or []
File "/usr/local/lib/python3.8/site-packages/middlewared/alert/source/volume_status.py", line 31, in check
for vdev in await self.middleware.call("pool.flatten_topology", pool["topology"]):
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
return await self._call(
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
File "/usr/local/lib/python3.8/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 438, in flatten_topology
d = deque(sum(topology.values(), []))
AttributeError: 'NoneType' object has no attribute 'values'

I tried to import the pool when I switched to the cleanly formatted thumb drives. I got a report that a couple of my drives are unavailable (or were at some point in time), but smartctl checks out OK on all of them.
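
(For the record, the SMART checks were nothing fancy - roughly the following, with the device name changed for each disk; /dev/ada0 is just an example here:

smartctl -a /dev/ada0         # overall health, attributes and the error log
smartctl -t short /dev/ada0   # short self-test; the result shows up in the -a output afterwards

Every drive reported healthy.)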

The only plugin I actually had running was Plex. I was just starting to look at Sonarr, as it looked like my server was working pretty well, but I don't believe I had tried to install that plugin yet. I had tried to upgrade my jail for the Plex plugin because I was unable to update it after I updated the system to 12.0; I was getting an error about upgrading instead of updating. I was unable to find any info on the internet about 12.0 breaking the Plex plugin update.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
updated to 12

So you tried to upgrade from FreeNAS 11 or earlier to TrueNAS 12 on a defective server that was unable to load its pool?

That was not a great idea at all...

I have not tried to upgrade any of my servers yet, so I will let others see what they can do for you here.
 

teldar

Dabbler
Joined
Apr 11, 2018
Messages
18
I should have added: the NVMe system drive knows the pool is there but can't access it. When I try to import it on the thumb drives, I get the errors I posted.
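
(To put it another way: the boot environment still has the pool in its config, but the disks don't answer. One way to check which side is wrong is to read the ZFS label straight off a data disk - the partition below is only an example, though on a FreeNAS-built disk the data partition is usually the second one:

zdb -l /dev/ada0p2   # dump the ZFS label: pool name, guid, and the vdev layout this disk thinks it belongs to

If the labels read fine on every disk but the import still ends in an I/O error, that points more at the controller or cabling than at the disks.)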
 

teldar

Dabbler
Joined
Apr 11, 2018
Messages
18
So you tried to upgrade from FreeNAS 11 or earlier to TrueNAS 12 on a defective server that was unable to load its pool?

That was not a great idea at all...

I have not tried to upgrade any of my servers yet, so I will let others see what they can do for you here.

No. I had upgraded to 12 a few weeks ago. It was all working fine, actually, except for one NVMe drive I had removed. The only thing I was having problems with was trying to figure out how to update the Plex plugin. I went out of town, watched a couple of movies off it via Plex, came home, and found it had crashed. All these other problems have been since Monday. I had all the spare parts available. Before 12 I was on whatever the latest version of 11 was - 11.3-U4 or something. It was working well enough that I had put two additional drives in and played with them as additional pools, transferring files and playing with setting up shares. It was all working great for a few weeks.


When I got home I swapped motherboards, put everything back in the way it had been, and had problems with the LSI card (couldn't get video out with the LSI card next to the video card; didn't know that was the problem until I tried it in the bottom slot instead of the middle and got video again). I took it out, connected the drives directly to the motherboard (it's got 10 ports), started it up, and now I can't access all my data.

 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
It was all working fine, actually, except for one NVMe drive I had removed.

So your pool was degraded when your motherboard failed?

So, from what you write:

The last time everything was in order was before your upgrade to TrueNAS. Your hardware was running normally and you were using FreeNAS 11.3.

Next, you tried to upgrade to TrueNAS 12.

It did not succeed completely and you ended up with a degraded pool (you removed one of your drives).

Then your motherboard failed and your server went down.

You tried to reboot with a new motherboard and the same boot device; it failed.
You tried to reboot with new boot devices running either 11.3 or 12, and it failed.

Is that what happened?
 

teldar

Dabbler
Joined
Apr 11, 2018
Messages
18
So your pool was degraded when your motherboard failed?

So, from what you write:

The last time everything was in order was before your upgrade to TrueNAS. Your hardware was running normally and you were using FreeNAS 11.3.

Next, you tried to upgrade to TrueNAS 12.

It did not succeed completely and you ended up with a degraded pool (you removed one of your drives).

Then your motherboard failed and your server went down.

You tried to reboot with a new motherboard and the same boot device; it failed.
You tried to reboot with new boot devices running either 11.3 or 12, and it failed.

Is that what happened?

Edit: I may have misunderstood. Possibly yes. I believe my pool degraded when my motherboard failed, and I had issues with my LSI card not letting the new motherboard POST properly...


Mostly. The only thing degraded was an NVMe drive in the mirrored boot pool, which had been removed before I upgraded to 12. I would guess I had run 11.3 for a month or so on one drive and had run 12.0 for another few weeks. The upgrade to TrueNAS was smooth and everything worked for weeks, except for the Plex update.
Yesterday I swapped motherboards, had problems with the motherboard POSTing with the LSI card adjacent to my video card, so I shut it down to play with it tonight. And now it can't pick my pool back up. It picked up the individual drives I had put in to play with while learning 12, and when I put the thumb drives with 11.3 on them back in, they were able to see the individual-drive pools as well.
 

teldar

Dabbler
Joined
Apr 11, 2018
Messages
18
The error in my original post is what I get when I try to import the pool while booted from the clean-install TrueNAS mirrored thumb drives.
 

teldar

Dabbler
Joined
Apr 11, 2018
Messages
18
Solved. I reattached the drives to the LSI card, which was installed in the furthest-away PCIe slot.
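
For anyone who hits the same thing later: once the disks were back behind a working controller the import went through, and a couple of generic checks are enough to confirm the pool is healthy again (pool name is a placeholder):

zpool status -v mypool   # every vdev should show ONLINE with no read/write/checksum errors
zpool scrub mypool       # optional: scrub to verify the data after the crashes and hardware swapping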
 

Shama001

Cadet
Joined
Jun 12, 2022
Messages
1
I am facing the same challenge; I am not able to import the pool. I get:

Error importing pool

('I/O error',)

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
self.logger.error(
File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
await self.future
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
return await self._call(
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)
 