Cannot import and heal a pool

maximos13

Cadet
Joined
Mar 16, 2022
Messages
6
I had a 5-drive RAIDZ1 configuration. One of the drives died, I replaced it, and now I cannot import the pool. Please help!
Error log:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
self.logger.error(
File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
await self.future
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
return await self._call(
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)

All drives are in good health:
[screenshot of drive status attached]
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The instruction is there in the output...

zpool import -f max should do it, but with the data corruption mentioned, it may be necessary to use -F. Without your parity, you may be unable to recover the pool given the combination of corruption and a missing disk... one of the reasons this forum strongly recommends RAIDZ2.

After a successful manual import at the CLI, zpool export max and then import it with the GUI again if you want it back to normal.
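
Roughly, the whole sequence would be (assuming the pool really is named max; adjust if not):

Code:
# force the import; needed when the pool was last used by another system
zpool import -f max

# if that fails with corruption errors, recovery mode discards the last
# few transactions to try to reach an importable state
zpool import -F max

# after a successful manual import, export the pool so the GUI
# can import it again and register it properly
zpool export max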
 

maximos13

Cadet
Joined
Mar 16, 2022
Messages
6
Thank you so much. The thing is that I also changed the host machine; I don't have access to the old one, only the disks remained. So on the new machine I ran:

Code:
zpool import -f max

[screenshot of the zpool import output attached]

I think drives MAX1-MAX4 still contain my data; only MAX5 needs to be replaced for redundancy.
What else can I try to mount the pool? I think my problem is that I had both a dying disk AND a change of host. What should I do now?
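
For reference, here is roughly what I understand I can run to check what the new host sees (the -d path is my guess; it may differ between systems):

Code:
# list pools that are visible but not yet imported,
# including the state of each member disk
zpool import

# point the scan at an explicit device directory, in case the disks
# got different names on the new machine (path is an example)
zpool import -d /dev -f max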
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Well, first of all, it seems you didn't use the command I suggested. You left off the pool name, so we don't know if it would work or not.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So that's telling you at least one of the remaining disks isn't behaving correctly, leaving you unable to import the pool.

You can perhaps try with -F instead, although I haven't seen many cases where that will work.
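
If you want to check first without committing to anything, there's also a dry-run form (same pool name assumed):

Code:
# dry run: reports whether a recovery-mode import would succeed,
# without actually discarding any transactions
zpool import -Fn max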

Beyond that, you're left with the suggestion in the output... destroy and re-create the pool from backup. If you don't have a backup and/or are desperate to get files back from the pool, there's Klennet... you'll need a Windows machine and, if you are really worried about the data, a dd copy of each disk to work on. Then you can see what might be recovered and decide if you want to pay... it's not cheap, so you'll have to decide.
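
For the dd copies, something along these lines per disk (device name and destination path are placeholders for your setup; the target needs at least as much free space as the disk):

Code:
# image one member disk; repeat for each disk in the pool
# conv=noerror,sync keeps going past read errors, padding with zeros
dd if=/dev/ada1 of=/mnt/rescue/max1.img bs=1M conv=noerror,sync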

There's also an "open source" option, which may or may not be able to recover something:

https://github.com/Stefan311/ZfsSpy
 

maximos13

Cadet
Joined
Mar 16, 2022
Messages
6
Thank you for your help, sretalla!))

But what is the main problem? The fact that the machine the array ran on changed? I thought RAIDZ could heal itself after one of the drives is replaced? Can you not import an array that is not healthy?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
But what is the main problem?
One missing disk and one or more other disks with issues or corrupt data on them, leaving you with no parity data to recover from, as you're already down by one disk in a RAIDZ1.

The fact that the machine the array ran on changed?
Not so much of a problem when the pool is healthy.

I thought RAIDZ could heal itself after one of the drives is replaced?
It can if you can import it. It all depends on what got corrupted on the remaining disks... if there were no corruption/problems with them, it would import and resilver onto the new disk (see the sketch at the end of this post).

Can you not import an array that is not healthy?
You can if there are enough healthy items remaining.
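
For what it's worth, the normal self-heal path when a pool does import looks roughly like this (disk names are just examples):

Code:
# attach the new disk in place of the failed one; ZFS then resilvers
# data and parity onto it (MAX5 and ada5 are example names)
zpool replace max MAX5 /dev/ada5

# watch the resilver progress until the pool reports healthy again
zpool status -v max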
 

maximos13

Cadet
Joined
Mar 16, 2022
Messages
6
Thank you so much, sretalla!
I will try -F, then ZfsSpy, and then I will look for the Klennet utility. BTW, has anyone in this community ever used it?
 

zsw12abc

Dabbler
Joined
Nov 22, 2022
Messages
25
Hey mate,
I'm facing the same issue as you did.
Did you fix it with
Code:
zpool import -F [poolname]
?
 