Volume import error

Min-3 | Cadet | Joined: Jun 19, 2021 | Messages: 6
Hello,
I am still quite a beginner with TrueNAS, which is why I came here to ask for help.

Yesterday evening I shut down my server, as I have done every night for about a week, because the heat at night is unbearable at home.
When I went to restart it this morning, the server struggled a bit, but I eventually got to the web interface to start the various iocage services on the different volumes. (There are two: one dedicated to video surveillance, the other to Nextcloud, SMB, mass storage, etc.)

To my surprise, volume 1 was offline, even though the disks and volumes are still present on the dedicated RAID controller.

I made a few attempts (shutting down the server, disconnecting the disks and reconnecting them, then restarting the server), but nothing worked.

So I decided to export the volume and re-import it.
The export went well, but the import gives me this error:

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 94, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 977, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
self.logger.error(
File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
await self.future
File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
rv = await self.method(*([self] + args))
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 973, in nf
return await f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1411, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1241, in call
return await self._call(
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1206, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1212, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1139, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1113, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)
 
Joined: Jan 4, 2014 | Messages: 1,644

Are you using h/w RAID?
 

danb35 | Hall of Famer | Joined: Aug 16, 2011 | Messages: 15,504
dedicated raid controller

But let's see if it's going to be recoverable. Run zpool import and post the output of that command.
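
From a shell on the server (the web UI shell or an SSH session), that's simply:

[CODE]
# With no arguments, zpool import only lists pools that are available
# for import; it does not import anything by itself.
zpool import
[/CODE]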
 

Min-3 | Cadet | Joined: Jun 19, 2021 | Messages: 6
Are you using h/w RAID?
Yes, for almost 8 months now, and I had never had the slightest problem until this morning.
To give a bit more detail on the RAID array that is currently causing the problem: it consists of three 2 TB WD drives intended for NAS use, and they have worked well; they are always recognized by the RAID controller without any error message.
 

Min-3 | Cadet | Joined: Jun 19, 2021 | Messages: 6

But let's see if it's going to be recoverable. Run zpool import and post the output of that command.
TrueNAS reads the virtual disks presented by the RAID controller perfectly well, though. Even the RAID 1 dedicated to video surveillance poses no problem :/
So I don't see the connection with a driver error...
 

danb35 | Hall of Famer | Joined: Aug 16, 2011 | Messages: 15,504
I'm not going to argue the point with you. You're using a dangerous and unsupported configuration. Had you bothered to ask, or read any of the copious documentation surrounding hardware choices, you would have known better--but you decided to head off on your own with this hardware. It's your data of course, and your system may be case # 792 of "don't use hardware RAID". But, as I said earlier and you seem to have ignored:
But let's see if it's going to be recoverable. Run zpool import and post the output of that command.
 

Min-3 | Cadet | Joined: Jun 19, 2021 | Messages: 6
I'm not going to argue the point with you. You're using a dangerous and unsupported configuration. Had you bothered to ask, or read any of the copious documentation surrounding hardware choices, you would have known better--but you decided to head off on your own with this hardware. It's your data of course, and your system may be case # 792 of "don't use hardware RAID". But, as I said earlier and you seem to have ignored:
I see, I see :/
(It's a salvaged server; since I don't have the means to invest in a new machine, I had to fall back on what I could grab in an emergency.)

Here is the output of zpool import:

root@futurhome[~]# zpool import
pool: FH_R5_Vol1
id: 2797074274577963080
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

FH_R5_Vol1 ONLINE
gptid/5ca08b3f-6b98-11eb-98a0-782bcb0783e6 ONLINE
root@futurhome[~]# zpool import
root@futurhome[~]# zpool import 2797074274577963080
cannot import 'FH_R5_Vol1': I/O error
Recovery is possible, but will result in some data loss.
Returning the pool to its state as of Sat Jun 19 11:03:00 2021
should correct the problem. Approximately 19 seconds of data
must be discarded, irreversibly. After rewind, several
persistent user-data errors will remain. Recovery can be attempted
by executing 'zpool import -F FH_R5_Vol1'. A scrub of the pool
is strongly recommended after recovery.
root@futurhome[~]# zpool import -F FH_R5_Vol1
 
Joined: Jan 4, 2014 | Messages: 1,644
For readability, please surround your code block with [CODE] and [/CODE]. When adding code blocks to a post, you can use this menu option...

[Screenshot: tn11.jpg, showing the code-block option in the post editor]
 

Min-3 | Cadet | Joined: Jun 19, 2021 | Messages: 6
[CODE]
root@futurhome[~]# zpool import
   pool: FH_R5_Vol1
     id: 2797074274577963080
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        FH_R5_Vol1                                    ONLINE
          gptid/5ca08b3f-6b98-11eb-98a0-782bcb0783e6  ONLINE
root@futurhome[~]# zpool import
root@futurhome[~]# zpool import 2797074274577963080
cannot import 'FH_R5_Vol1': I/O error
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of Sat Jun 19 11:03:00 2021
        should correct the problem.  Approximately 19 seconds of data
        must be discarded, irreversibly.  After rewind, several
        persistent user-data errors will remain.  Recovery can be attempted
        by executing 'zpool import -F FH_R5_Vol1'.  A scrub of the pool
        is strongly recommended after recovery.
root@futurhome[~]# zpool import -F FH_R5_Vol1
[/CODE]

Like that?
 

danb35 | Hall of Famer | Joined: Aug 16, 2011 | Messages: 15,504
So at this point, it appears your pool has successfully imported, though since you're using hardware RAID, TrueNAS/ZFS has no way to correct the data errors that are now present in your pool. Next thing to do (as the message you quoted told you) is run zpool scrub FH_R5_Vol1. Even though this won't be able to fix any errors (since as far as ZFS is concerned, you have no redundancy), you'll at least know where they are.
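
Roughly, using the pool name from your output above:

[CODE]
# Start the scrub; it runs in the background.
zpool scrub FH_R5_Vol1

# Check progress while it runs and, once it completes,
# list any files with unrecoverable errors.
zpool status -v FH_R5_Vol1
[/CODE]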
 

Min-3 | Cadet | Joined: Jun 19, 2021 | Messages: 6
I have done the scrub, and I've located the data: when I connect to my server via FileZilla, FH_R5_Vol1 is at the root.
Thank you for your very helpful help. I'll export the users' data and the iocage data, roughly along the lines sketched below.
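
Probably something like this, copying everything off to another disk first (the /mnt/backup destination is only an example, not a real mount point on my system):

[CODE]
# Copy the users' data and the iocage data off this pool before doing
# anything else with it. /mnt/backup is just a placeholder destination;
# the paths would need to be adjusted to wherever the data will go.
rsync -avh /mnt/FH_R5_Vol1/ /mnt/backup/FH_R5_Vol1/
[/CODE]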
 