Error importing pool

Joined Oct 25, 2021 · Messages: 9
When I try to import my ZFS pool I get the following error:

Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 94, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
    zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
  File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
    await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1248, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1213, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1219, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1146, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1120, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)


zpool history:

Code:
root@truenas[~]# zpool history
History for 'boot-pool':
2021-09-29.01:47:45 zpool create -f -o cachefile=/tmp/zpool.cache -O mountpoint=none -O atime=off -O canmount=off boot-pool ada2p2
2021-09-29.01:47:45 zfs set compression=on boot-pool
2021-09-29.01:47:45 zfs create -o canmount=off boot-pool/ROOT
2021-09-29.01:47:48 zfs create -o mountpoint=legacy boot-pool/ROOT/default
2021-09-29.01:48:38 zpool set bootfs=boot-pool/ROOT/default boot-pool
2021-09-29.01:49:27 zfs set beadm:nickname=default boot-pool/ROOT/default
2021-09-29.01:49:27 zfs snapshot -r boot-pool/ROOT/default@2021-09-29-05:49:27
2021-09-29.01:49:27 zfs clone -o canmount=off -o mountpoint=legacy boot-pool/ROOT/default@2021-09-29-05:49:27 boot-pool/ROOT/Initial-Install
2021-09-29.01:49:32 zfs set beadm:keep=True boot-pool/ROOT/Initial-Install
2021-09-29.01:50:11  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-09-29.01:50:11  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-09-29.01:50:11  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-09-29.01:50:11  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-6db47e476a87416dae61f74d2047f68f
2021-09-29.01:50:11  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-6db47e476a87416dae61f74d2047f68f
2021-09-29.01:50:11  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-6db47e476a87416dae61f74d2047f68f
2021-09-29.01:50:12  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-09-29.01:50:12  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-09-29.04:31:16 zfs destroy -r boot-pool/.system
2021-09-29.04:42:43 zfs set beadm:nickname=Initial-Install boot-pool/ROOT/Initial-Install
2021-09-29.04:46:01  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-09-29.04:46:01  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-09-29.04:46:01  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-09-29.04:46:01  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-6db47e476a87416dae61f74d2047f68f
2021-09-29.04:46:01  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-6db47e476a87416dae61f74d2047f68f
2021-09-29.04:46:01  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-6db47e476a87416dae61f74d2047f68f
2021-09-29.04:46:01  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-09-29.04:46:02  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-09-29.07:50:02 zfs destroy -r boot-pool/.system
2021-09-29.07:53:36  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-09-29.07:53:36  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-09-29.07:53:36  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-09-29.07:53:36  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-6db47e476a87416dae61f74d2047f68f
2021-09-29.07:53:36  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-6db47e476a87416dae61f74d2047f68f
2021-09-29.07:53:36  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-6db47e476a87416dae61f74d2047f68f
2021-09-29.07:53:36  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-09-29.07:53:38  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-09-29.11:26:02 zfs destroy -r boot-pool/.system
2021-09-29.11:34:53  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-09-29.11:34:53  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-09-29.11:34:53  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-09-29.11:34:53  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-6db47e476a87416dae61f74d2047f68f
2021-09-29.11:34:53  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-6db47e476a87416dae61f74d2047f68f
2021-09-29.11:34:53  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-6db47e476a87416dae61f74d2047f68f
2021-09-29.11:34:53  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-09-29.11:34:54  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-10-15.08:12:45 zfs destroy -r boot-pool/.system
2021-10-15.08:17:02  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-10-15.08:17:02  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-10-15.08:17:02  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-10-15.08:17:02  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-6db47e476a87416dae61f74d2047f68f
2021-10-15.08:17:02  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-6db47e476a87416dae61f74d2047f68f
2021-10-15.08:17:02  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-6db47e476a87416dae61f74d2047f68f
2021-10-15.08:17:02  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-10-15.08:17:04  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-10-15.08:37:25 zfs destroy -r boot-pool/.system
2021-10-15.09:07:24 zfs set org.freebsd.ioc:active=no boot-pool
2021-10-16.03:45:06  zpool scrub boot-pool
2021-10-20.00:54:14  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-10-20.00:54:14  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-10-20.00:54:14  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-10-20.00:54:14  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-6db47e476a87416dae61f74d2047f68f
2021-10-20.00:54:14  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-6db47e476a87416dae61f74d2047f68f
2021-10-20.00:54:14  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/configs-6db47e476a87416dae61f74d2047f68f
2021-10-20.00:54:14  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/webui
2021-10-20.00:54:18  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/services
2021-10-22.00:03:56 zfs destroy -r boot-pool/.system
2021-10-23.03:45:06  zpool scrub boot-pool
2021-10-25.02:40:42  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system
2021-10-25.02:40:42  zfs create -o mountpoint=legacy -o readonly=off -o quota=1G boot-pool/.system/cores
2021-10-25.02:40:42  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/samba4
2021-10-25.02:40:42  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/syslog-6db47e476a87416dae61f74d2047f68f
2021-10-25.02:40:42  zfs create -o mountpoint=legacy -o readonly=off boot-pool/.system/rrd-6db47e4


Not sure what this means, as all 5 HDDs in my raidz2 pass S.M.A.R.T. tests.

Before this I was getting a bunch of errors about "vm_fault: pager read error", along with checksum errors and df_complext-reserved errors. Then the TrueNAS VM stopped working, so I restarted it, and now I can't import the pool.
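
For reference, per-disk SMART health can be checked with smartctl; this is just a sketch, and da0 is an example device name:

Code:
# Quick overall-health verdict for one pool member (repeat for each disk)
smartctl -H /dev/da0
# Full attribute table and self-test log for that disk
smartctl -a /dev/da0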
 

sretalla (Moderator · Joined Jan 1, 2016 · Messages: 9,703)
Let's have a look at the output of zpool import and dmesg | grep -i cam.
 
Joined Oct 25, 2021 · Messages: 9
Sorry for the late reply, but I had to send my motherboard in for warranty service due to bad RAM slots. I finally got my system up and running again.

Code:
root@truenas[~]# zpool import
   pool: zfs
     id: 10234708749956824445
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zfs                                             ONLINE
          raidz2-0                                      ONLINE
            vtbd3p2                                     ONLINE
            vtbd4p2                                     ONLINE
            gptid/9630c0b0-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            vtbd1p2                                     ONLINE
            vtbd2p2                                     ONLINE

root@truenas[~]# dmesg | grep -i cam
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
Root mount waiting for: CAM usbus0
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
root@truenas[~]#
 

sretalla (Moderator)
That prompts me to ask whether you're running TrueNAS in a virtual machine... Please give more details about your hardware.
 
Joined Oct 25, 2021 · Messages: 9
Yes, I am. I have an unRAID 6.9.2 server with TrueNAS in a VM for now. The host machine is an AMD Threadripper PRO with 128 GB of RAM. The 5 HDDs that make up the pool are attached to a SAS2 LSI HBA card that is passed through to the VM.
 

sretalla (Moderator)
I wouldn't normally expect to see disks referenced as vtbdXX if they were attached via an HBA passed directly into the machine... that smells of virtual disks to me.
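
One quick way to check from inside the VM (a sketch; exact device and driver names depend on the setup) is to see whether the disks show up on the CAM layer at all and whether the LSI controller is even visible to the guest:

Code:
# CAM-attached devices (da*/ada*); virtio disks (vtbd*) will not appear here
camcontrol devlist
# Look for the LSI HBA in the guest's PCI device list
pciconf -lv | grep -B3 -i sas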

Fair warning: it looks to me like your setup is not OK.

If you're satisfied it's what you want and have read this: https://www.truenas.com/community/t...ative-for-those-seeking-virtualization.26095/ and this: https://www.truenas.com/community/t...ide-to-not-completely-losing-your-data.12714/ it's your data and your choice.

It looks to me from those outputs that the pool might just be importable with zpool import zfs.

If that were to succeed, run zpool export zfs and then try the import again from the GUI.
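
In shell terms, the suggested sequence would be something like this (just a sketch):

Code:
# Try the import from the CLI first
zpool import zfs
# If that succeeds, export it again so the import can be redone from the GUI
zpool export zfs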
 

sretalla (Moderator)
If you try the import again and then run the dmesg command again, we may see something more interesting about where the error actually is.
 
Joined Oct 25, 2021 · Messages: 9
Code:
root@truenas[~]# zpool import zfs
cannot import 'zfs': I/O error
        Destroy and re-create the pool from
        a backup source.
root@truenas[~]# dmesg | grep -i cam
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
Root mount waiting for: CAM usbus0
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
root@truenas[~]#


Code:
root@truenas[~]# zpool import
   pool: zfs
     id: 10234708749956824445
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zfs                                             ONLINE
          raidz2-0                                      ONLINE
            gptid/95ec0cc1-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/962c3692-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/9630c0b0-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/962e3326-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/963454ab-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
root@truenas[~]#
 

sretalla (Moderator)
Try zpool import 10234708749956824445

I wonder if you're using a reserved word ("zfs") as your pool name and that's what's causing the issue.
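
If the name does turn out to be the problem, zpool import can also give the pool a new name during the import; a sketch, where "tank" is just an example name:

Code:
# Import by numeric id and rename the pool in the same step
zpool import 10234708749956824445 tank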
 
Joined Oct 25, 2021 · Messages: 9
The HBA card is stubbed as recommended, but I still get all the same errors:

Code:
Last login: Thu Dec 16 09:53:14 on pts/1
FreeBSD 12.2-RELEASE-p11 75566f060d4(HEAD) TRUENAS

        TrueNAS (c) 2009-2021, iXsystems, Inc.
        All rights reserved.
        TrueNAS code is released under the modified BSD license with some
        files copyrighted by (c) iXsystems, Inc.

        For more information, documentation, help or support, go here:
        http://truenas.com
Welcome to TrueNAS

Warning: settings changed through the CLI are not written to
the configuration database and will be reset on reboot.

root@truenas[~]# zpool import
   pool: zfs
     id: 10234708749956824445
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zfs                                             ONLINE
          raidz2-0                                      ONLINE
            gptid/95ec0cc1-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/962c3692-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/9630c0b0-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/962e3326-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/963454ab-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
root@truenas[~]# zpool import 10234708749956824445
cannot import 'zfs': I/O error
        Destroy and re-create the pool from
        a backup source.
root@truenas[~]# dmesg | grep -i cam
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
Root mount waiting for: CAM usbus0
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
root@truenas[~]#
 

sretalla (Moderator)
I'm still not clear on your setup, or why those disks were showing up with odd names.

I think you need to consider running on bare metal to test whether your pool can be imported, and only go back to virtualizing once you know your pool is good.
 
Joined Oct 25, 2021 · Messages: 9
Nope, same result on bare metal.

Code:
FreeBSD 12.2-RELEASE-p11 75566f060d4(HEAD) TRUENAS

        TrueNAS (c) 2009-2021, iXsystems, Inc.
        All rights reserved.
        TrueNAS code is released under the modified BSD license with some
        files copyrighted by (c) iXsystems, Inc.

        For more information, documentation, help or support, go here:
        http://truenas.com
Welcome to TrueNAS

Warning: settings changed through the CLI are not written to
the configuration database and will be reset on reboot.

root@truenas[~]# zpool import zfs
cannot import 'zfs': I/O error
        Destroy and re-create the pool from
        a backup source.
root@truenas[~]# dmesg | grep -i cam
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
Root mount waiting for: CAM usbus1 usbus2 usbus3
Root mount waiting for: CAM usbus1 usbus2
Root mount waiting for: CAM usbus1 usbus2
Root mount waiting for: CAM
Root mount waiting for: CAM
Root mount waiting for: CAM
Root mount waiting for: CAM
Root mount waiting for: CAM
  Origin="AuthenticAMD"  Id=0x830f10  Family=0x17  Model=0x31  Stepping=0
root@truenas[~]# zpool import
   pool: zfsbackupone
     id: 3397902790626489819
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zfsbackupone                                    ONLINE
          mirror-0                                      ONLINE
            gpt/zfs-73488e3abd91f008                    ONLINE
            gptid/f6940dff-f68c-b444-bef0-5c6abe21e583  ONLINE

   pool: zfs
     id: 10234708749956824445
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zfs                                             ONLINE
          raidz2-0                                      ONLINE
            gptid/95ec0cc1-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/962c3692-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/9630c0b0-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/962e3326-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
            gptid/963454ab-20ff-11ec-8d39-2bbba9ae6bee  ONLINE
root@truenas[~]#
 
Joined Oct 25, 2021 · Messages: 9
Any ideas? I still can't import the pool on bare metal.

The data is there, as confirmed by a few ZFS recovery tools, but unless I buy the software I can't recover it.
 

sretalla (Moderator)
OK, a few things here...

If there's an I/O error preventing the pool from importing, SMART is all clean (as you reported, but we haven't verified), and CAM isn't logging anything about failing SCSI/ATA commands, then maybe there's something else we're missing in dmesg, so let's have a look at it this way:

Code:
dmesg | tail -n 50

"The data is there, as confirmed by a few ZFS recovery tools, but unless I buy the software I can't recover it."
I guess you're talking about Klennet.

There's also an "open source" option, which may or may not help with recovery:

https://github.com/Stefan311/ZfsSpy
 