SOLVED Replacement Install & Failed Pool Import

dcholth

Dabbler
Joined
Feb 14, 2017
Messages
12
Earlier this week I noticed that my server was not showing up online. I plugged a monitor into the system, rebooted, and got this message:
[Attached screenshot of the boot error: 1641420264157.png]


After some digging, I came across this post: https://www.truenas.com/community/threads/freenas-wont-boot.19476/ and decided the best course of action was to re-install TrueNAS and then reimport my data pool.

After a fresh installation on a USB device, I got the server up and running and went to import my pool again. I could see in the Storage/Pool section that my original pool "bigdata" was offline. Following the install/upgrade instructions, I exported the pool, keeping the data and share configurations intact, and confirmed the export.

I then went to add and import an existing pool, and I get the following error when I attempt to import it:

Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 94, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 979, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
    zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
  File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 975, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 1421, in import_pool
    await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1256, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1221, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1227, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1154, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1128, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)


I did find this thread that suggested checking the output of zpool import, but I honestly can't figure out where to run that.
 

dcholth

Dabbler
Joined
Feb 14, 2017
Messages
12
Also, to be clear, and I don't know if this is possible... I don't care at all about the underlying previous configurations, I just want to get my data back. I wasn't doing anything fancy in my previous configuration - a couple of jails and shares, but that's it. I just want my data back, so if importing it another way (directly from the disks?) is possible, that's totally fine.

I was likely in a bit over my head when I went with TrueNAS :/
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
It doesn't look great for you with that error message... possibly bad cabling to one or more of the disks, but let's have a look at zpool import and see what the pool looks like
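Something like this, run as root from the web UI Shell or an SSH session, will show what ZFS thinks of the pool without touching it. The device names below are only examples; substitute whatever camcontrol lists on your system:

```shell
# Show importable pools and the status of their member disks
# (read-only scan; this does not actually import anything)
zpool import

# Optionally sanity-check the disks themselves before forcing anything
camcontrol devlist            # enumerate attached disks on FreeBSD/TrueNAS CORE
smartctl -a /dev/ada0         # SMART health for one disk; repeat per disk
```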
 

dcholth

Dabbler
Joined
Feb 14, 2017
Messages
12
All of the disks appear ONLINE

Not sure if this is helpful, but I looked into the zpool upgrade -v


Code:
root@freenas:~ # zpool upgrade -v
This system supports ZFS pool feature flags.

The following features are supported:

FEAT DESCRIPTION
-------------------------------------------------------------
async_destroy                         (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj                           (read-only compatible)
     Snapshots use less space.
lz4_compress
     LZ4 compression algorithm support.
multi_vdev_crash_dump
     Crash dumps to multiple vdev pools.
spacemap_histogram                    (read-only compatible)
     Spacemaps maintain space histograms.
enabled_txg                           (read-only compatible)
     Record txg at which a feature is enabled
hole_birth
     Retain hole birth txg for more precise zfs send
extensible_dataset
     Enhanced dataset functionality, used by other features.
embedded_data
     Blocks which compress very well use even less space.
bookmarks                             (read-only compatible)
     "zfs bookmark" command
filesystem_limits                     (read-only compatible)
     Filesystem and snapshot limits.
large_blocks
     Support for blocks larger than 128KB.
large_dnode
     Variable on-disk size of dnodes.
sha512
     SHA-512/256 hash algorithm.
skein
     Skein hash algorithm.
userobj_accounting                    (read-only compatible)
     User/Group object accounting.
encryption
     Support for dataset level encryption
project_quota                         (read-only compatible)
     space/object accounting based on project ID.
device_removal
     Top-level vdevs can be removed, reducing logical pool size.
obsolete_counts                       (read-only compatible)
     Reduce memory used by removed devices when their blocks are freed or remapped.
zpool_checkpoint                      (read-only compatible)
     Pool state can be checkpointed, allowing rewind later.
spacemap_v2                           (read-only compatible)
     Space maps representing large segments are more efficient.
allocation_classes                    (read-only compatible)
     Support for separate allocation classes.
resilver_defer                        (read-only compatible)
     Support for deferring new resilvers when one is already running.
bookmark_v2
     Support for larger bookmarks
redaction_bookmarks
     Support for bookmarks which store redaction lists for zfs redacted send/recv.
redacted_datasets
     Support for redacted datasets, produced by receiving a redacted zfs send stream.
bookmark_written
     Additional accounting, enabling the written#<bookmark> property(space written since a bookmark), and estimates of send stream sizes for incrementals from bookmarks.
log_spacemap                          (read-only compatible)
     Log metaslab changes on a single spacemap and flush them periodically.
livelist                              (read-only compatible)
     Improved clone deletion performance.
device_rebuild                        (read-only compatible)
     Support for sequential device rebuilds
zstd_compress
     zstd compression algorithm support.

The following legacy versions are also supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 

dcholth

Dabbler
Joined
Feb 14, 2017
Messages
12
I think my only option is to zpool import bigdata -f and hope for the best... I'm going to let this simmer a day or two to see if anyone else comes up with ideas, but having had a chance to really dig into the documentation, that's all I'm finding for potentially getting the data back.

Luckily, I have used Backblaze to back up all the critical stuff.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
I looked into the zpool upgrade -v
I would stay away from upgrading anything until you work out what's going on with your pool.

I think my only option is to zpool import bigdata -f and hope for the best
You need the switch before the pool name, so:

zpool import -f bigdata

That may do something... you didn't show us the output from zpool import, so I can't comment on how likely you are to have success.
 

dcholth

Dabbler
Joined
Feb 14, 2017
Messages
12
@sretalla The zpool import command was posted above but I'll copy it here again.


Code:
root@freenas:~ # zpool import
   pool: bigdata
     id: 12543930625352282521
  state: ONLINE
status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
        some features will not be available without an explicit 'zpool upgrade'.
 config:

        bigdata                                         ONLINE
          raidz1-0                                      ONLINE
            gptid/126197b0-f163-11e6-b366-305a3a7e6a27  ONLINE
            gptid/1342a951-f163-11e6-b366-305a3a7e6a27  ONLINE
            gptid/1426258d-f163-11e6-b366-305a3a7e6a27  ONLINE


There are only three HDDs in the system, so it looks to me like they're all online and should be functioning...
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
The zpool import command was posted above but I'll copy it here again.
Maybe I'm going blind, but I still don't see it in the posts above... I only see the output from zpool upgrade... never mind, I see it now: it was posted as a picture, which is why I missed it.

As already noted, the command should work in that case.

To see it in the GUI, you will then need to export it again (zpool export bigdata) and import it from the GUI.
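Putting the whole sequence discussed in this thread together, it's a sketch like this (assuming the pool imports cleanly once forced; pool name bigdata as above):

```shell
# 1. Force-import from the CLI; the -f switch goes before the pool name
zpool import -f bigdata

# 2. Confirm the pool and its datasets look healthy and are mounted
zpool status bigdata
zfs list -r bigdata

# 3. Export again so the TrueNAS middleware can take ownership of it
zpool export bigdata

# 4. Finally, in the web UI: Storage > Pools > Add > Import an existing pool
```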
 

dcholth

Dabbler
Joined
Feb 14, 2017
Messages
12
Thanks @sretalla, it appears I've recovered all my data! Thanks especially for the last tip about getting it to show up in the GUI.

From the forced import, it looks like something happened on Jan 4 that caused it to go down.
 