Cannot import old pool after fresh install of TrueNAS 12

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
I was running FreeNAS 9.10 for 10 years on an HP MicroServer N36L (Athlon II, 1.3 GHz dual-core, 8 GB of RAM) and decided to do a fresh install of TrueNAS 12. (TrueNAS-12.1-MASTER-20210.)

After the installation I tried to import my old pool via WebGUI but I received the following error:


Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 94, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.8/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 371, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 391, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/zfs.py", line 365, in import_pool
    zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
  File "libzfs.pyx", line 1095, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1123, in libzfs.ZFS.__import_pool
libzfs.ZFSException: '/data/zfs' is not a valid directory
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 367, in run
    await self.future
  File "/usr/local/lib/python3.8/site-packages/middlewared/job.py", line 403, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 1411, in import_pool
    await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1203, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1209, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1136, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ("'/data/zfs' is not a valid directory",)


I tried a service middlewared restart followed by another import attempt, but without any success.
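(For anyone skimming tracebacks like this: the actual root cause is the innermost libzfs.ZFSException line inside the quoted remote traceback, not the repeated wrapper frames. A quick way to pull it out of a saved copy of the error text; middleware-error.txt is a hypothetical filename for illustration:)

```shell
# Print the first (innermost) libzfs.ZFSException line from a saved
# middleware traceback. middleware-error.txt is a hypothetical file
# containing the pasted error text above.
grep -m1 'libzfs.ZFSException' middleware-error.txt
```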
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Try

Code:
zpool import
# copy pool name
zpool import -o altroot=/mnt <name>
zpool export <name>


Then try from the UI again.
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
The experience in the GUI was different this time and I think I got one step further, but I still ended up with the same error:
Code:
libzfs.ZFSException: ("'/data/zfs' is not a valid directory",)
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
This was the output of the zpool import:

Code:
root@truenas[~]# zpool import
   pool: ZFSPool
     id: 6111874202152635090
  state: ONLINE
status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    ZFSPool                                         ONLINE
      mirror-0                                      ONLINE
        gptid/974aaf31-e0aa-11ec-a645-3cd92b02910c  ONLINE
        gptid/bc1814a4-10fd-11e8-aabb-3cd92b02910c  ONLINE
      mirror-1                                      ONLINE
        gptid/2647ce4b-ba1c-11e5-803d-3cd92b02910c  ONLINE
        gptid/27200e0e-ba1c-11e5-803d-3cd92b02910c  ONLINE
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
And the
Code:
 zpool import -o altroot=/mnt ZFSPool
 zpool export ZFSPool


How did that go?
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
There was no output; this is the whole thing:

Code:
root@truenas[~]# zpool import
   pool: ZFSPool
     id: 6111874202152635090
  state: ONLINE
status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
    some features will not be available without an explicit 'zpool upgrade'.
 config:

    ZFSPool                                         ONLINE
      mirror-0                                      ONLINE
        gptid/974aaf31-e0aa-11ec-a645-3cd92b02910c  ONLINE
        gptid/bc1814a4-10fd-11e8-aabb-3cd92b02910c  ONLINE
      mirror-1                                      ONLINE
        gptid/2647ce4b-ba1c-11e5-803d-3cd92b02910c  ONLINE
        gptid/27200e0e-ba1c-11e5-803d-3cd92b02910c  ONLINE
root@truenas[~]# zpool import -o altroot=/mnt ZFSPool
root@truenas[~]# zpool export ZFSPool
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Ok, let's try
Code:
zpool import -o altroot=/mnt ZFSPool
zfs set mountpoint=/mnt/ZFSPool ZFSPool
zpool export ZFSPool


Then the UI again.
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
Hmm, the second command gave me an error:

Code:
root@truenas[~]# zpool import -o altroot=/mnt ZFSPool
root@truenas[~]# zfs set mountpoint=/mnt/ZFSPool ZFSPool
cannot unmount '/mnt/ZFSPool/jails_2': unmount failed
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Weird. Please, with the pool still imported, run:
Code:
zfs get -r mountpoint ZFSPool
zfs unmount -f ZFSPool/jails_2


And then retry:
Code:
zfs set mountpoint=/mnt/ZFSPool ZFSPool
zpool export ZFSPool
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
Now I got no error after setting the mountpoint. However, when I tried to import via the GUI I got this again:
Code:
libzfs.ZFSException: ("'/data/zfs' is not a valid directory",)
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Some dataset seems to have a mountpoint of /data/zfs set. So please import in the CLI again and post the output of
Code:
zfs get -r mountpoint ZFSPool

as I asked in my last message. Thanks.
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
Sorry, I was not sure if I should post it.


Code:
root@truenas[~]# zfs get -r mountpoint ZFSPool
NAME                                                      PROPERTY    VALUE                                             SOURCE
ZFSPool                                                   mountpoint  /mnt/ZFSPool                                      default
ZFSPool/.system                                           mountpoint  legacy                                            local
ZFSPool/.system/configs-e2eccb3703ad46d2b19f2e4809443384  mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/.system/configs-ea02119b0df4495ba64ec1dbdd61ed06  mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/.system/cores                                     mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/.system/rrd-e2eccb3703ad46d2b19f2e4809443384      mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/.system/rrd-ea02119b0df4495ba64ec1dbdd61ed06      mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/.system/samba4                                    mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/.system/syslog-e2eccb3703ad46d2b19f2e4809443384   mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/.system/syslog-ea02119b0df4495ba64ec1dbdd61ed06   mountpoint  legacy                                            inherited from ZFSPool/.system
ZFSPool/BTSync-Photos                                     mountpoint  /mnt/ZFSPool/BTSync-Photos                        default
ZFSPool/Backup                                            mountpoint  /mnt/ZFSPool/Backup                               default
ZFSPool/Downloads                                         mountpoint  /mnt/ZFSPool/Downloads                            default
ZFSPool/Downloads-Alt                                     mountpoint  /mnt/ZFSPool/Downloads-Alt                        default
ZFSPool/Media-Music                                       mountpoint  /mnt/ZFSPool/Media-Music                          default
ZFSPool/Media-Photo                                       mountpoint  /mnt/ZFSPool/Media-Photo                          default
ZFSPool/Media-Video                                       mountpoint  /mnt/ZFSPool/Media-Video                          default
ZFSPool/SonjaBackup                                       mountpoint  /mnt/ZFSPool/SonjaBackup                          default
ZFSPool/TorrentFiles                                      mountpoint  /mnt/ZFSPool/TorrentFiles                         default
ZFSPool/jails                                             mountpoint  /mnt/ZFSPool/jails                                default
ZFSPool/jails/.warden-template-pluginjail                 mountpoint  /mnt/ZFSPool/jails/.warden-template-pluginjail    default
ZFSPool/jails/.warden-template-pluginjail@clean           mountpoint  -                                                 -
ZFSPool/jails/btsync_1                                    mountpoint  /mnt/ZFSPool/jails/btsync_1                       default
ZFSPool/jails/couchpotato_1                               mountpoint  /mnt/ZFSPool/jails/couchpotato_1                  default
ZFSPool/jails/plexmediaserver_1                           mountpoint  /mnt/ZFSPool/jails/plexmediaserver_1              default
ZFSPool/jails/sickbeard_1                                 mountpoint  /mnt/ZFSPool/jails/sickbeard_1                    default
ZFSPool/jails/sickrage_1                                  mountpoint  /mnt/ZFSPool/jails/sickrage_1                     default
ZFSPool/jails/transmission_1                              mountpoint  /mnt/ZFSPool/jails/transmission_1                 default
ZFSPool/jails/transmission_1@auto-20170124.1201-2w        mountpoint  -                                                 -
ZFSPool/jails/transmission_1@auto-20170124.1301-2w        mountpoint  -                                                 -
ZFSPool/jails/transmission_1@auto-20170124.1401-2w        mountpoint  -                                                 -
ZFSPool/jails/transmission_1@auto-20170124.1501-2w        mountpoint  -                                                 -
ZFSPool/jails/transmission_1@auto-20170124.1601-2w        mountpoint  -                                                 -
ZFSPool/jails/transmission_1@auto-20170124.1701-2w        mountpoint  -                                                 -
ZFSPool/jails_2                                           mountpoint  /mnt/ZFSPool/jails_2                              default
ZFSPool/jails_2/.warden-template-pluginjail               mountpoint  /mnt/ZFSPool/jails_2/.warden-template-pluginjail  local
ZFSPool/jails_2/.warden-template-pluginjail@clean         mountpoint  -                                                 -
ZFSPool/jails_2/couchpotato_1                             mountpoint  /mnt/ZFSPool/jails_2/couchpotato_1                default
ZFSPool/jails_2/plexmediaserver_1                         mountpoint  /mnt/ZFSPool/jails_2/plexmediaserver_1            default
ZFSPool/jails_2/resilio_1                                 mountpoint  /mnt/ZFSPool/jails_2/resilio_1                    default
ZFSPool/jails_2/sickrage_1                                mountpoint  /mnt/ZFSPool/jails_2/sickrage_1                   default
ZFSPool/jails_2/transmission_1                            mountpoint  /mnt/ZFSPool/jails_2/transmission_1               default
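In listings like the one above, the thing to look for is any dataset whose mountpoint is neither under /mnt, nor legacy, nor "-" (snapshots have no mountpoint). A small filter sketch, assuming the zfs get -r -H -o name,value mountpoint output has been saved to a hypothetical mountpoints.txt:

```shell
# Keep only entries with a suspicious mountpoint: not under /mnt,
# not 'legacy', and not '-'. mountpoints.txt is a hypothetical capture
# of 'zfs get -r -H -o name,value mountpoint ZFSPool'.
awk '$2 !~ /^\/mnt/ && $2 != "legacy" && $2 != "-" { print }' mountpoints.txt
```

In this thread the listing came back clean, which is what pointed the diagnosis away from the pool itself.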
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
OK ... this is probably not about your pool. /data/zfs seems to be the location where TrueNAS stores its zpool cache files. Why it does not exist on your system - no idea. But here on mine it is a plain directory on the boot pool.

So export again if you have not done so already, then run
Code:
mkdir -p /data/zfs

before trying the import from the UI again.
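What the traceback was really complaining about is exactly this missing directory; a minimal check-and-create sketch, with the path in a variable so it is obvious that /data/zfs is the assumption taken from the error message:

```shell
# /data/zfs is the path named in the libzfs error above; TrueNAS keeps
# its zpool cache files there. Recreate it if it has gone missing.
cachedir=/data/zfs
[ -d "$cachedir" ] || mkdir -p "$cachedir"
ls -ld "$cachedir"
```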

BTW: if this finally works out I would go straight to TN 13. No reason to start with EOL software if you are doing a fresh install, anyway.

Fingers crossed ...
Patrick
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
Yes! It worked, thank you so much!
The reason I installed TN 12 was that I only have 8 GB of RAM and I was under the impression that TN 13 requires more.

I also got the message that I should upgrade my pool... not sure if I should, though.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Only if you are confident you will stay on your current version. But you are not going back to 11, are you?

Seriously, TN 13 will not require significantly more memory than TN 12. 8 GB is tight for both but will probably work.
So I'd recommend upgrading your pool on 12, then trying the upgrade to 13 without upgrading the pool, so you can roll back.

HTH,
Patrick
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
OK, I will do that.
Is there a way to upgrade to 13 via the WebGUI, or do I need to boot from an installer?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Web UI - easy peasy.
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
Sorry, but I must be doing something wrong... when I go to Updates there are no updates available, and I also cannot change the train.

Current Train: TrueNAS-12.1-Nightlies - Release Train for TrueNAS 13.0 [release]
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
That's weird. Changing trains and upgrading should be possible. Possibly that's because you installed a 12.1 nightly instead of a 12.0 release. Make sure you pick a release image for 13, preferably 13.0-U5.
 

zimon

Contributor
Joined
Jan 8, 2016
Messages
134
So the manual update via the WebGUI stopped at 67% and nothing happened... I guess I have to do it the old "boot from USB stick" way.
 