Error exporting/disconnecting pools on TrueNAS 12.0

thalf

Dabbler
Joined
Mar 1, 2014
Messages
19
Hi, I don't know if this is a bug or if I'm doing something weird, but anyway I'm having problems exporting/disconnecting pools. Basically, I can't do it.

This is what I do:
1. Under Storage -> Pools I select Add and then "Create new pool" and click "CREATE POOL".
2. In the Pool Manager I give the new pool a unique name, add two disks (mirror raid), and add both disks as Data VDevs. I then click "CREATE".
3. The pool is created and I can see it under Storage -> Pools.
4. Under Storage -> Pools I click the settings icon for my new pool, and select "Export/Disconnect".
5. I get a pop-up message saying "Error exporting/disconnecting pool. no path specified" and when I click "More info..." I get this:
Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 1634, in attachments
    return await self.middleware.call('pool.dataset.attachments_with_path', pool['path'])
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 3714, in attachments_with_path
    for attachment in await delegate.query(path, True):
  File "/usr/local/lib/python3.8/site-packages/middlewared/common/attachment/__init__.py", line 97, in query
    if await self.is_child_of_path(resource, path):
  File "/usr/local/lib/python3.8/site-packages/middlewared/common/attachment/__init__.py", line 132, in is_child_of_path
    return is_child(resource[self.path_field], path)
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/path.py", line 11, in is_child
    rel = os.path.relpath(child, parent)
  File "/usr/local/lib/python3.8/posixpath.py", line 453, in relpath
    raise ValueError("no path specified")
ValueError: no path specified
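
For what it's worth, the bottom of that traceback is plain stdlib behavior: `os.path.relpath` refuses an empty first argument. A minimal repro, nothing TrueNAS-specific, just Python 3.8's posixpath:

```python
import os.path

# posixpath.relpath raises when the "child" path is an empty string,
# which is exactly the "no path specified" message in the traceback.
try:
    os.path.relpath("", "/mnt/tank")
except ValueError as e:
    print(e)  # no path specified

# A normal call works fine:
print(os.path.relpath("/mnt/tank/data", "/mnt/tank"))  # data
```

So the middleware's `is_child(resource[self.path_field], path)` is being handed an empty string from somewhere, most likely a stored share or task whose path field is empty.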

I guess this is a bug, or have I missed some step? Even if I've missed something, I think the GUI should handle the situation more gracefully.

I'm running 12.0-U1, and the release notes for 12.0-U1.1 don't mention a fix for this.

Should I try to reinstall some python lib? If so, which one?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I tested the same thing on a pool under 12.0-U1 and was able to export the pool without issue. I don't think it's a bug.

Consider a reinstall.
 
Joined
Jan 18, 2017
Messages
525
I just attempted this on a VM running 12.0-U1: I exported a single-disk pool (destroying data and shares), then created a new mirrored pool and exported that as well. It was the only pool on that VM, so the system dataset was on the pool that was destroyed, and afterwards on the newly created one.
 

thalf

OK, thanks. Is there some repair functionality, or an easy way to reinstall individual packages (e.g. Python)? Reinstalling the whole system feels drastic, even though, yes, I can save the config and restore it after the reinstall. If a package is acting weird on Debian I can just do an "apt reinstall PACKAGENAME" and see if that solves the problem.
 

sretalla

You can't run pkg on the host, only in a jail.

Backing up the config, installing fresh, and restoring the config should be simple and quick if you haven't done a lot of crazy customization to the system.

Although you should perhaps consider how your boot pool became untrustworthy. Maybe a bad disk?
 

thalf

There are still problems.

I first saved the config, including encryption keys etc.

Then I downloaded the latest image from truenas.com (12.0-U1.1), put it on a USB stick, and installed it to a new USB stick. Booted it, logged in to the web GUI, went to System -> General and chose "UPLOAD CONFIG". Uploaded the saved config, and the system rebooted.

Now I can see and mount my old pools, user accounts seem to be as they should, and so on.

HOWEVER, going to Storage -> Pools, clicking the config icon next to the new pool and choosing export/disconnect, I immediately get the same "Error exporting/disconnecting pool. no path specified" popup and I get the exact same trace as above when I select "More info...".

What the &^#% do I do now when not even a reinstall helps? I guess there's something weird in my config, but I don't know how to debug that since it is, as far as I know, a SQLite database.

I've used FreeNAS since 2014 and updated a few times, always carrying the config forward to the new versions. So whatever is wrong with the config has, as far as I can tell, been introduced at some point by FreeNAS, and now TrueNAS barfs on it.

I haven't had any warnings about bad disks which could have caused the boot pool to become untrustworthy.

Any suggestions? Having to do a sqlite3 /data/freenas-v1.db feels very unappealing, but if someone can guide me through it...
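
If anyone does want to poke at the config DB, a read-only scan that makes no assumptions about iX's table names (it discovers path-like columns from the schema itself) might look like this. The filename is the one mentioned above; working on a copy is safer:

```python
import sqlite3

# Scan a copy of the config DB for rows whose path-like columns hold an
# empty string. Work on a copy (e.g. "cp /data/freenas-v1.db /tmp/cfg.db")
# so nothing can be modified by accident.
db = sqlite3.connect("/tmp/cfg.db")
db.row_factory = sqlite3.Row

tables = [r["name"] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
for table in tables:
    cols = [r["name"] for r in db.execute(f"PRAGMA table_info('{table}')")]
    for col in cols:
        if "path" not in col.lower():
            continue
        # Count rows where this path column exists but is empty.
        n = db.execute(
            f"SELECT COUNT(*) FROM '{table}' WHERE {col} = ''").fetchone()[0]
        if n:
            print(f"{table}.{col}: {n} empty row(s)")
db.close()
```

Any hit from a sharing or task table would be a candidate for the empty string that `relpath` is choking on.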
 

thalf

Hmmm, I might have found something.

I have six disk bays in my NAS chassis (Fractal Design Node 304). Up until now I've used six 4TB disks in pairs as pools (three mirrors, called volume0, volume1, and volume2), but I'm now working to move to just using two (18TB) disks.

So I've removed two disks to make room for the two new ones. The removed disks made up volume0, and now I see under System -> System Dataset that volume0 is the System Dataset Pool.

Could that be causing this? The system seems to work fine in all other regards.

Can I just switch the System Dataset Pool to volume1 or volume2 in the web GUI until I have my new 18TB disks set up and can move the System Dataset Pool there? Or do I need to shut down, install the two volume0 disks in place of the volume1 or volume2 disks, boot, and then move the System Dataset Pool from volume0 to somewhere that is not volume0?

(I want to move the data on volume0 to the new 18TB disks last, after moving volume1 and volume2, which is why I chose to remove those disks to make room for the new disks.)
 
Joined Jan 18, 2017 · Messages 525
So I've removed two disks to have room to start using the two new disks. Those two disks made up volume0, and now I see under System -> System Dataset that volume0 is the System Dataset Pool.

Did you Remove volume0 without exporting it?
 

thalf

Did you Remove volume0 without exporting it?
Yup, I just shut down the machine and removed the disks. I didn't notice it was the system dataset pool, and it didn't occur to me that I'd need to do anything special with volume0, since the removal was only temporary: I need to put the disks back in later to copy data off of them.
 
Joined Jan 18, 2017 · Messages 525
Is the new pool you created also named volume0?
 

thalf

No, I've called it vol4 (so there should be no clash). I'm just trying things out now, so I want to be able to remove vol4 once I've made up my mind on naming etc.
 
Joined Jan 18, 2017 · Messages 525
Well, there goes that theory. If the system dataset says it is still on volume0, change it to the last pool you intend to remove, then try to export vol4 again.
 

thalf

Should I first put the volume0 pool disks back in the system so the dataset that's on them can be moved to a new pool, or is it enough to just set a new pool in the web GUI (and the middleware will write whatever's necessary to this new system dataset on another pool)?
 

thalf

Well, judging from the length of error messages, things are getting worse.

To move the system dataset from volume0 (whose disks I had removed to give room for the new disks), I decided to be extra careful, and put the volume0 disks back in again before selecting to move the dataset in the web GUI.

Before shutting down to put in the volume0 disks, I wanted to try to do things The Right Way (TM) and export/disconnect vol4 (whose disks I would have to remove to make room for the volume0 disks). But I couldn't: I got the usual "Error exporting/disconnecting pool." error message with the same trace as previously (as far as I could tell, if I remember correctly).

So I shut down, swapped disks, booted, went to System -> System Dataset in the web GUI, and changed from volume0 to volume2. That apparently worked fine, but before shutting down to swap disks again I tried to export/disconnect volume0, since I would be removing those disks and putting the vol4 disks back in. That didn't work either: same "Error exporting/disconnecting pool." error message.

After swapping disks and booting, I went to System -> System Dataset and confirmed that the system dataset was on volume2. So at least that worked.

Then I went to Storage -> Pools, clicked on the settings icon for vol4, and selected Export/Disconnect. No luck. I still get the "Error exporting/disconnecting pool." error message popup, and now the trace is much longer than before:
Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 1634, in attachments
    return await self.middleware.call('pool.dataset.attachments_with_path', pool['path'])
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 3714, in attachments_with_path
    for attachment in await delegate.query(path, True):
  File "/usr/local/lib/python3.8/site-packages/middlewared/common/attachment/__init__.py", line 94, in query
    for resource in await self.middleware.call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/service.py", line 442, in query
    result = await self.middleware.call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 973, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/datastore/read.py", line 163, in query
    result = await self._queryset_serialize(
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/datastore/read.py", line 213, in _queryset_serialize
    result.append(await self._serialize(
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/datastore/read.py", line 229, in _serialize
    data = await self.middleware.call(extend, data, extend_context_value)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/service.py", line 602, in sharing_task_extend
    data[self.locked_field] = await self.middleware.call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1195, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/service.py", line 591, in sharing_task_determine_locked
    return await self.middleware.call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1238, in call
    return await self._call(
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 2686, in path_in_locked_datasets
    return any(is_child(path, d['mountpoint']) for d in locked_datasets if d['mountpoint'])
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/pool.py", line 2686, in <genexpr>
    return any(is_child(path, d['mountpoint']) for d in locked_datasets if d['mountpoint'])
  File "/usr/local/lib/python3.8/site-packages/middlewared/utils/path.py", line 11, in is_child
    rel = os.path.relpath(child, parent)
  File "/usr/local/lib/python3.8/posixpath.py", line 453, in relpath
    raise ValueError("no path specified")
ValueError: no path specified
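
Looking at the pool.py line quoted in the trace, the generator expression guards against empty mountpoints (`if d['mountpoint']`) but not against its first argument, the path being checked. So an empty path stored for a share or task reaches `relpath` unguarded. A self-contained sketch of that logic, with made-up dataset data (the `is_child` body here is an approximation of the middleware helper):

```python
import os.path

def is_child(child, parent):
    # Approximates middlewared/utils/path.py: relpath raises ValueError
    # when child is an empty string.
    rel = os.path.relpath(child, parent)
    return rel == "." or not rel.startswith("..")

# Hypothetical locked-dataset list; empty mountpoints are filtered out
# by the guard below, matching the pool.py line in the trace.
locked_datasets = [{"mountpoint": "/mnt/volume1/secret"},
                   {"mountpoint": ""}]

def path_in_locked_datasets(path):
    # Same shape as pool.py:2686 - the guard covers d['mountpoint'],
    # but nothing checks 'path' itself.
    return any(is_child(path, d["mountpoint"])
               for d in locked_datasets if d["mountpoint"])

print(path_in_locked_datasets("/mnt/volume1/secret/x"))  # True

try:
    path_in_locked_datasets("")  # e.g. a share row with an empty path
except ValueError as e:
    print(e)  # no path specified
```

That fits the earlier trace too: both paths through the middleware end in `is_child` receiving an empty string that was stored somewhere in the configuration.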

I'm curious/frustrated and want to know which path it is that isn't specified, but /var/log/middlewared.log doesn't mention that. It does, however, say it's sending a crash report. I don't know if it succeeds, though:
Code:
[2021/01/21 09:51:54] (DEBUG) middlewared.logger.CrashReporting.report():109 - Sending a crash report...
[2021/01/21 09:51:54] (DEBUG) urllib3.connectionpool._get_conn():266 - Resetting dropped connection: sentry.ixsystems.com
[2021/01/21 09:51:54] (DEBUG) urllib3.connectionpool._make_request():428 - https://sentry.ixsystems.com:443 "POST /api/2/store/ HTTP/1.1" 200 41

I've tried exporting/disconnecting all my pools (volume1, volume2, and vol4, which are online, and volume0, whose disks aren't present), but I get the same error for every one of them.

So... what could be wrong? Can I increase error log verbosity to find out what path it is that is causing the failure?
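
On the verbosity question: since the trace pinpoints `is_child` in middlewared/utils/path.py, one temporary hack is to make that function log its arguments just before the crash. This is only a sketch of the idea; the `is_child` body below is a stand-in, not a copy of the real helper:

```python
import functools
import logging
import os.path

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("path-debug")

def logged(fn):
    """Log a path helper's arguments before delegating to it."""
    @functools.wraps(fn)
    def wrapper(child, parent):
        logger.debug("is_child(child=%r, parent=%r)", child, parent)
        return fn(child, parent)
    return wrapper

@logged
def is_child(child, parent):
    # Stand-in for middlewared.utils.path.is_child; per the trace it
    # calls os.path.relpath(child, parent), which raises on empty child.
    rel = os.path.relpath(child, parent)
    return rel == "." or not rel.startswith("..")

is_child("/mnt/volume2/data", "/mnt/volume2")  # logs both arguments
```

With a debug line like that added by hand to the real helper, the offending empty value should be visible right before the ValueError. Remember to revert the change afterwards.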
 
Joined Jan 18, 2017 · Messages 525
I would file a bug report at https://jira.ixsystems.com/projects/NAS
I looked there and did not see any issue the same as yours, but there were a couple of other export issues. The devs will have the tools to understand the traceback.
 

thalf

Hi, I've submitted a bug report for this issue, https://jira.ixsystems.com/browse/NAS-109079 , and also another bug report which I suspect is related, https://jira.ixsystems.com/browse/NAS-109085 .

I guess some people in here have submitted bug reports before: how long does it usually take for iXsystems to work through bug reports and provide a solution? I'd like to help them, testing any suggestions they might have and answering questions, but I also need to get my NAS back into a usable state, so unfortunately I can't give it more than a couple of days.

If it usually takes weeks or even months to solve bugs, I'll have to reinstall (go through the configuration page by page in the GUI, noting everything down / taking screenshots, do a fresh install, and then manually configure everything again, since saving the config and importing it into a freshly installed system apparently doesn't work), and then I won't be able to test things out if they have suggestions.
 