Can no longer view jails. [EFAULT] Please set a mountpoint on main-pool/iocage

itw

Dabbler
Joined
Aug 31, 2011
Messages
48
Upgraded from 12.0-U3 to 12.0-U3.1 last night and my jails no longer run. When I bring up Jails in the UI I get this. The jails were created under 12.0-U3 just a bit ago and were running fine. They are simple FreeBSD 12 jails.

[EFAULT] Error occurred getting activated pool: Please set a mountpoint on main-pool/iocage

More Info:

Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 1046, in get_activated_pool
    pool = ioc.IOCage(skip_jails=True, reset_cache=True).get('', pool=True)
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/iocage.py", line 95, in __init__
    self.generic_iocjson = ioc_json.IOCJson()
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 1372, in __init__
    super().__init__(location, checking_datasets, silent, callback)
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 429, in __init__
    self.pool, self.iocroot = self.get_pool_and_iocroot()
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 572, in get_pool_and_iocroot
    return pool, get_iocroot()
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_json.py", line 564, in get_iocroot
    iocage_lib.ioc_common.logit(
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_common.py", line 107, in logit
    callback(content, exception)
  File "/usr/local/lib/python3.8/site-packages/iocage_lib/ioc_common.py", line 80, in callback
    raise callback_exception(message)
RuntimeError: Please set a mountpoint on main-pool/iocage

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 137, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1206, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/local/lib/python3.8/site-packages/middlewared/main.py", line 1110, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/schema.py", line 977, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/middlewared/plugins/jail_freebsd.py", line 1048, in get_activated_pool
    raise CallError(f'Error occurred getting activated pool: {e}')
middlewared.service_exception.CallError: [EFAULT] Error occurred getting activated pool: Please set a mountpoint on main-pool/iocage

Any thoughts appreciated.
 

itw
It's worse than that: the datasets aren't mounting properly, and it's more than just iocage. I can see them in zfs list, but the mountpoints are empty in the shell.
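One way to confirm this state is to compare each dataset's existence against its mounted flag. A minimal sketch, using hypothetical sample output in place of a real `zfs list` call (the dataset names here are invented for illustration):

```shell
# Hypothetical output of: zfs list -H -o name,mounted
# (real names and values depend on your pool)
zfs_output="main-pool yes
main-pool/iocage no
main-pool/data no"

# Print datasets that exist in ZFS but are not mounted
printf '%s\n' "$zfs_output" | awk '$2 == "no" { print $1 }'
```

On a live system you would pipe `zfs list -H -o name,mounted` straight into the same awk filter instead of using canned output.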
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You could try zfs mount -a.

Looking at dmesg and zpool status might also shed some light on whatever the problem is.

Clearly, if your iocage mount isn't there, you won't have jails.
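To make the pool-health check concrete: anything that isn't ONLINE is worth a closer look. A sketch against hypothetical `zpool list` output (pool names invented; on a real box you'd run the live command):

```shell
# Hypothetical output of: zpool list -H -o name,health
pool_health="main-pool ONLINE
old-pool DEGRADED"

# Flag pools that are not healthy; anything printed here deserves
# a follow-up with `zpool status <pool>` and a look through dmesg
printf '%s\n' "$pool_health" | awk '$2 != "ONLINE" { print $1 ": " $2 }'
```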
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Hardware specs would be helpful as well.
 

itw
OK. zfs mount -a got them mounted, and I can view the data that was missing and start the jails again. No idea why they did not mount at boot; I don't see anything that stands out in the logs.

I had deleted several hundred snapshots I no longer needed before updating to 12.0-U3.1.

Hardware is a Dell R610 6-bay. TrueNAS is installed on a USB memory stick. The RAIDZ2 pool is on 6x 600GB disks, with 32GB RAM and 2x Xeon X5650.
 

itw
It looks like I had managed to delete a snapshot that a "clone to new dataset" depended on, and that clone was still trying to be mounted. Or something along those lines.

I deleted those datasets, removed an NFS export that was referencing a stale dataset, rebooted, and it all mounted.

Thanks for pointing me at zfs mount -a.
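For anyone hitting the same thing: you can spot clone dependencies before deleting snapshots by checking each dataset's `origin` property; a clone's origin names the snapshot it depends on. A sketch with hypothetical sample output (dataset and snapshot names invented) standing in for a live `zfs get` call:

```shell
# Hypothetical output of: zfs get -H -o name,value origin
# A value of "-" means the dataset is not a clone
origin_output="main-pool/iocage -
main-pool/newdata main-pool/data@snap-2021-05-01
main-pool/data -"

# List clones and the snapshots they depend on; destroying one of
# these snapshots without promoting the clone first breaks the clone
printf '%s\n' "$origin_output" | awk '$2 != "-" { print $1 " depends on " $2 }'
```

A `zfs destroy -n -v <snapshot>` dry run will similarly report what a deletion would do before you commit to it.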
 