Webui login not available | bug?

centauri

Cadet
Joined
Dec 14, 2021
Messages
6
For some reason the web UI isn't showing the login page anymore. Luckily I enabled SSH, but even there login is delayed and shows an error.
After a reboot it is back to normal, but it seems to fail again after an application catalog refresh.

[screenshot attached: image_2022-01-30_154640.png]


Stopping or restarting middlewared doesn't seem to work:


Code:
root@truenas[~]# service middlewared stop
Failed to allocate directory watch: Too many open files
zsh: command not found: Failed
root@truenas[~]# service middlewared start
Failed to allocate directory watch: Too many open files
Job for middlewared.service failed because of unavailable resources or another system error.
See "systemctl status middlewared.service" and "journalctl -xe" for details.
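For anyone hitting the same symptom: "Failed to allocate directory watch: Too many open files" from systemd usually means the kernel's inotify *instance* limit is exhausted, not the ordinary open-file limit. A quick diagnostic sketch (the raised value in the last comment is illustrative, not a recommendation):

```shell
# systemd's directory watches are inotify instances; check the kernel limits:
cat /proc/sys/fs/inotify/max_user_instances
cat /proc/sys/fs/inotify/max_user_watches

# Count inotify instances held per PID (run as root to see every process):
find /proc/[0-9]*/fd -lname 'anon_inode:*inotify*' 2>/dev/null \
  | cut -d/ -f3 | sort | uniq -c | sort -rn | head

# If the instance limit is exhausted, it can be raised until reboot, e.g.:
#   sysctl -w fs.inotify.max_user_instances=512    # value is illustrative
```

If one PID holds a disproportionate share of instances, that process is the likely leak.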


Code:
░░ The unit run-docker-runtime\x2drunc-moby-c4a7ca754ca6bddabd204d9de5952b00a6e95ef10acd3fa219e0dfa25b2b34c3-runc.eyBRr4.mount has successfully entered the 'dead' state.
Jan 31 10:16:46 truenas.local.acme.com collectd[10297]: Traceback (most recent call last):
                                                                File "/usr/local/lib/collectd_pyplugins/cputemp.py", line 21, in read
                                                                  with Client() as c:
                                                                File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 317, in __init__
                                                                  self._ws.connect()
                                                                File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 126, in connect
                                                                  rv = super(WSClient, self).connect()
                                                                File "/usr/lib/python3/dist-packages/ws4py/client/__init__.py", line 215, in connect
                                                                  self.sock.connect(self.bind_addr)
                                                              ConnectionRefusedError: [Errno 111] Connection refused
Jan 31 10:16:56 truenas.local.acme.com collectd[10297]: Traceback (most recent call last):
                                                                File "/usr/local/lib/collectd_pyplugins/cputemp.py", line 21, in read
                                                                  with Client() as c:
                                                                File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 317, in __init__
                                                                  self._ws.connect()
                                                                File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 126, in connect
                                                                  rv = super(WSClient, self).connect()
                                                                File "/usr/lib/python3/dist-packages/ws4py/client/__init__.py", line 215, in connect
                                                                  self.sock.connect(self.bind_addr)
                                                              ConnectionRefusedError: [Errno 111] Connection refused
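Errno 111 in that traceback means nothing was listening on the middleware socket when collectd's cputemp plugin called `Client()`. A small probe can confirm the socket state directly (a sketch; the two paths are assumed middlewared socket locations, not verified against this install):

```shell
# Probe the middleware unix sockets; a refused connect here reproduces the
# ConnectionRefusedError that collectd logs.
python3 - <<'EOF'
import os
import socket

for path in ("/var/run/middlewared.sock", "/var/run/middlewared-internal.sock"):
    if not os.path.exists(path):
        print(f"{path}: missing")
        continue
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)   # errno 111 here matches the collectd traceback
        print(f"{path}: accepting connections")
    except OSError as e:
        print(f"{path}: {e}")
    finally:
        s.close()
EOF
```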
 

centauri

Cadet
Joined
Dec 14, 2021
Messages
6
There are a lot of these messages in the middlewared log:

Code:
[2022/02/01 23:09:19] (ERROR) pyroute2.ndb.140097807837072.sources.nsmanager.receiver():294 - source error: <class 'ValueError'> file descriptor cannot be a negative integer (-1)
[2022/02/01 23:09:19] (ERROR) pyroute2.ndb.140097277328976.sources.nsmanager.receiver():294 - source error: <class 'ValueError'> file descriptor cannot be a negative integer (-1)
[2022/02/01 23:09:19] (ERROR) pyroute2.ndb.140101924576272.sources.nsmanager.receiver():294 - source error: <class 'ValueError'> file descriptor cannot be a negative integer (-1)
[2022/02/01 23:09:19] (ERROR) pyroute2.ndb.140098914157280.sources.nsmanager.receiver():294 - source error: <class 'ValueError'> file descriptor cannot be a negative integer (-1)
[2022/02/01 23:09:19] (ERROR) pyroute2.ndb.140101903466608.sources.nsmanager.receiver():294 - source error: <class 'ValueError'> file descriptor cannot be a negative integer (-1)
[2022/02/01 23:09:19] (ERROR) pyroute2.ndb.140098764770368.sources.nsmanager.receiver():294 - source error: <class 'ValueError'> file descriptor cannot be a negative integer (-1)
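The negative-file-descriptor errors are consistent with the process running out of file descriptors, which would also fit the earlier inotify allocation failure. A quick check of middlewared's fd usage (a sketch; it falls back to the current shell's PID if middlewared isn't running):

```shell
# Find middlewared's oldest matching PID, or fall back to this shell for illustration:
pid=$(pgrep -o -f middlewared || echo $$)

# How many fds the process holds, versus its limit:
echo "open fds: $(ls /proc/"$pid"/fd | wc -l)"
grep 'Max open files' /proc/"$pid"/limits
```

If the open-fd count is at or near the "Max open files" soft limit, new sockets and watches will fail exactly as shown in these logs.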
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Is this 22.02-RC2?
This seems unusual, and is more likely something in your specific environment.
Can you roll back?
 

centauri

Cadet
Joined
Dec 14, 2021
Messages
6
Yes, 22.02-RC2. It just happened again when I started an app and then opened its logs view.
 

colemar

Dabbler
Joined
Feb 12, 2022
Messages
13
I have the same issue: a fresh TrueNAS SCALE installation on a new HDD, restored from a configuration backup. There are no other error logs and no degradation of the boot pool.
New alerts:
  • Failed to check for alert BootPoolStatus:

Code:
concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 97, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 33, in _call
    with Client('ws+unix:///var/run/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 320, in __init__
    raise ClientException('Failed connection handshake')
middlewared.client.client.ClientException: Failed connection handshake
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/alert.py", line 773, in __run_source
    alerts = (await alert_source.check()) or []
  File "/usr/lib/python3/dist-packages/middlewared/alert/source/boot_pool.py", line 16, in check
    pool = await self.middleware.call("zfs.pool.query", [["id", "=", boot_pool]])
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1324, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1289, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1295, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1218, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1192, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.client.client.ClientException: Failed connection handshake
Current alerts:
  • Failed to check for alert BootPoolStatus: (identical traceback to the one above)
 

bsaurusrex

Cadet
Joined
Feb 26, 2022
Messages
7
I am having similar errors. I'm having a hard time getting this server stable, and I now have to reboot every day.
Not sure if it's 100% related, but:

I'm also seeing:

Code:
[2022/04/07 09:11:59] (WARNING) middlewared._loop_monitor_thread():1615 - Task seems blocked:
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 296, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 296, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 319, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 659, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 319, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 659, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 296, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 296, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 319, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/usr/lib/python3/dist-packages/kubernetes_asyncio/client/api_client.py", line 658, in __deserialize_model
    value = data[klass.attribute_map[attr]]

Code:
Apr 07 13:15:09 ud.com k3s[12300]: E0407 13:15:09.766962 12300 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
Apr 07 13:15:09 om k3s[12300]: E0407 13:15:09.766973 12300 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://127.0.0.1:6443/apis/coordin...kube-node-lease/leases/ix-truenas?timeout=10s": context deadline exceeded
Apr 07 13:15:09 ud.com k3s[12300]: E0407 13:15:09.767227 12300 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
Apr 07 13:15:09 d.com k3s[12300]: E0407 13:15:09.768170 12300 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
Apr 07 13:15:09 oud.com k3s[12300]: E0407 13:15:09.769186 12300 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
Apr 07 13:15:09ud.com k3s[12300]: I0407 13:15:09.770280 12300 trace.go:205] Trace[312980892]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ix-truenas,user-agent:k3s/v1.21.0 (linux/amd64) kubernetes/bcdd3fe,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf,a>
Apr 07 13:15:09 loud.com k3s[12300]: Trace[312980892]: [5.758193843s] [5.758193843s] END
Apr 07 13:15:11 adcloud.com collectd[8505]: Traceback (most recent call last):
  File "/usr/local/lib/collectd_pyplugins/cputemp.py", line 21, in read
    with Client() as c:
  File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 317, in __init__
    self._ws.connect()
  File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 126, in connect
    rv = super(WSClient, self).connect()
  File "/usr/lib/python3/dist-packages/ws4py/client/__init__.py", line 215, in connect
    self.sock.connect(self.bind_addr)
BlockingIOError: [Errno 11] Resource temporarily unavailable
Apr 07 13:15:13 ud.com k3s[12300]: time="2022-04-07T13:15:13.510116074+08:00" level=error msg="error while range on /registry/minions/ix-truenas : database is locked"
Apr 07 13:15:13 oud.com k3s[12300]: time="2022-04-07T13:15:13.519188005+08:00" level=error msg="error while range on /registry/persistentvolumeclaims/ix-pihole01/pihole01-config : database is locked"
Apr 07 13:15:18cloud.com k3s[12300]: I0407 13:15:18.507617 12300 trace.go:205] Trace[1459144410]: "GuaranteedUpdate etcd3" type:*unstructured.Unstructured (07-Apr-2022 13:14:44.508) (total time: 33999ms):
Apr 07 13:15:18 oud.com k3s[12300]: Trace[1459144410]: [33.999491308s] [33.999491308s] END
Apr 07 13:15:18 oud.com k3s[12300]: I0407 13:15:18.507975 12300 trace.go:205] Trace[167964176]: "Update" url:/apis/zfs.openebs.io/v1/namespaces/openebs/zfsnodes/ix-truenas,user-agent:zfs-driver/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.1.103,accept:application/json, */*,protocol:HTTP>
Apr 07 13:15:18 oud.com k3s[12300]: Trace[167964176]: [34.000552775s] [34.000552775s] END

Apr 07 13:14:00 oud.com systemd[1]: middlewared.service: Unit process 142035 (asyncio_loop) remains running after unit stopped.
Apr 07 13:14:00 ud.com systemd[1]: middlewared.service: Unit process 142746 (python3) remains running after unit stopped.
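Those last two systemd lines mean the unit was stopped but middlewared worker processes were left alive; a stale worker still holding the sockets would explain why a plain restart fails until reboot. A sketch for spotting leftovers before restarting:

```shell
# List any middlewared-related processes that survived the unit stop
# (bracketed first letter keeps grep from matching itself):
ps -eo pid,ppid,etime,cmd | grep -E '[m]iddlewared|[a]syncio_loop' \
  || echo "no leftover middlewared processes"

# Cross-check with systemd's view of the unit, where available:
command -v systemctl >/dev/null && systemctl status middlewared.service --no-pager || true
```

Whether any leftover process is safe to kill before `service middlewared start` depends on the environment, so check what each one is doing first.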
 