Failed to start kubernetes cluster for Applications: Received 500 response code from '/api/v1/secrets?

Hello,

I have been using TrueNAS Scale Bluefin for a few months and it has been working great. I shut down my machine this afternoon to upgrade the memory, booted it back up, and am now stuck with a strange error.

Code:
CRITICAL
Failed to start kubernetes cluster for Applications: Received 500 response code from '/api/v1/secrets?fieldSelector=type%3Dhelm.sh%2Frelease.v1'

My installed applications are not showing, and there is an error that states "Applications are not running". I tried to "Unset Pool", but I get the following error:

Code:
FAILED

Received 500 response code from '/api/v1/secrets?fieldSelector=type%3Dhelm.sh%2Frelease.v1'

 Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 426, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 461, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1152, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1284, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/update.py", line 437, in do_update
    await self.middleware.call('chart.release.clear_update_alerts_for_all_chart_releases')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1306, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1255, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/upgrade.py", line 449, in clear_update_alerts_for_all_chart_releases
    for chart_release in await self.middleware.call('chart.release.query'):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1306, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1255, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1152, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1284, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/chart_release.py", line 180, in query
    release_secrets = await self.middleware.call('chart.release.releases_secrets', extra)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1306, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1266, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1169, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/secrets_management.py", line 36, in releases_secrets
    secrets = self.middleware.call_sync(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1325, in call_sync
    return self.run_coroutine(methodobj(*prepared_call.args))
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1365, in run_coroutine
    return fut.result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 433, in result
    return self.__get_result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1152, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1284, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s_base_resources.py", line 30, in query
    d for d in (await self.KUBERNETES_RESOURCE.query(**kwargs))['items']
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 99, in query
    return await cls.call(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 84, in call
    return await cls.api_call(uri, mode, body, headers, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 45, in api_call
    async with cls.request(endpoint, mode, body, headers, timeout) as resp:
  File "/usr/lib/python3.9/contextlib.py", line 175, in __aenter__
    return await self.gen.__anext__()
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/k8s/client.py", line 33, in request
    raise ApiException(f'Received {resp.status!r} response code from {endpoint!r}')
middlewared.plugins.kubernetes_linux.k8s.exceptions.ApiException: Received 500 response code from '/api/v1/secrets?fieldSelector=type%3Dhelm.sh%2Frelease.v1'
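
The traceback bottoms out in a plain GET against the k3s API server, so the same request can be replayed outside the middleware to see whether the API server itself is what returns the 500. A minimal sketch, assuming the bundled k3s kubectl works from a root shell (the k3s binary is at /usr/local/bin/k3s per the service unit further down):

Code:
# Replay the exact request from the error message; a 500 here points at
# kube-apiserver or its datastore rather than the TrueNAS middleware.
k3s kubectl get --raw '/api/v1/secrets?fieldSelector=type%3Dhelm.sh%2Frelease.v1'

# Same query, letting kubectl build the field selector (Helm 3 stores release
# state in Secrets of type helm.sh/release.v1).
k3s kubectl get secrets -A --field-selector type=helm.sh/release.v1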


I checked the status of k3s and it is running:

Code:
root@truenas[~]# systemctl status k3s
● k3s.service - Lightweight Kubernetes
     Loaded: loaded (/lib/systemd/system/k3s.service; disabled; vendor preset: disabled)
     Active: active (running) since Wed 2023-02-08 18:29:00 MST; 19min ago
       Docs: https://k3s.io
    Process: 201578 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 201579 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 201582 (k3s-server)
      Tasks: 48
     Memory: 887.9M
        CPU: 4min 6.159s
     CGroup: /system.slice/k3s.service
             └─201582 /usr/local/bin/k3s server

Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.989647  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-f1fb05d1-b4f1-4442-8785-198eb1e7ea88 podName:d9d80676-a4b1>
Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.990477  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-ac43cf67-5b30-41aa-8f80-127980da6768 podName:619443c5-9093>
Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.990644  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-31caee9e-2d95-4231-87d4-0694d43cd68a podName:b4053797-22a2>
Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.991051  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-4101eccf-bb14-49b1-96f8-ea6cff109d73 podName:b0905c29-ab0b>
Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.991848  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-f7b6b4b6-af51-4c9a-9249-34325002397e podName:0ab402a4-08df>
Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.993661  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-83b3a9ef-fea7-47d4-a379-91e2ce04eedc podName:d9d80676-a4b1>
Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.993725  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-b0805f1c-5684-4460-8b34-4316a42231fa podName:f57e7d56-5dcd>
Feb 08 18:47:51 truenas k3s[201582]: E0208 18:47:51.994449  201582 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/zfs.csi.openebs.io^pvc-8da163d1-2436-4489-a402-3a11d0ff427e podName:0ab402a4-08df>
Feb 08 18:47:55 truenas k3s[201582]: time="2023-02-08T18:47:55-07:00" level=info msg="Using CNI configuration file /etc/cni/net.d/00-multus.conf"
Feb 08 18:48:00 truenas k3s[201582]: time="2023-02-08T18:48:00-07:00" level=info msg="Using CNI configuration file /etc/cni/net.d/00-multus.conf"


Any suggestions?

Thanks
 
Digging around a bit more, I have noticed that my apps like Plex are running and I can access them on my Roku (though not from the web app?).
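
Since the containers themselves seem to be up even though the UI says "Applications are not running", a direct pod listing should confirm that. A quick sketch, again assuming k3s kubectl responds:

Code:
# List workloads in every namespace; SCALE puts each app in its own ix-<appname> namespace.
k3s kubectl get pods -A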



Also, when I run "helm list" I get an error:

Code:
root@truenas[~]# helm list
Error: Kubernetes cluster unreachable: Get "http://localhost:8080/version": dial tcp [::1]:8080: connect: connection refused
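
For what it's worth, the localhost:8080 message from helm usually just means helm has no kubeconfig and is falling back to its default address, so it doesn't necessarily say anything about the cluster itself. A sketch, assuming the standard k3s kubeconfig path hasn't been moved on SCALE:

Code:
# k3s writes its admin kubeconfig to /etc/rancher/k3s/k3s.yaml (standard k3s
# location, assumed present here); point helm at it instead of localhost:8080.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
helm list -A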
 

Samuel Tai
Moderator
You seem to have an IPv6 issue. You could try disabling IPv6 completely by going to System->Advanced->Sysctl and adding a sysctl tunable net.ipv6.conf.all.disable_ipv6 with a value of 1. I don't believe the Kubernetes installation takes any advantage of IPv6.
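
If you want to check the effect without waiting on a reboot, the same tunable can also be toggled and read back from a shell; this is non-persistent, and the GUI entry is what makes it stick:

Code:
# One-off, non-persistent toggle of the same tunable, then read it back to confirm.
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl net.ipv6.conf.all.disable_ipv6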
 
Hi Samuel, I appreciate the response. I have disabled IPv6 and restarted the server. Unfortunately, I am still getting the same error.
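
One more place the underlying cause might show up is the k3s journal, since kube-apiserver normally logs why it answered with a 500 (for example a datastore error). A rough sketch; the time window and grep filter are arbitrary:

Code:
# Recent k3s log lines mentioning secrets or errors; adjust the window and filter as needed.
journalctl -u k3s --since "30 min ago" --no-pager | grep -iE 'secret|error' | tail -n 50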
 

Samuel Tai
Moderator
All I can tell from the error is that there's something funky going on with the Kubernetes secrets service, but as I don't run Scale, I can't tell what the problem is. The error message is saying Kubernetes couldn't retrieve the secrets that hold the Helm chart release data.
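
One hedged way to see whether the API server itself reports an unhealthy component is its readiness endpoint, assuming k3s kubectl still answers at all:

Code:
# Ask the API server for a per-check readiness breakdown; a failing check
# (etcd/kine, informers, etc.) is usually named explicitly.
k3s kubectl get --raw '/readyz?verbose'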
 
Okay. Thanks for your help. I think I am going to install a fresh version of Scale and then import my pools.
 