All Apps unable to update (TrueNAS-SCALE-22.02.4)

Kasazn

Explorer
Joined
Apr 17, 2021
Messages
60
Hi lads,

As per the thread title, every time I try to update my Apps, an error like the one below pops up. This started happening after the recent TrueNAS update. Any ideas?

Code:
[EFAULT] Failed to upgrade chart release: Error: UPGRADE FAILED: cannot patch "deluge-config" with kind PersistentVolumeClaim: PersistentVolumeClaim "deluge-config" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value



Code:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 411, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 446, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1272, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1140, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/upgrade.py", line 116, in upgrade
    await self.upgrade_chart_release(job, release, options)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/upgrade.py", line 299, in upgrade_chart_release
    await self.middleware.call('chart.release.helm_action', release_name, chart_path, config, 'upgrade')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1345, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1305, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1206, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/chart_releases_linux/helm.py", line 44, in helm_action
    raise CallError(f'Failed to {tn_action} chart release: {stderr.decode()}')
middlewared.service_exception.CallError: [EFAULT] Failed to upgrade chart release: Error: UPGRADE FAILED: cannot patch "deluge-config" with kind PersistentVolumeClaim: PersistentVolumeClaim "deluge-config" is invalid: spec.resources.requests.storage: Forbidden: field can not be less than previous value
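
The root cause here is generic Kubernetes behavior rather than anything TrueNAS-specific: spec.resources.requests.storage on a PersistentVolumeClaim may only ever grow, so the upgrade is rejected when the new chart revision requests a smaller size than the PVC already has. A minimal way to check the current value (a sketch; the namespace ix-deluge is assumed from SCALE's usual ix-<release> naming):

Code:
# Print the PVC's currently requested size; the upgrade must request at least this much.
# "ix-deluge" is an assumed namespace based on the ix-<release> convention.
k3s kubectl get pvc deluge-config -n ix-deluge \
  -o jsonpath='{.spec.resources.requests.storage}'

If the value printed is larger than what the updated chart asks for, raising the app's storage setting back up to at least that size should let the patch go through.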
 

newguy123

Dabbler
Joined
Jun 5, 2022
Messages
23
Same problem. I manage 3 different instances of SCALE that are all on 22.02.4, and this is what I get now. Also, the apps are updating, but this error persists...
 

Kasazn

Explorer
Joined
Apr 17, 2021
Messages
60
newguy123 said:
Same problem. I manage 3 different instances of SCALE that are all on 22.02.4, and this is what I get now. Also, the apps are updating, but this error persists...

Can confirm this behavior. Just noticed.
 

om1d3

Cadet
Joined
Sep 26, 2023
Messages
4
Hi all,

Sorry to revive this old thread, but I am experiencing the exact same issue on TrueNAS-SCALE-22.12.4.2 with TrueCharts Apps.
I have attached a screenshot that should show the most relevant details.

Quick question: any advice on what I am doing wrong and how to fix it?

Thank you.
 

Attachments

  • SCR-20231229-ulgk.png (1 MB)

tprelog

Patron
Joined
Mar 2, 2016
Messages
297

om1d3

Cadet
Joined
Sep 26, 2023
Messages
4
Thank you for your answer, tprelog.

I want to migrate to Cobia, but I am worried that the configured Apps will not function as they do right now. I am investigating what this upgrade means and how I can prevent any unwanted downtime, since my main App is Home Assistant and I really do not want to jeopardize the family approval factor.
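
One generic safety net before a major upgrade, independent of the apps themselves, is a recursive ZFS snapshot of the applications dataset so its state can be rolled back. A sketch, assuming a pool named tank (adjust to your actual pool and dataset layout):

Code:
# Hypothetical pre-upgrade snapshot; "tank" is a placeholder pool name.
zfs snapshot -r tank/ix-applications@pre-cobia-upgrade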

If you have any advice for me regarding this, I am all ears.

Thanks again.
 

tprelog

Patron
Joined
Mar 2, 2016
Messages
297
Sorry, I can't provide any advice here; you need to check with the TrueCharts team - I do not use any of these "apps" because I prefer the simplicity of docker / docker-compose for home use over Kubernetes.

As for Home Assistant, I'm even less of a fan of these "apps" because I depend on Home Assistant being reliable, and these "apps" do not provide the experience the Home Assistant developers intended. A Home Assistant backup made by these "apps" is, by default, not even compatible with an "official" Home Assistant installation. Obviously, there are A LOT of successful users of these Home Assistant "apps," but the HA developers do not support them. It is similar to how you can enable apt on TrueNAS to install some arbitrary package: you may be successful, but you're not going to be supported by iXsystems.

For many years, I swore I'd never use HAOS, but as I became more dependent on Home Assistant and HAOS development has improved, I've slowly changed my mind. I've been running HAOS for over a year now, and for me, there's no looking back.
 
Last edited:

om1d3

Cadet
Joined
Sep 26, 2023
Messages
4
I understand and appreciate your answers.
Thank you.

As far as Home Assistant is concerned, my thought process started from the necessity of having TrueNAS up at all times, and I decided to piggyback on that and use the same machine for Home Assistant. While using the App, I have come to be unhappy with the solution, so I am now building my own independent Kubernetes cluster, onto which I want to move all of the Apps currently running inside TrueNAS, keeping TrueNAS for storage as intended by design. As for why Kubernetes over docker compose: I want to push myself out of my comfort zone and force myself to learn to reliably manage multiple Home Assistant instances that communicate via MQTT with Zigbee2MQTT, using Mosquitto as the broker.
It is far from a comfortable design choice, but it imposes the constraints I need, and I can grow my skills while improving the resilience of my home deployments.
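
As a quick sanity check of that MQTT path once the pieces are up, something like the following works; the broker address 192.168.1.10 is a placeholder, and the topic assumes Zigbee2MQTT's default prefix:

Code:
# Subscribe to everything Zigbee2MQTT publishes; -v prints topic and payload.
mosquitto_sub -h 192.168.1.10 -t 'zigbee2mqtt/#' -v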
 

om1d3

Cadet
Joined
Sep 26, 2023
Messages
4
OK, after doing my best to make this migration as painless as possible, I upgraded to Cobia today, hoping for the best.
After sitting out the usual window in which all installed Apps disappear after an upgrade, I found the two most important apps non-functional: cloudnative-pg and home-assistant. Since home-assistant relies on cloudnative-pg to run, I am still trying to unravel the chain of causality and understand whether cloudnative-pg is actually the missing link or whether both are facing unexpected issues.
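
A quick way to triage which releases are unhealthy before drilling into a single pod (assuming shell access on the SCALE host) is to list every pod that is not Running:

Code:
# Show non-Running pods across all namespaces, including the ix-* app namespaces.
k3s kubectl get pods -A | grep -vE 'Running|Completed'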



Here is the kubectl describe pod output:
Code:
Name:             cloudnative-pg-74b64968d8-s8wcx
Namespace:        ix-cloudnative-pg
Priority:         0
Service Account:  cloudnative-pg
Node:             ix-truenas/192.168.21.121
Start Time:       Fri, 05 Jan 2024 18:58:27 -0500
Labels:           app=cloudnative-pg-2.0.12
                  app.kubernetes.io/instance=cloudnative-pg
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=cloudnative-pg
                  app.kubernetes.io/version=1.21.1
                  helm-revision=9
                  helm.sh/chart=cloudnative-pg-2.0.12
                  pod-template-hash=74b64968d8
                  pod.name=main
                  release=cloudnative-pg
Annotations:      k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "ix-net",
                        "interface": "eth0",
                        "ips": [
                            "172.16.0.166"
                        ],
                        "mac": "66:26:71:25:1d:7e",
                        "default": true,
                        "dns": {},
                        "gateway": [
                            "172.16.0.1"
                        ]
                    }]
                  rollme: eYqN6
Status:           Pending
IP:               172.16.0.166
IPs:
  IP:           172.16.0.166
Controlled By:  ReplicaSet/cloudnative-pg-74b64968d8
Containers:
  cloudnative-pg:
    Container ID:
    Image:         tccr.io/truecharts/cloudnative-pg:v1.21.1@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5
    Image ID:
    Ports:         9443/TCP, 8080/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /manager
    Args:
      controller
      --leader-elect
      --config-map-name=cloudnative-pg-config
      --secret-name=cloudnative-pg-config
      --webhook-port=9443
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     4
      memory:  8Gi
    Requests:
      cpu:      10m
      memory:   50Mi
    Liveness:   http-get https://:webhook/readyz delay=10s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get https://:webhook/readyz delay=10s timeout=5s period=10s #success=2 #failure=5
    Startup:    tcp-socket :webhook delay=10s timeout=2s period=5s #success=1 #failure=60
    Environment:
      TZ:                            UTC
      UMASK:                         0022
      UMASK_SET:                     0022
      NVIDIA_VISIBLE_DEVICES:        void
      S6_READ_ONLY_ROOT:             1
      MONITORING_QUERIES_CONFIGMAP:  cloudnative-pg-monitoring
      OPERATOR_IMAGE_NAME:           tccr.io/truecharts/cloudnative-pg:v1.21.1@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5
      OPERATOR_NAMESPACE:            ix-cloudnative-pg (v1:metadata.namespace)
    Mounts:
      /controller from scratch-data (rw)
      /dev/shm from devshm (rw)
      /run/secrets/cnpg.io/webhook from webhook-certificates (ro)
      /shared from shared (rw)
      /tmp from tmp (rw)
      /var/logs from varlogs (rw)
      /var/run from varrun (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dtvst (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  devshm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  scratch-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  shared:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  varlogs:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  varrun:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  webhook-certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cnpg-webhook-cert
    Optional:    true
  kube-api-access-dtvst:
    Type:                     Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:   3607
    ConfigMapName:            kube-root-ca.crt
    ConfigMapOptional:        <nil>
    DownwardAPI:              true
QoS Class:                    Burstable
Node-Selectors:               kubernetes.io/arch=amd64
Tolerations:                  node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Topology Spread Constraints:  kubernetes.io/hostname:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/instance=cloudnative-pg,app.kubernetes.io/name=cloudnative-pg,pod.name=cloudnative-pg
                              truecharts.org/rack:ScheduleAnyway when max skew 1 is exceeded for selector app.kubernetes.io/instance=cloudnative-pg,app.kubernetes.io/name=cloudnative-pg,pod.name=cloudnative-pg
Events:
  Type     Reason          Age                  From               Message
  ----     ------          ----                 ----               -------
  Normal   Scheduled       2m58s                default-scheduler  Successfully assigned ix-cloudnative-pg/cloudnative-pg-74b64968d8-s8wcx to ix-truenas
  Normal   AddedInterface  2m58s                multus             Add eth0 [172.16.0.166/16] from ix-net
  Normal   Pulling         88s (x4 over 2m58s)  kubelet            Pulling image "tccr.io/truecharts/cloudnative-pg:v1.21.1@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5"
  Warning  Failed          87s (x4 over 2m57s)  kubelet            Failed to pull image "tccr.io/truecharts/cloudnative-pg:v1.21.1@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5": rpc error: code = NotFound desc = failed to pull and unpack image "tccr.io/truecharts/cloudnative-pg@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5": failed to resolve reference "tccr.io/truecharts/cloudnative-pg@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5": tccr.io/truecharts/cloudnative-pg@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5: not found
  Warning  Failed          87s (x4 over 2m57s)  kubelet            Error: ErrImagePull
  Warning  Failed          75s (x6 over 2m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff         64s (x7 over 2m56s)  kubelet            Back-off pulling image "tccr.io/truecharts/cloudnative-pg:v1.21.1@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5"
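
The Events section pinpoints the failure: the kubelet cannot resolve that image digest at tccr.io, so the pod never leaves ImagePullBackOff. One way to confirm the registry no longer serves the digest, assuming shell access on the host (a diagnostic sketch, not a fix):

Code:
# Pull the image manually; a "not found" here confirms the digest is gone from the registry.
k3s crictl pull tccr.io/truecharts/cloudnative-pg:v1.21.1@sha256:163bc6e7f03c15fb0c68a14ff30c8f9f6e2a990d7c1034df0e2b473c5116cab5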


The simplest question is: what am I doing wrong?

PS: thank you for the time spent reading this and trying to help me wrap my head around this entire mess I have created.
 