Upgrade to Bluefin broke my system

jeroenst

Cadet
Joined
Oct 19, 2022
Messages
6
I upgraded to Bluefin (22.12.2), and that's when all the issues started: employees couldn't access Samba shares anymore, the graphs were broken, and Docker images failed to start, making the software unusable.

After downgrading to Angelfish (22.02.4) through the boot menu, the shares are accessible again, but Docker fails to start and the Docker images are no longer shown under Apps, even after unsetting the pool and choosing it again.

Also, when I try to save the Docker advanced settings, I get an error saying that Docker is not running:

Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 411, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 446, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1140, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1272, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/update.py", line 288, in do_update
    await self.middleware.call('kubernetes.status_change')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1345, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1305, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1206, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/lifecycle.py", line 236, in status_change
    self.middleware.call_sync('kubernetes.status_change_internal')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in call_sync
    return self.run_coroutine(methodobj(*prepared_call.args))
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1404, in run_coroutine
    return fut.result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 433, in result
    return self.__get_result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/usr/lib/python3/dist-packages/middlewared/plugins/kubernetes_linux/lifecycle.py", line 251, in status_change_internal
    await self.middleware.call('container.image.load_default_images')
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1345, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1294, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/docker_linux/images.py", line 171, in load_default_images
    await self.load_images_from_file(DEFAULT_DOCKER_IMAGES_PATH)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/docker_linux/images.py", line 155, in load_images_from_file
    await self.docker_checks()
  File "/usr/lib/python3/dist-packages/middlewared/plugins/docker_linux/images.py", line 176, in docker_checks
    raise CallError('Docker service is not running')
middlewared.service_exception.CallError: [EFAULT] Docker service is not running

How can I fix this without reinstalling the system?
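
(For reference, one way to confirm which boot environment is actually active after a rollback like this; midclt is the middleware client that ships with SCALE, and jq just pretty-prints the JSON:)

midclt call bootenv.query | jq   # lists the boot environments; the active one is marked in the output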
 

jeroenst

Cadet
Joined
Oct 19, 2022
Messages
6
More output from the shell:
# service docker start
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.


# systemctl status docker.service
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
    Drop-In: /etc/systemd/system/docker.service.d
             └─override.conf
     Active: failed (Result: exit-code) since Thu 2023-04-13 10:02:08 CEST; 19s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
    Process: 42628 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
   Main PID: 42628 (code=exited, status=1/FAILURE)
        CPU: 228ms

Apr 13 10:02:08 NAS01 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Apr 13 10:02:08 NAS01 systemd[1]: Stopped Docker Application Container Engine.
Apr 13 10:02:08 NAS01 systemd[1]: docker.service: Start request repeated too quickly.
Apr 13 10:02:08 NAS01 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 13 10:02:08 NAS01 systemd[1]: Failed to start Docker Application Container Engine.
Apr 13 10:02:22 NAS01 systemd[1]: docker.service: Start request repeated too quickly.
Apr 13 10:02:22 NAS01 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 13 10:02:22 NAS01 systemd[1]: Failed to start Docker Application Container Engine.
root@NAS01[~]#
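
The "Start request repeated too quickly" lines just mean systemd's restart rate limiter has tripped, so later attempts never even reach dockerd. Clearing that state gives one clean start and a fresh error to read (paths assume the stock unit layout shown above):

cat /etc/systemd/system/docker.service.d/override.conf   # show the drop-in TrueNAS applies to docker.service
systemctl reset-failed docker.service                    # clear the failed state and rate-limit counter
systemctl start docker.service                           # one clean start attempt, then recheck the journal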
 

jeroenst

Cadet
Joined
Oct 19, 2022
Messages
6
Even more output:

# journalctl -xe
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440278372+02:00" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440368237+02:00" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440424705+02:00" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440696847+02:00" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440721629+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440767751+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440792480+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440813521+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440835967+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440868881+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440889789+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440905386+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440919145+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440932931+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440975381+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.440992630+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.441006963+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.441020063+02:00" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.441214834+02:00" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.441287688+02:00" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.441355387+02:00" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 13 10:04:10 NAS01 dockerd[44983]: time="2023-04-13T10:04:10.441379998+02:00" level=info msg="containerd successfully booted in 0.037024s"
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.450614283+02:00" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.450651575+02:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.450680400+02:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.450700095+02:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.451455335+02:00" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.451486626+02:00" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.451513520+02:00" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.451543680+02:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.453732580+02:00" level=error msg="failed to mount overlay: invalid argument" storage-driver=overlay2
Apr 13 10:04:10 NAS01 dockerd[44974]: time="2023-04-13T10:04:10.453770251+02:00" level=error msg="[graphdriver] prior storage driver overlay2 failed: driver not supported"
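
Those last two lines are the actual failure: dockerd exits because it cannot mount its overlay2 storage, and everything above them is normal containerd startup noise. Some hedged checks from a shell (the pool name "tank" below is a placeholder for your own pool, and /etc/docker/daemon.json is simply dockerd's default config path):

dockerd --debug                                          # run the daemon in the foreground to get the full mount error
cat /etc/docker/daemon.json                              # see which storage-driver and data-root are configured
zfs get mountpoint,mounted tank/ix-applications/docker   # example path: the dataset backing Docker's layers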
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Make sure you have a config backup, then reinstall and restore the backup.
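
If the web UI is still reachable, System Settings > General > Manage Configuration > Download File is the supported way to export one. From a shell, the configuration is a single SQLite database, so copying it somewhere safe works as a belt-and-braces step (the destination path below is just an example):

cp /data/freenas-v1.db /mnt/tank/backups/freenas-v1-$(date +%F).db   # SCALE keeps its config in this SQLite file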
 

neofusion

Contributor
Joined
Apr 2, 2022
Messages
159
I upgraded to Bluefin (22.12.2), and that's when all the issues started: employees couldn't access Samba shares anymore, the graphs were broken, and Docker images failed to start, making the software unusable.
A common reason for apps failing after the Bluefin upgrade is that the app in question accesses a file path that is also shared over SMB.
If that sounds familiar, I recommend reading up on Host Path Validation and considering what it means for your setup. There are also multiple posts in the SCALE forum on the topic.
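
For a quick overlap check, the middleware client can print every path currently shared over SMB, which you can then compare by hand against each app's Host Path settings (midclt and jq both ship with SCALE):

midclt call sharing.smb.query | jq -r '.[].path'   # print the path of every configured SMB share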
 