Waiting for pods to be scaled to 0 replica(s)

PyCoder

Dabbler
Joined
Nov 5, 2019
Messages
30
Hi guys

I have an issue with the apps!
Everything worked fine for days, but today I rebooted and now none of my Docker apps will start!
They are stuck with the message "Waiting for pods to be scaled to 0 replica(s)" in the tasks menu (top right), and I get this error for every single Docker app:

2022-03-19 20:05:48
Error: Error response from daemon: exit status 2: "/usr/sbin/zfs fs snapshot deadpool/ix-applications/docker/03db9f16b62cec68b6b73908cebedd6c400b8e3a9fc77024a6b96ba0886f2c83@538055343" => cannot open 'deadpool/ix-applications/docker/03db9f16b62cec68b6b73908cebedd6c400b8e3a9fc77024a6b96ba0886f2c83': dataset does not exist usage: snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ... For the property list, run: zfs set|get For the delegated permission list, run: zfs allow|unallow
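
The dataset from the error can be checked directly like this (names copied straight from the error above; adjust to your pool):

Code:
# "dataset does not exist" here confirms the Docker clone is really gone
zfs list deadpool/ix-applications/docker/03db9f16b62cec68b6b73908cebedd6c400b8e3a9fc77024a6b96ba0886f2c83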

Can someone explain how to fix this?

I really don't want to redeploy 40 Docker apps without an automatic config restore like Unraid has, for example. :(
 

bsaurusrex

Cadet
Joined
Feb 26, 2022
Messages
7
Failed to pull image "linuxserver/sonarr:latest": rpc error: code = Unknown desc = failed to register layer: exit status 2: "/usr/sbin/zfs fs snapshot ssd01/ix-applications/docker/ae1aa2dc7909fec431b2b64f2e23551dd88661c2772f33b86cdd065d985c2f61@920772100" => cannot open 'ssd01/ix-applications/docker/ae1aa2dc7909fec431b2b64f2e23551dd88661c2772f33b86cdd065d985c2f61': dataset does not exist usage: snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ... For the property list, run: zfs set|get For the delegated permission list, run: zfs allow|unallow

Except I can't deploy a new TrueCharts app or a Docker app, so I can't even recreate anything.
 

PyCoder

Dabbler
Joined
Nov 5, 2019
Messages
30
In the meantime I redeployed everything again, but I also googled the error... apparently a lot of people run into this issue with ZFS and Docker from time to time.

I had the same thing on Unraid with ZFS; that's why I eventually went with a zvol formatted as ext4 for Docker. But I thought TrueNAS had figured out a way around these ZFS/Docker issues. :(

Apparently the only solution is to zfs destroy all the snapshots, clones, and ix-applications and start from scratch. :(
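
For anyone going that route, it's roughly this (pool name taken from my error above; be aware it wipes every single app):

Code:
# see what lives under ix-applications first (filesystems, snapshots, clones)
zfs list -r -t all deadpool/ix-applications
# destroy the whole tree; use -R instead of -r if it complains about dependent clones
zfs destroy -r deadpool/ix-applications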

I also have a strange issue where my "backup pool" keeps waking up for no reason. :(
 

bsaurusrex

Cadet
Joined
Feb 26, 2022
Messages
7
I tried a format and restore. It didn't help at all.

Eventually:
unset the pool
systemctl stop docker
systemctl stop k3s
zfs destroy -r ssd01/ix-applications
systemctl start docker
systemctl start k3s
re-set the pool
manually re-set up every application :mad:

I'm sure there's a better way, or some file inside ix-applications to update.
I'll simply implement a config folder mapping for all applications so their configs live outside the ix-applications folder, as sketched below. Then I can easily re-spawn them and pull in the same source config/DB. Live and learn.
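
Rough sketch of what I mean (dataset names are just examples; the point is that the host path volumes live outside ix-applications):

Code:
# dedicated dataset for app configs, outside ix-applications
zfs create ssd01/appconfig
zfs create ssd01/appconfig/sonarr
# then map /mnt/ssd01/appconfig/sonarr into the app as a host path volume,
# so destroying ix-applications no longer takes the config/DB with it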

Oddly, now I have a bunch of random datasets on the root of the pool:

Code:
ssd01/2729ce872ab7c1718d88989dd7426c43987f8b0248d34750426b8e6796f05bb7 80K 863G 680K legacy
ssd01/2729ce872ab7c1718d88989dd7426c43987f8b0248d34750426b8e6796f05bb7-init 208K 863G 680K legacy
ssd01/480393665d669fee377b656d628486a9dce27add8cffc3c50cf760e4a9b7d9bd 10.5M 863G 17.1M legacy
ssd01/790f31ebdd68ac9680df3cc7b4eb30291ac1db5bde20f93d07deb44549a74706 6.79M 863G 6.79M legacy
ssd01/7e32ab1667aafb41f7d4934c740e25f048f8a53a1d340c8164c3325c7cc95b4d 632K 863G 632K legacy
ssd01/905d384fae5a8c1cbb0a534c28287dcf164b2b4f5a7a318d3e1086ef0b9db592 80K 863G 680K legacy
ssd01/905d384fae5a8c1cbb0a534c28287dcf164b2b4f5a7a318d3e1086ef0b9db592-init 208K 863G 680K legacy

ssd01/ix-applications 1.15G 863G 300K /mnt/ssd01/ix-applications
 

LKB

Dabbler
Joined
Dec 29, 2016
Messages
15
Hi,

Have you found any solutions?

I have the same problem with just one app: the official Nextcloud. If I try to deploy it, it remains stuck with this message:

(screenshot attached)


LK
 

Lawris

Cadet
Joined
Jun 12, 2022
Messages
2
LKB said:
Hi, have you found any solutions? I have the same problem with just one app: the official Nextcloud. If I try to deploy it, it remains stuck with this message: (screenshot attached)
LK
Hi, I'm having the EXACT same problem and I really don't know what to do about it; I googled it for quite some time yesterday. Did you find any solution?

If I don’t manage to fix this I might just go for a VM lol

Cheers.
 

PyCoder

Dabbler
Joined
Nov 5, 2019
Messages
30
I found a solution!

As an example, my Radarr container is broken:

1) Copy the dataset ID from the path (11239550d17da5b68768b4d383bf63f9f2b285fcad38af6be0405deb5e) (see the screenshot below)
(screenshot attached)


2) zpool history | grep 11239550d17da5b68768b4d383bf63f9f2b285fcad38af6be0405deb5e

Output:
Code:
root@truenas[~]# zpool history | grep 11239550d17da5b68768b4d383bf63f9f2b285fcad38af6be0405deb5e
2022-05-24.13:16:01 zfs clone -p -o mountpoint=legacy deadpool/ix-applications/docker/bc421b0a4bb243572eb80664a6e51530429e6220c0ac59a505eab3f5bb31879c@659628406 deadpool/ix-applications/docker/11239550d17da5b68768b4d383bf63f9f2b285fcad38af6be0405deb5ee39cc3
2022-05-24.13:16:19 zfs snapshot deadpool/ix-applications/docker/11239550d17da5b68768b4d383bf63f9f2b285fcad38af6be0405deb5ee39cc3@314381588


3) Copy the whole dataset name, without the part after the @

4) Re-create the dataset
Code:
root@truenas[~]# zfs create deadpool/ix-applications/docker/11239550d17da5b68768b4d383bf63f9f2b285fcad38af6be0405deb5ee39cc3


5) Start your "broken" container
(screenshot attached)
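
The whole fix in one go, as a sketch (pool name and dataset ID are the ones from my example above; swap in your own from zpool history):

Code:
POOL=deadpool
ID=11239550d17da5b68768b4d383bf63f9f2b285fcad38af6be0405deb5ee39cc3   # full ID from zpool history, without the @... part
# only re-create the dataset if it is really missing
zfs list "${POOL}/ix-applications/docker/${ID}" 2>/dev/null \
  || zfs create "${POOL}/ix-applications/docker/${ID}"
# then start the "broken" container again from the Apps UI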
 

ammagdy

Cadet
Joined
Nov 7, 2022
Messages
1
PyCoder said:
I found a solution! [solution steps quoted from the post above]
Hi, I've tried it and it didn't work for me; it says "dataset already exists". Is there another solution, or has anyone found a fix?
 

PyCoder

Dabbler
Joined
Nov 5, 2019
Messages
30
If the dataset still exists, then there must be something else buggy.
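
You could at least check what state that dataset is in, something like this (pool name and ID below are placeholders; use your own pool and the full ID from zpool history):

Code:
# is the dataset there, and is it mounted?
zfs list -o name,mountpoint,mounted yourpool/ix-applications/docker/<full-id>
# did any snapshots survive on it?
zfs list -t snapshot yourpool/ix-applications/docker/<full-id>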

TBH I stopped using TrueNAS Scale.

Too many bugs and too many features missing.

I'll wait until it's production ready, and in the meantime I'll stick with Unraid + ZFS Master.
 