Hello. This happens on every release of SCALE: after a reboot, apps show 2 pods under Workloads, of which only 1 is working.
As a manual workaround I run sudo k3s kubectl get pod --all-namespaces | awk '{if (NR > 1 && $4 != "Running") system("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 --force")}' and that gets rid of the ghost pod and brings the app back to the normal single pod. (The NR > 1 test skips kubectl's header line, which would otherwise trigger a harmless delete error.)
Is this a known, ongoing bug? Is there any workaround to fix it?
I currently run this command as a post-init script with a delay of 300 seconds.
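For reference, this is a minimal sketch of how that workaround could be wrapped as a post-init script; the 300-second sleep and the script path are my own assumptions, not part of any official fix:

#!/bin/sh
# Hypothetical cleanup script, e.g. saved as /root/cleanup-stale-pods.sh
# and registered as a TrueNAS post-init task.

# Give k3s and the apps time to come up before checking pod status.
sleep 300

# Delete every pod not in the Running state; NR > 1 skips kubectl's header row.
sudo k3s kubectl get pod --all-namespaces | awk 'NR > 1 && $4 != "Running" {
    system("sudo k3s kubectl -n " $1 " delete pods " $2 " --grace-period=0 --force")
}'

Note this also force-deletes pods that are still starting (e.g. ContainerCreating), which is why the delay before the check matters.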
Thanks
Dinos