I want to use a native Kubernetes CronJob to run a DDNS update task. My YAML is below, but the job keeps failing. Can someone help me figure out what the problem is?
k3s version v1.23.5+k3s-fbfa51e5-dirty (fbfa51e5)
go version go1.17.8
os version TrueNAS-SCALE-22.02.2.1
Code:
# configmap.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudflare-updater
  namespace: default
data:
  cloudflare-updater.sh: |
    #!/usr/bin/env bash
    set -o nounset
    set -o errexit

    current_ipv4="$(curl -s https://ipv4.icanhazip.com/)"

    zone_id=$(curl -s -X GET \
      "https://api.cloudflare.com/client/v4/zones?name=${DOMAIN#*.}&status=active" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      | jq --raw-output ".result[0] | .id"
    )

    record_ipv4=$(curl -s -X GET \
      "https://api.cloudflare.com/client/v4/zones/${zone_id}/dns_records?name=${DOMAIN}&type=A" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
    )
    old_ip4=$(echo "$record_ipv4" | jq --raw-output '.result[0] | .content')

    if [[ "${current_ipv4}" == "${old_ip4}" ]]; then
      printf "%s - IP Address '%s' has not changed" "$(date -u)" "${current_ipv4}"
      exit 0
    fi

    record_ipv4_identifier="$(echo "$record_ipv4" | jq --raw-output '.result[0] | .id')"

    update_ipv4=$(curl -s -X PUT \
      "https://api.cloudflare.com/client/v4/zones/${zone_id}/dns_records/${record_ipv4_identifier}" \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      --data "{\"id\":\"${zone_id}\",\"type\":\"A\",\"proxied\":false,\"name\":\"${DOMAIN}\",\"content\":\"${current_ipv4}\"}" \
    )

    if [[ "$(echo "$update_ipv4" | jq --raw-output '.success')" == "true" ]]; then
      printf "%s - Success - IP Address '%s' has been updated" "$(date -u)" "${current_ipv4}"
      exit 0
    else
      printf "%s - Yikes - Updating IP Address '%s' has failed" "$(date -u)" "${current_ipv4}"
      exit 1
    fi

# secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-token
  namespace: default
type: Opaque
data:
  # Your domain name in base64
  DOMAIN: fWataR3=
  # Your API token in base64
  # Get the token from https://dash.cloudflare.com/profile/api-tokens
  TOKEN: fWataR3=

# cronjob.yaml
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cloudflare-updater
  namespace: default
spec:
  # At minute 0 of every hour.
  schedule: "0 * * * *"
  failedJobsHistoryLimit: 5
  successfulJobsHistoryLimit: 3
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 3
      ttlSecondsAfterFinished: 300
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: cloudflare-updater
              image: ghcr.io/k8s-at-home/kubectl:v1.23.5
              imagePullPolicy: IfNotPresent
              envFrom:
                - secretRef:
                    name: cloudflare-token
              command:
                - "/bin/sh"
                - "-c"
                - "/app/cloudflare-updater.sh"
              volumeMounts:
                - name: cloudflare-updater
                  mountPath: /app/cloudflare-updater.sh
                  subPath: cloudflare-updater.sh
                  readOnly: true
          volumes:
            - name: cloudflare-updater
              projected:
                defaultMode: 0755
                sources:
                  - configMap:
                      name: cloudflare-updater
                      items:
                        - key: cloudflare-updater.sh
                          path: cloudflare-updater.sh
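For anyone reproducing this, here is a rough sketch of how the three manifests can be applied and sanity-checked. The domain and token shown in secret.yaml are placeholders, not my real values.
Code:
# Apply the three manifests (file names as given above)
k3s kubectl apply -f configmap.yaml -f secret.yaml -f cronjob.yaml

# Confirm the objects exist in the default namespace
k3s kubectl get configmap/cloudflare-updater secret/cloudflare-token cronjob/cloudflare-updater

# Decode the Secret to make sure the base64 values are what was intended
k3s kubectl get secret cloudflare-token -o jsonpath='{.data.DOMAIN}' | base64 -d; echo
k3s kubectl get secret cloudflare-token -o jsonpath='{.data.TOKEN}' | base64 -d | head -c 4; echo '...'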
k3s kubectl describe cronjob.batch/cloudflare-updater
Code:
Name:                          cloudflare-updater
Namespace:                     default
Labels:                        <none>
Annotations:                   <none>
Schedule:                      0 * * * *
Concurrency Policy:            Forbid
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      5
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
  Containers:
   cloudflare-updater:
    Image:      ghcr.io/k8s-at-home/kubectl:v1.23.5
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
      -c
      /app/cloudflare-updater.sh
    Environment Variables from:
      cloudflare-token  Secret  Optional: false
    Environment:  <none>
    Mounts:
      /app/cloudflare-updater.sh from cloudflare-updater (ro,path="cloudflare-updater.sh")
  Volumes:
   cloudflare-updater:
    Type:               Projected (a volume that contains injected data from multiple sources)
    ConfigMapName:      cloudflare-updater
    ConfigMapOptional:  <nil>
Last Schedule Time:  Sun, 10 Jul 2022 21:00:00 +0800
Active Jobs:         <none>
Events:
  Type    Reason            Age  From                Message
  ----    ------            ---  ----                -------
  Normal  SuccessfulCreate  27m  cronjob-controller  Created job cloudflare-updater-27624300
  Normal  SawCompletedJob   27m  cronjob-controller  Saw completed job: cloudflare-updater-27624300, status: Failed
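The events only say the job completed with status Failed, so before re-triggering it manually I also want to look at the failed job and its pods directly. A sketch of the commands that should surface the pod-level error (job name taken from the event above, assuming it has not already been removed by ttlSecondsAfterFinished):
Code:
# Describe the failed job created by the CronJob controller
k3s kubectl describe job/cloudflare-updater-27624300

# List the pods belonging to that job via the job-name label
k3s kubectl get pods -l job-name=cloudflare-updater-27624300 -o wide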
k3s kubectl create job --from=cronjob/cloudflare-updater cloudflare-updater
k3s kubectl get all
Code:
pod/cloudflare-updater-ng4rv   0/1   Error   0   35s
pod/cloudflare-updater-jzdt9   0/1   Error   0   31s
pod/cloudflare-updater-dc6qf   0/1   Error   0   28s
pod/cloudflare-updater-wdcrs   0/1   Error   0   24s
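Every retry ends up in Error. The container output and exit status can be pulled with something like the following (pod name taken from the listing above); the describe output for the same pod follows below.
Code:
# Show whatever the script printed before it exited
k3s kubectl logs pod/cloudflare-updater-wdcrs

# Show just the container's termination state and exit code
k3s kubectl get pod cloudflare-updater-wdcrs \
  -o jsonpath='{.status.containerStatuses[0].state.terminated}'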
k3s kubectl describe pod/cloudflare-updater-wdcrs
Code:
Name:         cloudflare-updater-wdcrs
Namespace:    default
Priority:     0
Node:         ix-truenas/10.0.0.10
Start Time:   Sun, 10 Jul 2022 21:30:03 +0800
Labels:       controller-uid=dac2fa2a-eee7-4fa8-b380-1fc16d8334df
              job-name=cloudflare-updater
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "ix-net",
                    "interface": "eth0",
                    "ips": [
                        "172.16.1.169"
                    ],
                    "mac": "8a:29:b1:56:1a:6a",
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "ix-net",
                    "interface": "eth0",
                    "ips": [
                        "172.16.1.169"
                    ],
                    "mac": "8a:29:b1:56:1a:6a",
                    "default": true,
                    "dns": {}
                }]
Status:       Failed
IP:           172.16.1.169
IPs:
  IP:           172.16.1.169
Controlled By:  Job/cloudflare-updater
Containers:
  cloudflare-updater:
    Container ID:  docker://b194a0dafc0684d0ade8b4852c06c2ee5142a4b87b72c0671e2821b8b147aa97
    Image:         ghcr.io/k8s-at-home/kubectl:v1.23.5
    Image ID:      docker-pullable://ghcr.io/k8s-at-home/kubectl@sha256:53b37dbf69bc9edf6b43f4f95049b85e233f7055d8e220ef030a337ef9d93dc0
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      /app/cloudflare-updater.sh
    State:          Terminated
      Reason:       Error
      Exit Code:    3
      Started:      Sun, 10 Jul 2022 21:30:05 +0800
      Finished:     Sun, 10 Jul 2022 21:30:06 +0800
    Ready:          False
    Restart Count:  0
    Environment Variables from:
      cloudflare-token  Secret  Optional: false
    Environment:  <none>
    Mounts:
      /app/cloudflare-updater.sh from cloudflare-updater (ro,path="cloudflare-updater.sh")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vls8j (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  cloudflare-updater:
    Type:               Projected (a volume that contains injected data from multiple sources)
    ConfigMapName:      cloudflare-updater
    ConfigMapOptional:  <nil>
  kube-api-access-vls8j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       66s                default-scheduler  Successfully assigned default/cloudflare-updater-wdcrs to ix-truenas
  Normal   AddedInterface  66s                multus             Add eth0 [172.16.1.169/16] from ix-net
  Normal   Pulled          66s                kubelet            Container image "ghcr.io/k8s-at-home/kubectl:v1.23.5" already present on machine
  Normal   Created         65s                kubelet            Created container cloudflare-updater
  Normal   Started         65s                kubelet            Started container cloudflare-updater
  Warning  FailedMount     62s (x3 over 63s)  kubelet            MountVolume.SetUp failed for volume "cloudflare-updater" : object "default"/"cloudflare-updater" not registered
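The container did start and run despite the FailedMount warning. To check whether that warning was only transient, the pod's events can be listed in order with something like this (a sketch, same pod name as above):
Code:
# List only the events for this pod, oldest first
k3s kubectl get events \
  --field-selector involvedObject.name=cloudflare-updater-wdcrs \
  --sort-by=.metadata.creationTimestamp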
Could this be a problem with k3s itself? Could someone try this CronJob on their own cluster and see whether it succeeds?
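If anyone wants to try it, a minimal reproduction should just be the three manifests plus a one-off job. Since the script needs bash, curl and jq, it may also be worth confirming those tools exist in the ghcr.io/k8s-at-home/kubectl image (I have not verified its contents). A sketch, using manual-test as a throwaway job name:
Code:
# Trigger a one-off run of the CronJob and collect its output
k3s kubectl create job --from=cronjob/cloudflare-updater manual-test
k3s kubectl wait --for=condition=complete --timeout=120s job/manual-test || true
k3s kubectl logs -l job-name=manual-test

# Check that the image actually ships bash, curl and jq
k3s kubectl run toolcheck --rm -it --restart=Never \
  --image=ghcr.io/k8s-at-home/kubectl:v1.23.5 \
  -- /bin/sh -c 'for t in bash curl jq; do command -v "$t" || echo "missing: $t"; done'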