Gents, after the morning coffee session I have completed my last target within SCALE RC:
Direct connection to the k3s master node with Portainer, with full control.
What this means:
- full and comfortable control of all k3s aspects
- you can definitely forget about the Apps GUI, which at its current stage is out of the useful range
- no need to spend time tuning the GUI - it is a waste of time when there is already something excellent.
How to do it:
1. Use the YAML manifest deployment and expose it via NodePort.
2. Download the original YAML file from the official source:
https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
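For example, you can pull it straight onto the SCALE host (assuming wget is available there; curl -o works the same way):
Code:
wget -O /tmp/portainer.yaml https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml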
3. Get the name of the master node from your SCALE RC:
Code:
k3s kubectl get nodes --show-labels
which is 'ix-truenas' in the default setup.
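If you only want the bare node name without the full label dump, a jsonpath query like this should also work (just a sketch, assuming a single-node cluster):
Code:
k3s kubectl get nodes -o jsonpath='{.items[0].metadata.name}'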
4. Edit the YAML file (# represents a line number in the YAML file):
#115 nodeSelector:
#116 {}
to new:
#115 nodeSelector:
#116   kubernetes.io/hostname: <node name from step 3>
(the empty {} on line #116 is replaced; keep the indentation of the surrounding pod spec, otherwise the YAML becomes invalid)
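For illustration, with the default node name 'ix-truenas' from step 3, the edited section would look roughly like this:
Code:
nodeSelector:
  kubernetes.io/hostname: ix-truenas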
Save the file, e.g. as /tmp/portainer.yaml.
5. Deployment:
Code:
k3s kubectl apply -f /tmp/portainer.yaml
you will get:
namespace/portainer created
serviceaccount/portainer-sa-clusteradmin created
persistentvolumeclaim/portainer created
clusterrolebinding.rbac.authorization.k8s.io/portainer created
service/portainer created
and this error:
error: error parsing /tmp/portainer.yaml: error converting YAML to JSON: yaml: line 26: mapping values are not allowed in this context
I don't know why ... because I did not touch that line at all (a new ticket for iX). The content of that line is:
volume.alpha.kubernetes.io/storage-class: "generic"
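If you want to check what actually sits around the reported line in your copy of the file, plain sed is enough (the line range here is only chosen to bracket the error message):
Code:
sed -n '20,30p' /tmp/portainer.yaml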
I got stuck here for a while, so time warp to the next step (the explanation is in the bottom line of this post):
just continue with the official Portainer deployment link (it does not contain the master node name):
Code:
k3s kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
you will get:
namespace/portainer unchanged
serviceaccount/portainer-sa-clusteradmin unchanged
persistentvolumeclaim/portainer unchanged
clusterrolebinding.rbac.authorization.k8s.io/portainer unchanged
service/portainer unchanged
deployment.apps/portainer created
here is the magic (last row):
deployment.apps/portainer created
because when you use the next command as proof:
Code:
k3s kubectl get pods --all-namespaces
you will get:
NAMESPACE   NAME                        READY   STATUS    RESTARTS   AGE
portainer   portainer-dcd599f8f-6gkl6   1/1     Running   3          1m
(the list here is filtered; I already have several other pods running there).
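If you only care about the portainer namespace, you can narrow the listing down:
Code:
k3s kubectl get pods -n portainer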
6. Open your browser and use:
http://ip:port
where IP is:
the TrueNAS SCALE host IP exposed to the LAN,
or an FQDN according to your setup (I have an Nginx reverse proxy in my existing infra)
where port is:
30779 - for https
30777 - for http
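These NodePorts come from the official manifest; you can verify which ports your deployment actually exposes by checking the service:
Code:
k3s kubectl get service -n portainer portainer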
when you have a reverse proxy like me, there is another way, without the port number (see the sketch after this step)
define your admin user/password and ENJOY!
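For the reverse-proxy way mentioned above, a minimal Nginx sketch could look like the following - server_name, upstream IP and any TLS handling are placeholders for your own infra, not taken from my config:
Code:
# hypothetical reverse-proxy entry for Portainer running on SCALE
server {
    listen 80;
    server_name portainer.example.lan;        # placeholder FQDN

    location / {
        proxy_pass http://192.0.2.10:30777;   # SCALE host IP + Portainer HTTP NodePort
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # WebSocket headers, needed e.g. for the container console
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}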
-------------------------------------------------------------
Bottom line
This was one of the test scenarios while I was trying to find a way to run Portainer in a SCALE pod:
When I tried only the original YAML from Portainer, I got a Portainer pod, but it was stuck in the PENDING (scheduling) stage. I found the reason:
Events:
Type     Reason            Age   From                Message
----     ------            ---   ----                -------
Warning  FailedScheduling  93m   default-scheduler   0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.
and the nodeSelector for 'kubernetes.io/hostname=' contained an empty value ... of course the people from Portainer cannot know which value is necessary within the YAML.
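You can see what the deployment's nodeSelector actually contains with a jsonpath query like this (just a quick check, not from the Portainer docs):
Code:
k3s kubectl get deployment -n portainer portainer -o jsonpath='{.spec.template.spec.nodeSelector}'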
Of course, in the Portainer deployment documentation you can find a patch for it:
Code:
kubectl patch deployments -n portainer portainer -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n portainer -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of portainer pod; exit 1)
which doesn't work even when you correctly change both 'kubectl' commands in the script to 'k3s kubectl' ... I would like to know the reason for 'k3s kubectl' instead of the standard 'kubectl' command convention in SCALE.
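For reference, the adjusted variant I mean (both commands prefixed with 'k3s') looks like this - and, as said, it still did not get the pod scheduled for me:
Code:
k3s kubectl patch deployments -n portainer portainer -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(k3s kubectl get pods -n portainer -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of portainer pod; exit 1)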
The Portainer pod still wasn't available.
Hm.
I tried removing the taint, to tell the node that it is allowed to run pods:
Code:
kubectl taint nodes --all node-role.kubernetes.io/master-
I got an error here:
error: taint "node-role.kubernetes.io/master" not found
So I checked the taints:
Code:
k3s kubectl describe node ix-truenas | grep Taints
The reason why this scenario was tested at all is the official Kubernetes documentation:
By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, for example for a single-machine Kubernetes cluster for development, run:
Code:
kubectl taint nodes --all node-role.kubernetes.io/master-
Don't do it.
Some screenshots:
take them just as a taster.
So my final setup is now:
I have one new host for container operations - TrueNAS SCALE RC-1-2:
1. I can run a fully managed Docker Swarm there with all its added value, thanks to Portainer CE hosted on another host (also on TrueNAS when I need it).
2. A fully managed k3s node, thanks to Portainer CE running on the SCALE node.
I can deploy any Docker container from Docker Hub, and then I have FULL CONTROL over it and nobody can push unclear container sources on me.
I can deploy any chart, and then I have FULL CONTROL over it and nobody can push unclear chart sources with predefined user/password on me ...
Why is this not defined as the target architecture of SCALE? As a kind of professional solution? It works.