How to run a Docker container on SCALE for dummies?

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
The output of docker ps is empty. The rest I can check in a couple of minutes.

As for the localhost - I could do that via ssh port forwarding - or even with curl on the TrueNAS host. My intention was to follow the recipe until I get that working, then worry about ingress.
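A sketch of that tunnel-based check, with a placeholder hostname (truenas.local) and the 9080 port from the recipe:

```shell
# Forward local port 9080 to 127.0.0.1:9080 on the TrueNAS host
ssh -L 9080:127.0.0.1:9080 root@truenas.local

# In another terminal on the workstation, the service should now answer:
curl -Is http://127.0.0.1:9080/

# Alternatively, run curl directly on the TrueNAS host, no tunnel needed:
ssh root@truenas.local curl -Is http://127.0.0.1:9080/
```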
 

Patrick M. Hausen
Code:
truenas# kubectl get pods -A -o wide
NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
kube-system   openebs-zfs-controller-0      0/5     Pending   0          7h25m   <none>   <none>   <none>           <none>
kube-system   coredns-66c464876b-hsz46      0/1     Pending   0          7h25m   <none>   <none>   <none>           <none>
default       onlyoffice-57c7b978d7-42q6g   0/1     Pending   0          6h51m   <none>   <none>   <none>           <none>
truenas# kubectl get svc -A -o wide
NAMESPACE     NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       kubernetes   ClusterIP   172.17.0.1     <none>        443/TCP                  7h25m   <none>
kube-system   kube-dns     ClusterIP   172.17.0.10    <none>        53/UDP,53/TCP,9153/TCP   7h25m   k8s-app=kube-dns
default       onlyoffice   ClusterIP   172.17.36.15   <none>        80/TCP                   6h51m   app.kubernetes.io/instance=onlyoffice,app.kubernetes.io/name=onlyoffice


And now I have something - after I logged in to the UI ...
CRITICAL
Failed to start kubernetes cluster for Applications: [EFAULT] Unable to locate kube-router routing table. Please refer to kuberouter logs.
2020-10-25 05:48:19 (America/Los_Angeles)

I'll go search for those logs ...
 

Patrick M. Hausen
Somehow the network setup of my TrueNAS system was incomplete - no default gateway. I had fixed that during my experiments with Helm because it complained when trying to download, but it seems the kube-router got confused somewhere along the way.

A reboot fixed most of the problems. After the reboot my nameserver was missing from the config. I added that back, rebooted again, and now my state is this:
Code:
truenas# kubectl get pods -A -o wide
NAMESPACE     NAME                          READY   STATUS             RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
kube-system   openebs-zfs-node-ddrks        2/2     Running            2          24m   192.168.93.11   ix-truenas   <none>           <none>
kube-system   openebs-zfs-controller-0      5/5     Running            5          8h    172.16.0.6      ix-truenas   <none>           <none>
default       onlyoffice-57c7b978d7-mw2qp   1/1     Running            0          10m   172.16.0.7      ix-truenas   <none>           <none>
kube-system   coredns-66c464876b-hsz46      0/1     CrashLoopBackOff   12         8h    172.16.0.5      ix-truenas   <none>           <none>
truenas# kubectl get svc -A -o wide 
NAMESPACE     NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       kubernetes   ClusterIP   172.17.0.1     <none>        443/TCP                  8h    <none>
kube-system   kube-dns     ClusterIP   172.17.0.10    <none>        53/UDP,53/TCP,9153/TCP   8h    k8s-app=kube-dns
default       onlyoffice   ClusterIP   172.17.62.97   <none>        80/TCP                   10m   app.kubernetes.io/instance=onlyoffice,app.kubernetes.io/name=onlyoffice


That CrashLoopBackOff looks curious, though. Is there a simple way to wipe "everything kubernetes" and start over?
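Before wiping everything, a CrashLoopBackOff can usually be diagnosed in place. A sketch, assuming the coredns pod name from the listing above:

```shell
# Events and last-state information for the crashing pod
kubectl -n kube-system describe pod coredns-66c464876b-hsz46

# Logs from the previous (crashed) container instance
kubectl -n kube-system logs coredns-66c464876b-hsz46 --previous

# Delete the crashing pod; its controller recreates it fresh
kubectl -n kube-system delete pod coredns-66c464876b-hsz46
```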
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I am making some progress, but now I think I need a nudge in the right direction, again.

Per the developer notes I did midclt call -job kubernetes.update '{"pool": "ssd"}' and added the suggested shell aliases.

The suggestion from the developers is that if you are installing a docker-like environment yourself, it's much easier to install docker and portainer (not k8s) in the short term. Just start docker with an init script.

When k8s is included in SCALE 20.12 with a UI, it won't require nearly as many k8s skills or as much experience. It will provide a simpler docker-like experience, but with far more capabilities for real applications (pods, not just single containers).

We'll let everyone know when the k8s UI lands in the NIGHTLY train. It may still have bugs, but it will be much easier to deploy and manage.
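The short-term docker + portainer route can be sketched as follows; the init-script placement and the 9000 port mapping are assumptions, not an official recipe:

```shell
# Make sure the docker daemon is running (on SCALE this line would
# go into a post-init script so it survives reboots)
systemctl start docker

# Persist Portainer's data in a named volume and expose its web UI
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
```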
 

Patrick M. Hausen
I'll wait until December, then. In the meantime my docker compose VMs are running fine on TN Core.

Thanks!
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@Patrick M. Hausen I don't know if there's a midclt command you could use to start kubernetes from scratch. Otherwise all I know is cleaning up junk (svc, pods, deployments, etc.) by using the appropriate kubectl delete command. I wonder which curl command you'd use as a test - would you expect "curl -Is" to return anything useful? It's the end of the day for me. PS: I couldn't figure out how to get the k3s agent to run on a separate host and connect to the cluster as another node.
 

Patrick M. Hausen
I wonder which curl command you'd use as a test - would you expect "curl -Is" to return anything useful?
I was referring to the fact that, according to the recipe, the Onlyoffice container should end up reachable via 127.0.0.1 on the Docker host, if I am not mistaken. I planned to strictly follow that recipe until I got a listening service on 127.0.0.1:9080, then use curl (on FreeBSD I would use fetch) to retrieve http://127.0.0.1:9080/welcome/, which should deliver a well-formed HTML page containing the string "Thank you for choosing ONLYOFFICE!".
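That acceptance check can be scripted; a sketch, assuming the service listens on 127.0.0.1:9080 as the recipe describes:

```shell
# The status line should report success
curl -Is http://127.0.0.1:9080/welcome/ | head -n 1

# The page body should contain the marker string
curl -s http://127.0.0.1:9080/welcome/ \
  | grep -q 'Thank you for choosing ONLYOFFICE!' && echo 'welcome page OK'
```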

Then I would have started to research how to do ingress with Helm to get the service published on my LAN.
Then I would have researched how to get the JWT_SECRET in there with Helm.
Then I would have thought about changing my reverse SSL proxy and setting the container productive.

First things first ;)

Kind regards,
Patrick
 

KrisBee
Understood. After seeing the last comment from @morganL, I'm not sure I'll make much progress with kubernetes on SCALE now. I can revert to my working simple k3s cluster based on three Debian VMs. It's containerd-based, not docker, and uses flannel for networking and a straightforward NFS share for the cluster persistent volume. Still learning, rather slowly these days ...
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Okay, some insight. You should do this before installing k8s:

Code:
alias kubectl="k3s kubectl"
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml


Setting up k8s on a given pool is super simple (tm):

Code:
midclt call -job kubernetes.update '{"pool": "pool_name_here"}'


Removing it again is NOT as simple, and generally speaking it just shouldn't be done.

- It's advisable to move the TrueNAS SCALE GUI to a different port, because some helm charts expect to be able to use host port 80 or 443.
- Helm charts also almost ALWAYS need you to download values.yaml and add your own config; it's not just "add a link and go" (that might work, but often doesn't).

Installing a helm chart using a custom values.yaml file goes as follows (unifi controller as an example):

Code:
helm upgrade --install --values unifi/values.yaml unifi ./unifi/chart/


- Some helm charts need you to add secrets. You can create a secrets.yaml file with secrets like this:

Code:
apiVersion: v1
kind: Secret
metadata:
  name: mariadbsecret
type: Opaque
stringData:
  rootUser.password: MariaDBTest
  galera.mariabackup.password: BackupTest
---
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud-db-pass
type: Opaque
stringData:
  username: nextcloud
  password: NextcloudTest


Installing such a secrets.yaml is rather easy:
Code:
kubectl apply -f secrets.yaml
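One note on the stringData field used above: it takes plain-text values, while the older data field requires base64-encoded values. If a chart or example expects data, the encoding is a one-liner:

```shell
# base64-encode a secret value for use under "data:" instead of "stringData:"
printf '%s' 'MariaDBTest' | base64
# -> TWFyaWFEQlRlc3Q=
```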


If you want to add the stock k8s dashboard for testing purposes:
Code:
kubectl apply -f https://vividcode.io/content/insecure-kubernetes-dashboard.yml
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard2
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
EOF


Keep in mind you still need to add an ingress or a proxy on top of the above little script.
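As an illustration of that last point, a minimal Ingress manifest for the dashboard might look like this. The hostname is a placeholder, and the service name/port are assumptions; check them against your deployment with kubectl get svc -n kubernetes-dashboard:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.example.local      # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard # assumed service name
            port:
              number: 80               # assumed service port
```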

TLDR:
Helm and k8s are not "super easy", and if you have not spent some time exploring k8s outside of SCALE, it might be wise to await the GUI.
 

xioustic

Dabbler
Joined
Sep 4, 2014
Messages
23
Is there a way to rollback / uninstall k8s & kubernetes and friends? I ran `midclt call -job kubernetes.update '{"pool": "pool_name_here"}'`.

Things started to work per the docs, but I am just not a fan of where things are at yet in terms of user friendliness and the complexity kubernetes/k8s adds. I am confident it's going to be great in the future, but not until a UI is in place or deploying a basic docker container is made simpler.

I'll move to an Ubuntu VM with Docker/Portainer for now. However, the baggage from the midclt command still exists per `docker ps -a` at boot, and I'd like to get rid of it:
Code:
truenas# docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                  CREATED              STATUS              PORTS               NAMES
5179e362f2ac        rancher/coredns-coredns                    "/coredns -conf /etc…"   30 seconds ago       Up 28 seconds                           k8s_coredns_coredns-66c464876b-5h8vp_kube-system_f6b3b585-71b3-4102-a316-bd26e6586f9f_0
61e2a719ffb4        quay.io/k8scsi/csi-node-driver-registrar   "/csi-node-driver-re…"   36 seconds ago       Up 35 seconds                           k8s_csi-node-driver-registrar_openebs-zfs-node-dvg8g_kube-system_f1792966-243f-4bc3-b9bb-99ccca263541_0
eb2270f4c091        rancher/pause:3.1                          "/pause"                 About a minute ago   Up About a minute                       k8s_POD_coredns-66c464876b-5h8vp_kube-system_f6b3b585-71b3-4102-a316-bd26e6586f9f_0
cfc58aa97d70        rancher/pause:3.1                          "/pause"                 About a minute ago   Up About a minute                       k8s_POD_openebs-zfs-node-dvg8g_kube-system_f1792966-243f-4bc3-b9bb-99ccca263541_0
08e23ad44135        rancher/pause:3.1                          "/pause"                 About a minute ago   Up About a minute                       k8s_POD_openebs-zfs-controller-0_kube-system_99461433-1f81-4de3-a880-f101667e6b11_0

Thanks.
 

ornias
Is there a way to rollback / uninstall k8s & kubernetes and friends? I ran `midclt call -job kubernetes.update '{"pool": "pool_name_here"}'`.
That has been asked and answered earlier: no. It's not solid enough to be relied upon anyway, afaik.

Things started to work per the docs but I am just not a fan of where things are at yet in terms of user friendliness and the complexity kubernetes / k8s adds. I am confident it's going to be great in the future but not until a UI is in place or deploying a basic docker container is made more simple.
I'm currently running compose myself: I created a config folder with per-container *.env files and a docker-compose.yml.
Just running docker-compose up -d is enough to start my 40 containers.
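A sketch of that layout; the service names, images, and env files are illustrative, not the actual setup described above:

```yaml
# docker-compose.yml - each service reads its own env file
version: "3.8"
services:
  nextcloud:
    image: nextcloud:latest
    env_file: nextcloud.env       # per-container settings live here
    ports:
      - "8080:80"
    restart: unless-stopped
  mariadb:
    image: mariadb:10.5
    env_file: mariadb.env
    volumes:
      - ./mariadb-data:/var/lib/mysql
    restart: unless-stopped
```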

I'll move to an Ubuntu VM with Docker/Portainer for now. However, the baggage from the midclt command still exists per `docker ps -a` at boot and I'd like to get rid of it:
In that case you could just as well (and more solidly, imho) run TrueNAS Core 12.
 

xioustic
That has been asked and answered earlier: no. It's not solid enough to be relied upon anyway, afaik.


I'm currently running compose myself: I created a config folder with per-container *.env files and a docker-compose.yml.
Just running docker-compose up -d is enough to start my 40 containers.


In that case you could just as well (and more solidly, imho) run TrueNAS Core 12.

Is docker-compose working well for you? Is it persisting data and using the correct ZFS pool? Where on the filesystem are you storing your compose file(s)? Did you have to do anything special to get it working?

I'm intimately familiar with Docker and docker-compose, less so kubernetes or k8s or k3s or rancher.

I'd like to use the Ubuntu VM with the hopes that transitioning it to the TrueNAS SCALE host will be painless at some point in the near future.
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
Morning guys, I just finished throwing together an old machine to start testing TrueNAS SCALE to get ready for the switchover. The main thing I run is Plex, together with the Locast2Plex add-on; that way I can get my local channels when my antenna is being funky. They have done a lot of work creating a docker container here: https://github.com/tgorgdotcom/locast2plex. I know nothing about getting this loaded into docker, let alone SCALE. With some help I was able to get it running in a jail on my current TrueNAS box, but I want to go to SCALE to get PCIe video card transcoding going.

Could you guys help a newbie out and either point me to a guide on getting this loaded or a tutorial I could follow?
 

ornias
Morning guys, I just finished throwing together an old machine to start testing TrueNAS SCALE to get ready for the switchover. The main thing I run is Plex, together with the Locast2Plex add-on; that way I can get my local channels when my antenna is being funky. They have done a lot of work creating a docker container here: https://github.com/tgorgdotcom/locast2plex. I know nothing about getting this loaded into docker, let alone SCALE. With some help I was able to get it running in a jail on my current TrueNAS box, but I want to go to SCALE to get PCIe video card transcoding going.

Could you guys help a newbie out and either point me to a guide on getting this loaded or a tutorial I could follow?
Please make your own topic. What you ask is FAR beyond a "for dummies" guide, which is what this topic is about.
 

oumpa31
Please make your own topic. What you ask is FAR beyond a "for dummies" guide, which is what this topic is about.

OK, I thought it would be the correct place because it's already built; just getting it loaded onto SCALE was my issue.
 

oumpa31
"just getting it loaded into scale" LoL

I've read the readme and it does not seem to be an "add to docker and go" container.
Yeah, there is one file that needs to be created so you can use your Locast login. On Core I SSH in to create that file in the jail I set up for it.
 

j_r0dd

Contributor
Joined
Jan 26, 2015
Messages
134
Anyone using a dedicated SSD pool for their containers/pods? Would there even be a benefit if the config files and data will still be stored on my main pool of spinning disks?
 

ornias
Anyone using a dedicated SSD pool for their containers/pods? Would there even be a benefit if the config files and data will still be stored on my main pool of spinning disks?
You can also use a special vdev with SSDs for metadata and small blocks, and set a special_small_blocks threshold at or above the recordsize on the dataset holding your application data... best of both worlds, really ;-)

Though config files aren't read that often, so I doubt you'll gain performance on those. But DBs would definitely benefit :)
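A sketch of that setup with the standard ZFS commands; the pool, disk, and dataset names are placeholders:

```shell
# Add a mirrored SSD special vdev for metadata and small blocks
zpool add tank special mirror /dev/sdx /dev/sdy

# Route every block of the app dataset to the special vdev by
# setting the small-blocks threshold equal to the recordsize
zfs set recordsize=128K tank/apps
zfs set special_small_blocks=128K tank/apps
```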
 

shadofall

Contributor
Joined
Jun 2, 2020
Messages
100
Anyone using a dedicated SSD pool for their containers/pods? Would there even be a benefit if the config files and data will still be stored on my main pool of spinning disks?

Probably depends on the app. I'm using two cheap 128 GB SSDs in a mirror, since I didn't need a lot of space to store the apps and their configs/some data, and they were, as mentioned, cheap. It really just means a little less data being written to my main array; for my use case I doubt there are any other benefits.
 