Trying to understand the Kubernetes implementation and how to use it across multiple OSs

PackElend

Explorer
Joined
Sep 23, 2020
Messages
60
Hi there,
I have read the details in Developer's Notes | TrueNAS Documentation Hub, but I dare ask some additional questions, as I'm not that familiar with Kubernetes; I'm used to docker-compose.

NUMBER 1

After a fresh install of SCALE 20.12-ALPHA (Angelfish) I ran:
Code:
truenas# docker --version
Docker version 19.03.13, build 4484c46d9d
truenas# docker ps -a
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
That is the native container service within Debian, which is activated as soon as a pool for Applications is chosen; see here:
Code:
truenas# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
truenas# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
rancher/coredns-coredns 1.6.9 4e797b323460 10 months ago 43.1MB
rancher/klipper-lb v0.1.2 897ce3c5fc8f 20 months ago 6.1MB
rancher/pause 3.1 da86e6ba6ca1 3 years ago 742kB
So it looks like the standard deployment as described on Container runtimes | Kubernetes:
This page lists details for using several common container runtimes with Kubernetes, on Linux:
- CRI-O
- Docker
but Developer's Notes | TrueNAS Documentation Hub says:
The initial implementation of Kubernetes is being done using the K3S software from Rancher (recently acquired by SUSE Linux). This proven software base provides a lightweight Kubernetes implementation with support for the API and ability to cluster instances.
Yet I cannot find anything about using Docker in k3s/README.md at master · k3s-io/k3s (github.com) or in Rancher Docs: Installation Options.
So why are the docker daemon and the containers shown above active?
Btw. Portainer can connect to this docker daemon if Portainer is started from the CLI as described in Deploying on Linux - Documentation (portainer.io). I don't know whether that is intended.
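One way to narrow this down (a sketch; it assumes shell access on SCALE, and the sample command line below is made up for illustration): k3s uses its embedded containerd by default, but when started with the `--docker` flag it drives the host Docker daemon instead, which would explain the daemon and images above. Checking the flags k3s was started with would look roughly like this:

```shell
# Simulated k3s command line (an assumption for illustration; on a real
# box you would read it from `ps -o args= -C k3s`):
k3s_cmdline="/usr/local/bin/k3s server --docker --disable traefik"

# k3s falls back to its embedded containerd unless --docker is present.
case "$k3s_cmdline" in
  *--docker*) echo "k3s is using the host Docker daemon" ;;
  *)          echo "k3s is using embedded containerd" ;;
esac
```

Alternatively, `k3s kubectl get nodes -o wide` prints a CONTAINER-RUNTIME column that names the runtime in use directly.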


NUMBER 2
How far does
This application is an enhanced helm chart which deploys the application to the TrueNAS SCALE Kubernetes cluster.
deviate from the original Helm charts?
I studied​
Here is an example of deploying the Plex docker image:​
Code:
midclt call -job chart.release.create '{"catalog": "OFFICIAL", "train": "test", "item": "ix-chart", "values": {"image": {"repository": "plexinc/pms-docker"}, "portForwardingList": [{"containerPort": 32400, "nodePort": 32400}], "volumes": [{"datasetName": "transcode", "mountPath": "/transcode"}, {"datasetName": "config", "mountPath": "/config"}, {"datasetName": "data", "mountPath": "/data"}], "workloadType": "Deployment", "gpuConfiguration": {"nvidia.com/gpu": 1}}, "version": "2010.0.1", "release_name": "plex"}'
Code:
in Deploying Kubernetes Workloads | Developer's Notes | TrueNAS Documentation Hub, which basically says you have to provide all parameters marked required: true in the catalogue item (docker-container-based) template given in charts/questions.yaml at master · truenas/charts (github.com), plus the parameters from the documentation of the docker container itself.
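As a readability aid (the keys below are copied verbatim from the midclt example above, nothing added), the `values` payload can be pretty-printed to make it easier to match against questions.yaml:

```shell
# The values object from the midclt example, stored and pretty-printed:
values='{"image": {"repository": "plexinc/pms-docker"},
         "portForwardingList": [{"containerPort": 32400, "nodePort": 32400}],
         "volumes": [{"datasetName": "transcode", "mountPath": "/transcode"},
                     {"datasetName": "config", "mountPath": "/config"},
                     {"datasetName": "data", "mountPath": "/data"}],
         "workloadType": "Deployment",
         "gpuConfiguration": {"nvidia.com/gpu": 1}}'
echo "$values" | python3 -m json.tool
```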
Is that correct?​
I have some applications deployed with docker-compose. It looks like I could migrate easily: according to Converting docker-compose to a helm chart? - Stack Overflow, I simply use Kubernetes + Compose = Kompose, done.
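A minimal sketch of that Kompose workflow, assuming the `kompose` binary is installed separately (it does not ship with SCALE), and using a made-up one-service compose file:

```shell
# Work in a scratch directory, then write a tiny compose file to convert
# (hypothetical example service):
cd "$(mktemp -d)"
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF

# kompose reads the compose file and writes Kubernetes manifests
# (deployment and service YAML) into the current directory; left
# commented here since kompose may not be installed:
# kompose convert -f docker-compose.yml
```

The generated manifests are plain Kubernetes YAML, not a SCALE app, so they would still need to be applied or chart-ified by hand.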
How do I then deploy on SCALE? As the template explicitly refers to a single docker container, I only have the option to use Developer's Notes | TrueNAS Documentation Hub, don't I?
Any chance to add SCALE charts manually?​
Custom Applications | Developer's Notes | TrueNAS Documentation Hub describes only single-container deployments.


NUMBER 3

SCALE allows Kubernetes to be disabled. The user will then have access to the native container services within Debian. This will include Docker, LXC (Q1 2021) or any other Kubernetes distribution. There will be a Container Storage Interface (CSI) that can couple the container services with the SCALE storage capabilities. Users can script these capabilities and then use 3rd-party tools like Portainer to manage them. This approach can be used in SCALE 20.10 and later.​
What is meant by "Users can script these capabilities"?
Further up it says:
TrueNAS SCALE has native host support for container workloads. This is under active development and not at BETA or RELEASE quality.
so I wonder if I may use Portainer as orchestration manager until the first RELEASE arrives.
How can I connect to the existing Kubernetes integration without shutting down the existing Applications (Using Applications | Developer's Notes | TrueNAS Documentation Hub)?
It was easy to Add Local Endpoint - Documentation (portainer.io) with regard to docker, but I'm a bit lost with regard to Kubernetes.

NUMBER 4

As SCALE is not production-ready yet, I am wondering whether it is possible to share the cluster node between different (Linux) OSs.
The same hardware would be virtualized, and the kernels of the OSs are at least quite similar, as I would use Debian Server or Ubuntu Server as the productive OS. In addition, I could install the SCALE nightly. At least as long as I don't make use of Docker privileged mode or other elevated functions, it could work in theory.
- Swarm would allow me to store configuration in a file, if I understand Store configuration data using Docker Configs | Docker Documentation correctly, although Swarm is not applicable here
- If I understand correctly, all images, containers etc. are stored on the selected pool, so How do I change the Docker image installation directory? - Open Source Projects / DockerEngine - Docker Forums / How do I change the default docker container location? - Stack Overflow would not be necessary on TrueNAS, only on the production OS.
- only the network configuration could be challenging, but docker-compose takes care of that anyway
- moving the daemon on the production system, as mentioned in dual boot - Can I share docker overlay2 between two host systems? - Ask Ubuntu
But how would I solve that using Kubernetes as the container runtime? Of course, Kubernetes allows using different machines, but at least two need to be online. I could get a little Pi to act as described in High available Kubernetes cluster with single control plane node - DEV Community, which hopefully stores the configuration of all nodes and clusters (I'm getting confused by the wording; see What is a Kubernetes cluster? (redhat.com)).
In addition, the ZFS pool would have to be mounted in the productive OS, but that is well described, e.g. in Install ZFS File System on Ubuntu 18.04 LTS – Linux Hint.
Is there any chance to realise that?
At the very least, it would help to have all persistent data available on all operating systems.
The production system would ensure the availability of the apps, but I could simply switch to SCALE, check the progress, and at least report anything I notice. The final migration could be much easier this way.
 
Last edited:

PackElend

Explorer
Joined
Sep 23, 2020
Messages
60
I almost forgot the docker images, with regard to NUMBER 1:

Code:
truenas# docker images -a
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
quay.io/openebs/zfs-driver           ci                  e1daf8bb6a53        2 months ago        227MB
quay.io/k8scsi/csi-provisioner       v1.6.0              a2ac6956643e        9 months ago        48.3MB
rancher/coredns-coredns              1.6.9               4e797b323460        10 months ago       43.1MB
quay.io/k8scsi/csi-snapshotter       v2.0.1              db8bdb9bb241        12 months ago       46.3MB
quay.io/k8scsi/snapshot-controller   v2.0.1              525889021849        12 months ago       41.4MB
quay.io/k8scsi/csi-resizer           v0.4.0              b9520a8f4c9f        12 months ago       46MB
rancher/klipper-lb                   v0.1.2              897ce3c5fc8f        20 months ago       6.1MB
rancher/pause                        3.1                 da86e6ba6ca1        3 years ago         742kB
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
First of all:
If you number things, PLEASE ask ONE question per number next time. >.<

NUMBER 1

- If you need to ask how Kubernetes (k3s and/or k8s) interfaces with docker, I think that's too much to explain (and kinda off-topic)
- Those containers are running Kubernetes providers, such as storage providers (TL;DR: if you need to ask, you're probably not the audience that needs to understand it)

NUMBER 2
- It uses a different structure than a helm repository; it only uses local dependencies/charts and additional files
- Yes, but the CLI method is mostly just for development and not for general use
- Besides apps, you can just deploy using helm from the CLI

NUMBER 3
- That docker is available from the CLI
- The way SCALE hosts native containers is by turning them into k8s deployments, so that's not fully compatible with Portainer
- Also: with Portainer deployments, you might need to set iptables: true in /etc/docker/daemon.json
- Be aware: Docker natively, Portainer and the like are not officially supported. It's possible and not actively prevented, but don't expect support
- I don't get what you mean by using Portainer without shutting down.
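For reference, a sketch of what that daemon.json tweak might look like. It is written to a scratch file here so it can be sanity-checked; on a real SCALE box the file would be /etc/docker/daemon.json, the docker service needs a restart afterwards, and whether other keys already exist in the file is system-dependent:

```shell
# Write the iptables setting to a scratch copy and validate it as JSON.
cat > daemon.json <<'EOF'
{
  "iptables": true
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON, so this doubles
# as a syntax check before copying the file into place:
python3 -m json.tool daemon.json
```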

NUMBER 4
- What cluster? There is no official cluster support yet.
- A container isn't an image; it's also a throw-away thing. The host OS is rather irrelevant. The question you link wasn't answered because no one cares to answer utterly useless questions that only show a lack of basic understanding of the underlying systems.
- SCALE nightlies might break quite often; don't use them for more than testing.
- SCALE uses K8S, not Swarm. You can set up Swarm, but don't expect it to work or expect support
- The docker system dataset cannot be changed, only the pool it's on can, and there is no reason to do so. If you really want to, in theory you can edit /etc/docker/daemon.json, but if you have to ask, you really shouldn't


ERGO:
I have the feeling you lack the basic understanding of the underlying systems needed to go out DIY'ing them together.
That being said: if you want to go play with digital playdough, it might be better suited to do so on stock Debian, because most of what you mean to do has nothing to do with SCALE (and I read about 4 different containerisation platforms, which are mutually exclusive, in your post, so I have no idea what the heck you actually want).

If you want to learn about the basic building blocks of Docker, K8S, K3S, Swarm, Compose, Kompose etc., maybe start asking there before HEAVILY modifying SCALE and coming back here complaining that things aren't working? Because that's what you did before, when you tried to force SCALE to do something it isn't built to do and it didn't work.
 

PackElend

Explorer
Joined
Sep 23, 2020
Messages
60
thx again for the comprehensive answers :grin:

If you number things, PLEASE ask ONE question per number next time. >.<
that was what I intended, then it escalated quickly :rolleyes:


- If you need to ask how Kubernetes (k3s and/or k8s) interfaces with docker, I think that's too much to explain (and kinda off-topic)
Let me ask differently. In principle, Kubernetes can work with any containerization technology; the docs mention containerd, CRI-O and Docker, but Rancher Docs: K3s - Lightweight Kubernetes mentions only containerd (although it works with docker as well).
I'm wondering what is used as the container runtime here?
I guess docker, as that would explain the containers and images I found.


- Be aware: Docker natively, Portainer and the like are not officially supported. It's possible and not actively prevented, but don't expect support
I'm not expecting that, but at least it should be explained how they can access it. Trying to deploy them would otherwise be trial and error.


- I don't get what you mean by using Portainer without shutting down.
I meant without messing up the existing containerization service.


- SCALE uses K8S, not Swarm. You can set up Swarm, but don't expect it to work or expect support
Referencing Swarm in the spoiler was a mistake; it is not applicable here (yet).
I only wanted to highlight that I can easily convert from Compose (my current container deployment) to Helm charts, and that docker would allow sharing between OSs, although it would only be necessary to share the containers.


A container isn't an image,...
that is clear to me, same for volumes etc.


Because most of what you mean to do has nothing to do with SCALE (and I read about 4 different containerisation platforms, which are mutually exclusive, in your post, so I have no idea what the heck you actually want).
Making use of the underlying features of TrueNAS. Beyond file sharing, I run a dozen-plus apps (Home Assistant, Nextcloud, Wallabag, Joplin, ...), which are containerized. I would love to move them to TrueNAS SCALE.

before HEAVILY modifying SCALE and coming back here complaining that things aren't working?
I did not complain about anything related to my DIY. I'm fully aware that I'm doing things at my own risk.
The error I reported happened BEFORE I did anything against the recommendations. That was probably not separated clearly enough.
 
Last edited:

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
- K3S is stripped-down K8S (simply put), so iX chose to interface with Docker instead of CRI-O
- Well, what iX means is just that it isn't prevented; not supported also means "not documented". But loading docker containers using the normal docker (or docker-compose) procedures should(tm) work out of the box
- For my home server, I set up docker using the apps setup wizard (selecting a pool to store the data on), set iptables to true in /etc/docker/daemon.json and continued to use docker-compose instead
- Actually, you did mention Swarm. That being said: helm charts deployed using helm should be fine. Or wait for the apps support to be expanded (which me, iX and a few others are working our butts off for, towards the February release)
- Helm should work semi-fine for those containers you run. I'm also taking requests currently for creating apps: https://github.com/truecharts/truecharts
Sneak peek of some of my and iX's current progress:
scale apps example 30-01.PNG
 

PackElend

Explorer
Joined
Sep 23, 2020
Messages
60
I'm also taking requests currently for creating apps:
I would like to suggest Firefly III - A free and open source personal finances manager (firefly-iii.org) as it uses Compose.
The third-party helper apps consist only of single Docker containers, so the existing tool can be used.

That gets me to a different question. Can single-container apps created by the existing tool be added to the official library via PRs on GitHub?

There are plenty of container-deployment hardening guides and vulnerability checkers around, some of them of good quality, e.g. Docker Security - OWASP Cheat Sheet Series.
Which recommendations are covered by the implementation in SCALE and which have still to be done by the user?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I would like to suggest Firefly III - A free and open source personal finances manager (firefly-iii.org) as it uses Compose.
The third-party helper apps consist only of single Docker containers, so the existing tool can be used.
I'm not taking requests on the forum.

That gets me to a different question. Can single-container apps, created by the existing tool, added to the official library via PRs on GitHub?
Those are multiple questions:
- Is there an existing tool to build apps?
No.
- Can existing helm charts simply be added to a catalogue via PR?
No, they need to be modified into an app.
- Can existing app-ified helm charts be added to the official catalogue via PR?
Only if iX wants to support them; community catalogues are there for community apps.

There are plenty of container-deployment hardening guides and vulnerability checkers around, some of them of good quality, e.g. Docker Security - OWASP Cheat Sheet Series.
Which recommendations are covered by the implementation in SCALE and which have still to be done by the user?
- You can't simply use docker guides for K8S. Docker hardening != K8S hardening; they are two different workflows with different security issues
- Currently, hardening is not yet part of the product, as it's an ALPHA product.
 

PackElend

Explorer
Joined
Sep 23, 2020
Messages
60
I'm not taking requests on the forum.
How else, a Jira ticket?

- Is there an existing tool to build apps?
I refer to
1612262315286.png


- You can't simply use docker guides for K8S. Docker hardening != K8S hardening; they are two different workflows with different security issues
I know that the same exists for Kubernetes; it is just a generic question whether these kinds of guides still need to be followed.

- Currently, hardening is not yet part of the product, as it's an ALPHA product.
I didn't expect that, but allow me to ask these kinds of things with regard to the RELEASE product.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
there is only

but nothing about how to share them

How shall I

understand? As an invitation to ask for additional apps, or is that only happening iX-internally?
To be clear: this is my PERSONAL project, which features a clear GitHub issue tracker.


Okay, this is a tool to create k8s deployments using just a docker container; it does NOT create a SCALE app.

I know that the same exists for Kubernetes; it is just a generic question whether these kinds of guides still need to be followed.


I didn't expect that, but allow me to ask these kinds of things with regard to the RELEASE product.
It's barely in ALPHA state; no one knows which hardening steps do or do not need to be taken in the RELEASE product.
But additional CLI hardening would not be officially supported, that's clear.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Well, they are all still in active development.
I expect them all to be finished around the release of 21.02 ALPHA(?)/BETA.

If you want to actively help us develop these:
you can manually edit the following file in the NIGHTLY build:
nano /usr/lib/python3/dist-packages/middlewared/plugins/catalogs_linux/update.py

And change it to include the following:

CATALOGS = [
    {
        'label': OFFICIAL_LABEL,
        'repository': 'https://github.com/truenas/charts.git',
        'branch': 'master',
    },
    {
        'label': 'TrueCharts',
        'repository': 'https://github.com/truecharts/truecharts.git',
        'branch': 'dev',
    },
]

Be aware: this is highly hacky and not production-ready.
Most work is currently going into dealing with applications that need multiple ports and ingresses exposed.
 

stavros-k

Patron
Joined
Dec 26, 2020
Messages
231
Well, they are all still in active development.
I expect them all to be finished around the release of 21.02 ALPHA(?)/BETA.

If you want to actively help us develop these:
you can manually edit the following file in the NIGHTLY build:
nano /usr/lib/python3/dist-packages/middlewared/plugins/catalogs_linux/update.py

And change it to include the following:

CATALOGS = [
    {
        'label': OFFICIAL_LABEL,
        'repository': 'https://github.com/truenas/charts.git',
        'branch': 'master',
    },
    {
        'label': 'TrueCharts',
        'repository': 'https://github.com/truecharts/truecharts.git',
        'branch': 'dev',
    },
]

Be aware: this is highly hacky and not production-ready.
Most work is currently going into dealing with applications that need multiple ports and ingresses exposed.
On the new CLI command, how can I add the repository? By default it clones master, but all your charts are on dev!
Thanks!
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
On the new CLI command, how can I add the repository? By default it clones master, but all your charts are on dev!
Thanks!
All our charts will be on master at launch.
I plan to push the first charts to master in a few days, and the rest from dev to master within a week or so.

But if you want to test (and be aware: don't even think about using dev in production or at home, it's just for testing),
try this as a URL:
 

stavros-k

Patron
Joined
Dec 26, 2020
Messages
231
All our charts will be on master at launch.
I plan to push the first charts to master in a few days, and the rest from dev to master within a week or so.

But if you want to test (and be aware: don't even think about using dev in production or at home, it's just for testing),
try this as a URL:
I have a machine just for testing! I can't wait to switch from Unraid to TN.
I tried this link but it won't complete.


Code:
[truenas]> app catalog create repository="https://github.com/truecharts/truecharts/tree/dev" label=TRUECHARTS
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/catalogs_linux/utils.py", line 50, in pull_clone_repository
    repo = clone_repository(repository_uri, destination, depth)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/catalogs_linux/utils.py", line 24, in clone_repository
    return git.Repo.clone_from(repository_uri, destination, env=os.environ.copy(), depth=depth)
  File "/usr/lib/python3/dist-packages/git/repo/base.py", line 1032, in clone_from
    return cls._clone(git, url, to_path, GitCmdObjectDB, progress, multi_options, **kwargs)
  File "/usr/lib/python3/dist-packages/git/repo/base.py", line 973, in _clone
    finalize_process(proc, stderr=stderr)
  File "/usr/lib/python3/dist-packages/git/util.py", line 329, in finalize_process
    proc.wait(**kwargs)
  File "/usr/lib/python3/dist-packages/git/cmd.py", line 408, in wait
    raise GitCommandError(self.args, status, errstr)
git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
  cmdline: git clone -v https://github.com/truecharts/truecharts/tree/dev /tmp/ix-applications/validate_catalogs/github_com_truecharts_truecharts_tree_dev_master
  stderr: 'Cloning into '/tmp/ix-applications/validate_catalogs/github_com_truecharts_truecharts_tree_dev_master'...
fatal: repository 'https://github.com/truecharts/truecharts/tree/dev/' not found
'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 138, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self,
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1220, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/service.py", line 496, in create
    rv = await self.middleware._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1220, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 999, in nf
    return await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/catalogs_linux/update.py", line 93, in do_create
    await self.middleware.call('catalog.update_git_repository', {**data, 'location': path}, True)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1263, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1231, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1135, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3/dist-packages/middlewared/utils/io_thread_pool_executor.py", line 25, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/catalogs_linux/sync_catalogs.py", line 41, in update_git_repository
    return pull_clone_repository(
  File "/usr/lib/python3/dist-packages/middlewared/plugins/catalogs_linux/utils.py", line 55, in pull_clone_repository
    raise CallError(msg)
middlewared.service_exception.CallError: [EFAULT] Failed to clone 'https://github.com/truecharts/truecharts/tree/dev' repository at '/tmp/ix-applications/validate_catalogs/github_com_truecharts_truecharts_tree_dev_master' destination: Cmd('git') failed due to: exit code(128)
  cmdline: git clone -v https://github.com/truecharts/truecharts/tree/dev /tmp/ix-applications/validate_catalogs/github_com_truecharts_truecharts_tree_dev_master
  stderr: 'Cloning into '/tmp/ix-applications/validate_catalogs/github_com_truecharts_truecharts_tree_dev_master'...
fatal: repository 'https://github.com/truecharts/truecharts/tree/dev/' not found


I think that to git clone a different branch you need another flag.
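That matches the traceback above: the failing URL is GitHub's web-UI path (/tree/dev), which is not a git remote. git clone takes the plain repository URL plus a --branch (or -b) flag to pick a branch. A sketch demonstrating the flag on a throwaway local repository instead of GitHub:

```shell
# Work in a scratch directory so nothing is left behind.
cd "$(mktemp -d)"

# Create a source repository with a 'dev' branch.
git init -q srcrepo
git -C srcrepo -c user.email=test@example.com -c user.name=test \
    commit -q --allow-empty -m "initial commit"
git -C srcrepo branch dev

# Clone it checked out at 'dev'; against GitHub the URL would be
# https://github.com/truecharts/truecharts.git plus --branch dev.
git clone -q --branch dev srcrepo clonerepo
git -C clonerepo rev-parse --abbrev-ref HEAD
# prints: dev
```

In the middleware snippet posted earlier, the same thing is expressed as the separate 'branch': 'dev' field next to the plain .git repository URL.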
 