Questions about Kubernetes and updates

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
Hi

I'm pretty happy about SCALE being released, and I've enjoyed putting it through its paces in a VM as a test. I'm almost ready to move from OpenMediaVault to SCALE, but I have a couple of questions.

#1 The developer notes state: "SCALE allows Kubernetes to be disabled. The user will then have access to the native container services within Debian". Does this mean I can go ahead and install kubeadm like I would on a plain Debian OS?

#2 How exactly are updates handled? Is it all apt-based? If I install additional (non-conflicting, of course) software, can I expect it not to be wiped? What about udev rules? I run on a QNAP TVS-682 and it needs a little massaging on the backend to work smoothly.

Both of these boil down to a bigger question: how much freedom do I have on the base OS? I've never used TrueNAS before, and OMV is just a nice UI on top of some Salt and default configs that makes things smoother. I understand that SCALE is a bit more than that, and I welcome it; I'm just wondering what the direction is here.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Welcome Peter. The answer is that you don't get that much freedom on the base OS. Apt-get packages will be wiped on upgrades.

You can do things via a boot script... these will then get run again on each reboot or upgrade.
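As a very rough sketch (untested, and the rule contents are just a placeholder), a post-init script along these lines could re-apply a custom udev rule after each boot or upgrade:

Code:
#!/bin/sh
# Post-init sketch: re-create a custom udev rule after every boot/upgrade.
# The rule file name and match values below are placeholders only.
cat > /etc/udev/rules.d/99-local.rules <<'EOF'
ACTION=="add", SUBSYSTEM=="block", ATTRS{model}=="EXAMPLE", SYMLINK+="example-disk"
EOF
udevadm control --reload-rules
udevadm trigger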

It's a lot easier if you add applications/software as containers or VMs. If there's an application/software that needs to be installed in SCALE, then write it up and suggest a feature if you think many people would like it.
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
Welcome Peter. The answer is that you don't get that much freedom on the base OS. Apt-get packages will be wiped on upgrades.

You can do things via a boot script... these will then get run again on each reboot or upgrade.

It's a lot easier if you add applications/software as containers or VMs. If there's an application/software that needs to be installed in SCALE, then write it up and suggest a feature if you think many people would like it.
Thanks for the reply. How will the quote from #1 work, then? Is this something that will be implemented later in development?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
There are container services native to Debian (e.g., LXC) that can be used without installing Kubernetes. SCALE also includes Docker (can't confirm on a holiday weekend).

If you need Kubernetes, why not use the Kubernetes that's provided?
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
There are container services native to Debian (e.g., LXC) that can be used without installing Kubernetes. SCALE also includes Docker (can't confirm on a holiday weekend).
Oh right, sorry, I meant what is written right after that in the developer notes: "This will include Docker, LXC (Q1 2021) or any other Kubernetes distribution". I can see that SCALE includes apt repos specifically for kubeadm, so I'm hoping there's something planned for that?


If you need Kubernetes, why not use the Kubernetes that's provided?
Because the same developer notes state: "SCALE does not support workloads created manually with kubectl / helm or direct interaction with the Kubernetes API". Which I understand to mean I have to go through SCALE's Applications integration (and rewrite/port my charts).
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I talked to the engineering team and they agreed the notes are a little too protective. Better wording is below:

"SCALE does not officially test and support workloads created manually with kubectl / helm or direct interaction with the Kubernetes API. However, these APIs should function and SCALE software does not deliberately restrict them. Users should test and verify their own workloads. "

So using the APIs we provide is preferred and better tested, but we are open to users doing their own testing.
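As a rough example (untested on my side, and assuming the stock k3s defaults), you can poke at the bundled cluster directly from the SCALE shell:

Code:
# Quick sanity check against the bundled k3s (assumes stock k3s paths).
k3s kubectl get nodes
k3s kubectl get pods -A
# A standalone kubectl or helm can point at the same kubeconfig:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get svc -A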
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
I talked to the engineering team and they agreed the notes are a little too protective. Better wording is below:

"SCALE does not officially test and support workloads created manually with kubectl / helm or direct interaction with the Kubernetes API. However, these APIs should function and SCALE software does not deliberately restrict them. Users should test and verify their own workloads. "

So using the APIs we provide is preferred and better tested, but we are open to users doing their own testing.
OK, that's cool. However, I'm still more interested in the possibility of those different Kubernetes distros, like kubeadm. Is this in the cards at some point?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I'd suggest that if you need a custom Kubernetes instance, it's better to use a VM. The VM will be fenced off and protected from anything SCALE does during an upgrade. You would be wise to wait for a second opinion, but I just can't see how we can test SCALE software well enough to know what will happen to your Kubernetes during an upgrade.
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
Well, at that point what's the point of running SCALE? ;) I just imagined you would offer kubeadm as some kind of service inside of SCALE down the line, like you do now with k3s; at least that's how I interpreted that line in your developer notes. My charts depend on hostPaths, and with a VM I would need workarounds, not to mention needless resource overhead, since I'd have to carve out CPU/RAM dedicated to the k8s VM.
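To be concrete, my charts rely on mounts like this (names and paths here are just examples), which really only make sense when the pods run on the NAS itself:

Code:
# Example only -- the kind of hostPath mount my charts depend on.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: media
      mountPath: /data
  volumes:
  - name: media
    hostPath:
      path: /mnt/tank/media     # a dataset on the NAS itself (example path)
      type: Directory
EOF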

Don't take it the wrong way, I think SCALE is really nice, but it may not fit my needs. Not being able to do some advanced config on the OS side could be worked around (I mainly need special udev rules), but not having kubeadm is a no-go for me.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Hi Peter. We decided to start with K3s, which will eventually (not yet) provide clustering similar to kubeadm and run the same applications.
You are welcome to try other things, but I wanted to warn you where you'd be going off the map of known ski runs. Cheers, Morgan
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
OK, I've gotten as far as playing with the included k3s. It looks like you're setting up k3s in a way that forces all services onto the node's IP. This is unfortunately useless, and baffling, since you could include MetalLB and allow a range of IPs for LoadBalancer-type services.
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
That's the CIDR for ClusterIP-type services. What I'm talking about is the LoadBalancer type, the one that allows you to use all ports, and even different IP addresses than the nodes (think macvlan from Docker). That one will take on the NAS's public IP the way k3s is configured. I get why this is done: it's much easier for the "Applications" workflow to assume everything will be served like a NodePort from the node's IP, just without the high-port limitation of "normal" k8s.
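For clarity, this is the kind of Service I mean (the IP and names are examples only); as things stand it would just end up on the node's IP:

Code:
# Example only: a LoadBalancer Service that should get its own address.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.1.240   # an address that is not the node's own IP
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
EOF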
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
I've read up on k3s, and this is because they're bundling servicelb:

K3s creates a controller that creates a Pod for the service load balancer, which is a Kubernetes object of kind Service.

For each service load balancer, a DaemonSet is created. The DaemonSet creates a pod with the svc prefix on each node.

The Service LB controller listens for other Kubernetes Services. After it finds a Service, it creates a proxy Pod for the service using a DaemonSet on all of the nodes. This Pod becomes a proxy to the other Service, so that for example, requests coming to port 8000 on a node could be routed to your workload on port 8888.

If the Service LB runs on a node that has an external IP, it uses the external IP.

This is what conflicts with MetalLB

Disabling the Service LB
To disable the embedded LB, run the server with the --disable servicelb option.

This is necessary if you wish to run a different LB, such as MetalLB.
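Roughly, the idea would look like this (the flag comes straight from the k3s docs above; the MetalLB snippet uses its legacy ConfigMap format, and the address range is just an example):

Code:
# From the k3s docs: start the server without the embedded servicelb.
# (On SCALE the middleware starts k3s, so this flag would need to be exposed there.)
k3s server --disable servicelb

# Then, after installing MetalLB, hand it a pool of addresses
# (legacy ConfigMap format; the range below is just an example).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF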
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
So, if you can't find a way of disabling it, you'd like that option to disable it in the Kubernetes settings?
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
Absolutely, if it's possible. Again, having the ability to somehow run kubeadm would be preferable, but from what I'm seeing, disabling this additional component might be just good enough for now.

That said, I doubt your engineers will be open to the idea. Is there a way I could get on your Slack and talk to them?
 

inman.turbo

Contributor
Joined
Aug 27, 2019
Messages
149
somehow run kubeadm would be preferable, but from what I'm seeing, disabling this additional component might be just good enough for now.

Same here, I need this feature as well. MetalLB is absolutely critical for any on-prem or colocation deployment of Kubernetes.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
If no-one responds with another solution, I'd suggest making it a feature request via the "report a bug" button at top of the page.

Please then publish the ticket number in this thread so we can track it.
 