App deployment time expectations

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
For me, I've never been able to tolerate the time it takes to create, deploy, stop, start, and update TNS Applications. Instead I'm just running docker compose inside an Ubuntu VM and calling it a day. But given that things don't seem to have improved since the early betas, I was wondering if this was just me.

Should I be seeing it take anywhere from 30 seconds to 2 minutes to "round trip" (stop + start) a single application? Basically any application, small or large? My point of comparison is starting around 20 containers using compose in the Ubuntu VM (TNS hosted), which spin up in under 30 seconds...combined...with one or two downloaded updates from the hub.
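For concreteness, the compose side of that comparison is timed with nothing fancier than this (run from the directory holding the compose file):

# Time a full stop + start of the compose stack
time docker compose stop
time docker compose start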
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,700
You're not comparing apples with apples there... no kubernetes in your VM, so no monitoring, etc.

It's your gear, do with it what you like, but whether it takes a few seconds or a few minutes, stopping and starting apps isn't something that should be concerning if the app is stable and runs for days or weeks thereafter.
 

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
Though I get where you're coming from, the primary value of Kubernetes is scale out, not monitoring. One of the containers in my compose file is Portainer, which gives me roughly the same benefits in terms of monitoring (albeit external to the TNS UI). Obviously it's not apples-to-apples in terms of the tech stack, but is "granny smith"-to-"red delicious" in terms of the (relevant) functionality.

We can agree to disagree on the time value of money when it comes to deploying apps. When you're spending a lot of time plucking through a UI to try to tweak various settings and debug connectivity issues, permissions issues, etc., waiting 2+ minutes vs 10 seconds makes a huge difference. Compounded over 5-10 "try this...damn...try that instead" attempts across a large number of apps, it adds up quickly.

I think just overall the user experience is less than ideal. The UI is pretty and provides a lot of flexibility, but it is complex and cumbersome. Lack of an editable configuration file makes it difficult to share "recipes" and tweak settings (consider instead the VS Code model where a settings file is exposed in addition to the UI). Combine that with the speed of (re)deployment and the frustration level is higher than it needs to be.
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
788
Though I get where you're coming from, the primary value of Kubernetes is scale out, not monitoring. One of the containers in my compose file is Portainer, which gives me roughly the same benefits in terms of monitoring (albeit external to the TNS UI). Obviously it's not apples-to-apples in terms of the tech stack, but is "granny smith"-to-"red delicious" in terms of the (relevant) functionality.

Comparing Docker-Compose, without even its scaling package, with Kubernetes is wrong, regardless of whether you manage it with Portainer or not.
First off: "stop" does not exist in Kubernetes. It's a convenience faked by iX-Systems, which scales the workload down to 0 pods. While similar, it's not actually a stop.
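In practice that "stop" amounts to something like this (the app name here is made up; SCALE puts each app in its own ix-<appname> namespace):

# "Stop" = scale the app's workload to zero replicas; "start" scales it back up
k3s kubectl scale deployment my-app -n ix-my-app --replicas=0
k3s kubectl scale deployment my-app -n ix-my-app --replicas=1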

Docker, including its scaling platform, has a far less sophisticated networking stack and monitoring system. Actually, one of the primary reasons to go for Kubernetes back when Docker was still actively competing with it was the three different stages of health checks and the improved monitoring options.
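Those three stages are the startup, liveness and readiness probes. Roughly, in a pod spec (illustrative only; the image, port and timings are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
    - name: web
      image: nginx:alpine
      ports:
        - containerPort: 80
      startupProbe:        # gates the other probes until the app has started once
        httpGet: {path: /, port: 80}
        failureThreshold: 30
        periodSeconds: 2
      livenessProbe:       # restarts the container if it stops responding
        httpGet: {path: /, port: 80}
        periodSeconds: 10
      readinessProbe:      # pulls it out of the service endpoints while it is failing
        httpGet: {path: /, port: 80}
        periodSeconds: 5
EOF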

You are comparing a "quick-and-dirty home-user Docker stack" with an "enterprise-grade toolkit for deploying scalable containerised applications".
Two different products. In fact, Portainer, the monitoring tool, only started looking into the Kubernetes crowd last year, despite having had requests for ages; it's a whole different beast/world.

We can agree to disagree on the time value of money when it comes to deploying apps. When you're spending a lot of time plucking through a UI to try to tweak various settings and debug connectivity issues, permissions issues, etc., waiting 2+ minutes vs 10 seconds makes a huge difference. Compounded over 5-10 "try this...damn...try that instead" attempts across a large number of apps, it adds up quickly.

I think just overall the user experience is less than ideal. The UI is pretty and provides a lot of flexibility, but it is complex and cumbersome. Lack of an editable configuration file makes it difficult to share "recipes" and tweak settings (consider instead the VS Code model where a settings file is exposed in addition to the UI). Combine that with the speed of (re)deployment and the frustration level is higher than it needs to be.

Redeployment after an edit takes about 30 seconds to 1 minute, and this is pretty normal for Kubernetes across all platforms.

If this is a problem for you, then Kubernetes as a whole might not be for you at all.
 

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
If this is a problem for you, then Kubernetes as a whole might not be for you at all.
On this, at least, we can partly agree.

First, thanks for answering my initial question. Yes, I should be expecting > 30s deploy time to be the norm. That makes me feel better about my systems. Note that I think some of the frustration may be with latency in the UI. I assume it must do periodic polling since I can see a 1/1 ready status well before it appears in the UI.
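For instance, watching the pods directly shows the Ready status without waiting on the UI to refresh (the namespace below is just an example of SCALE's ix-<appname> convention):

k3s kubectl get pods -n ix-my-app -w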

Anyway, I get why Kubernetes was selected for the SCALE product. But, I think much of the whining you get from me and others (hobbyists, IT for SMB, etc.) on here is that we were wanting the Linux-based TrueNAS, not so much the SCALE part. The allure of a single portal to manage a ZFS-based NAS with a KVM hypervisor (vs. Bhyve) and docker-based containers (vs. Jails) is high.

The frustration (at least speaking for myself) is that TNS is so close to the ideal unified solution. I think in large part it just needs time to mature. It's not as solid as TNC for NAS duties (I'm looking at you, weird NFS behaviors). It's not as robust as Proxmox for KVM-based virtualization. And its orchestration visualization and management is not as robust as Rancher or Portainer. Those I see as fixable given time to mature.

The one that just keeps surfacing on these forums is docker & docker-compose. Docker's there and can be used, but we're told not to use it because it could go away at any point. Rather, we're told to run docker in a Linux VM. This introduces another layer of virtualization. Worse, it requires sharing ZFS datasets over NFS (or SMB) because there's no way to pass these filesystems directly to a VM (e.g., VirtFS), which is made even more cumbersome because sharing from host to VM requires some networking backflips. Overhead on overhead. Kills the allure entirely for those of us needing or wanting this.
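That workaround boils down to something like this inside the Docker VM (the host address and dataset paths are placeholders):

# Mount the host's NFS-exported dataset from inside the Ubuntu VM (address/paths made up)
sudo apt install -y nfs-common
sudo mkdir -p /srv/appdata
sudo mount -t nfs 192.168.1.10:/mnt/tank/appdata /srv/appdata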

Concerns and requests regarding docker are always met with replies that amount to "you just don't know what's good for you...suck it up, Buttercup." And that's a shame.

I understand that docker may not always be the containerization runtime for TNS+K3S. But if iX were just to commit to ongoing support for it alongside CRI-O or whatever the Frakti is chosen (see what I did there?), I think they'd make a lot of folks happy. Surfacing it via the UI would be awesome, but simply not threatening to pull the runtime entirely would be sufficient.
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
788
The one that just keeps surfacing on these forums is docker & docker-compose.

With the launch of our Docker-Compose App that is fully compatible with SCALE Apps as a whole, that problem is mostly solved already.
Docker's there and can be used, but we're told not to use it because it could go away at any point.

It's not just that; it essentially requires breaking some of the features iX-Systems has built by design,
because it simply cannot be combined with using the Docker engine for Kubernetes.
Concerns and requests regarding docker are always met with replies that amount to "you just don't know what's good for you...suck it up, Buttercup." And that's a shame.

Never seen that specific reply anywhere.
Just the point being made that the product might not be for the person asking, but that's not the same as saying "you don't know what is good for you".

I understand that docker may not always be the containerization runtime for TNS+K3S. But if iX were just to commit to ongoing support for it alongside CRI-O or whatever the Frakti is chosen (see what I did there?), I think they'd make a lot of folks happy. Surfacing it via the UI would be awesome, but simply not threatening to pull the runtime entirely would be sufficient.

The thing is: iX-Systems never supported it. They never actually officially supported any kind of virtualisation or containerisation from the Shell.

SCALE's target is scale-out and hypercompute; its target never really was home users... though they can get the benefits, with some enterprise-level downsides as well.
 

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
With the launch of our Docker-Compose App that is fully compatible with SCALE Apps as a whole, that problem is mostly solved already.
Well, that's interesting. I'd missed that announcement. That does solve the majority of these specific issues. Especially since it has direct access to /mnt and you can open the terminal and issue docker-compose commands directly. Clever workaround to the core issue. I'll have to check it out.
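Presumably the usual workflow just carries over once you're in that app's shell (the compose file path here is only an example):

docker-compose -f /mnt/tank/compose/docker-compose.yml up -d
docker-compose -f /mnt/tank/compose/docker-compose.yml ps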

Though, as was noted in a number of posts responding to the announcement on Reddit (which I just discovered), it would be great to see the option to disable K3S entirely. Recovering 10%+ of my CPU time and 7+GB of RAM would be nice, since it appears to consume that at idle with no user-installed pods. Wondering if those coredns- and openebs-zfs- pods are system critical or just there to support (non-existent) apps?

The thing is: iX-Systems never supported it. They never actually officially supported any kind of virtualisation or containerisation from the Shell.
SCALE's target is scale-out and hypercompute; its target never really was home users... though they can get the benefits, with some enterprise-level downsides as well.
K3S seems an odd choice in this case versus other K8S implementations. I'll be interested to see what the UI and visualization look like once SCALE's scaling is really in place. And whether it can incorporate non-iX nodes or be a node in non-iX managed clusters. As is (especially without terminal support), I don't know if an enterprise will be using this to roll out any hyperscale, HA app infrastructure. But again, I see that as easily overcome if prioritized.
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
788
Well, that's interesting. I'd missed that announcement. That does solve the majority of these specific issues. Especially since it has direct access to /mnt and you can open the terminal and issue docker-compose commands directly. Clever workaround to the core issue. I'll have to check it out.

You should, there is even a workaround to use the docker-compose command on the host to control it :-D
Though, as was noted in a number of posts responding to the announcement on Reddit (which I just discovered), it would be great to see the option to disable K3S entirely. Recovering 10%+ of my CPU time and 7+GB of RAM would be nice, since it appears to consume that at idle with no user-installed pods.

To be fair: k3s at idle should not use 7GB of RAM, more like 2GB as far as we're aware.
Wondering if those coredns- and openebs-zfs- pods are system critical or just there to support (non-existent) apps?

Well: if you don't install apps, just don't initialise the Apps system (don't select a pool); that prevents k3s from starting.
If you are going to use the Apps system, those pods are always required, as they provide critical k3s/app features.
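You can list exactly which of those system pods are running with the standard command (output will vary by version and configuration):

k3s kubectl get pods -A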
K3S seems an odd choice in this case versus other K8S implementations.

That's a bit odd to say, considering you previously showed you lack enough understanding of Kubernetes to evaluate the need for certain core service pods...

K3S is excellently suited for a project like this, as it's designed to be customised and embedded, and it lacks many of the integrated cloud-provider storage solutions (which are of no use to SCALE).
I'll be interested to see what the UI and visualization look like once SCALE's scaling is really in place.

This is indeed mostly a WIP, considering the general attitude from both us and iX has been that the interface needs work.
Expect it after Bluefin, though; at the earliest, the release after Bluefin.
whether it can incorporate non-iX nodes

That remains to be seen... theoretically, non-iX compute nodes should be possible within the frame of current code development :)
or be a node in non-iX managed clusters.

This is not likely to be the case, as this was never among the current or future development goals and would cause significant issues.
As is (especially without terminal support),

Helm and kubectl work pretty well out-of-the-box, as does the iX middleware.
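For example, both talk to the embedded k3s directly (the kubeconfig path below is the standard k3s location; adjust if yours differs):

export KUBECONFIG=/etc/rancher/k3s/k3s.yaml   # standard k3s kubeconfig location
k3s kubectl get pods -A                       # list everything the cluster is running
helm ls -A                                    # list installed chart releases across namespaces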

I don't know if an enterprise will be using this to roll out any hyperscale, HA app infrastructure. But again, I see that as easily overcome if prioritized.
To be clear: SCALE does not target "hyperscale", that would be silly.
HA != HyperScale.

SCALE targets hyperconverged infrastructure; our estimate of the future target audience is about one rack of nodes in a single location, with off-site backup as well.
 

ptyork

Dabbler
Joined
Jun 23, 2021
Messages
32
To be fair: k3s at idle should not use 7GB of RAM, more like 2GB as far as we're aware.
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1005580 root 20 0 7.1g 0.6g 0.1g S 14.5 1.0 151:29.80 k3s-server

That's with nothing but the core services running. Load balancer, GPU and auto updates are all off in the GUI. I was experimenting with multiple options yesterday; no difference really. But since it restarts every time you save options, those numbers represent a clean start that's never had any pools deployed. FWIW, I trust the memory numbers (7GB virtual, 600MB resident), though obviously the CPU numbers in top/htop are not representative of total CPU usage across 32 cores. Overall average is well south of 1%, but it is 10%+ of all CPU usage when the server is otherwise idle. Anyway, neither is a huge deal, just far more than I'm accustomed to for other solutions.
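For reference, the same numbers are easy to pull without top (plain procps, nothing SCALE-specific):

# Resident (RSS) vs. virtual (VSZ) memory for the k3s server process, in KiB
ps -o pid,rss,vsz,comm -C k3s-server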

That's a bit odd to say, considering you previously showed you lack enough understanding of Kubernetes to evaluate the need for certain core service pods...
K3S is excellently suited for a project like this, as it's designed to be customised and embedded, and it lacks many of the integrated cloud-provider storage solutions (which are of no use to SCALE).
I was basing that purely on Rancher's marketing...that it is intended for IoT, ARM, and embedded & edge systems. As you have astutely noted, I am distinctly lacking in understanding. I'm nearly 50 and a Computer Science professor with a PhD, which likely means I know far less than the average 16-year-old hacker. :) I assumed it was a peer of MicroK8s, which I'd likewise not consider targeted at the enterprise.

What makes K3S "light" and others "heavy"? K3S claims it is just like K8S, only lighter. So are there people that just want heavy for the sake of justifying hardware purchases? :) Surely it isn't just AWS/Azure/etc. integration? Is it fancy GUI and enterprise support? I need to find one of those handy feature comparison charts. Like this but for the big boys. ;)

Helm and kubectl work pretty well out-of-the-box, as does the iX middleware.
My statement on command line was just in response to "They never actually officially supported any kind of virtualisation or containerisation from the Shell." I assumed that you meant iX wasn't going to support configuration and management outside of the GUI. If those tools are going to be officially "sanctioned" then obviously that aspect of my concern doesn't apply.

To be clear: SCALE does not target "hyperscale", that would be silly.
HA != HyperScale.
SCALE targets hyperconverged infrastructure; our estimate of the future target audience is about one rack of nodes in a single location, with off-site backup as well.
Sorry. Bad buzzword usage. Far too much "high", "hyper" and "hybrid" in the world. I did mean HA (as in high-availability), though probably "highly scaled-out", "distributed" and "hybrid-cloud" were better terms for what I was trying to convey by the sloppy use of hyperscale. Cross-cluster sync and failover, management of a couple cloud-hosted nodes/clusters...that kind of thing. I was thinking about some of my mid-sized, decentralized clients (past life as a consultant) with, say, 5-10 field offices each with 50 employees as what I considered likely the target "enterprise" for TNS.

Sounds like you believe single-location scale-out is the current sweet spot. Good to know. I appreciate you entertaining my curiosity and general interest in the product and its options.
 