Container virtualization and the SCALE (RC-1) reality

Status
Not open for further replies.

jeyare

Dabbler
Joined
Nov 27, 2021
Messages
24
I forgot one more important thing. As part of the test described above, I also ran a source-container vulnerability scan with my daily scanner tool.

scanner engine: anchore/grype ... source link
and here is the outcome, filtered to just Critical and High severity and 2021 vulnerabilities (to be clear):

NAME         INSTALLED  FIXED-IN  VULNERABILITY        SEVERITY
deep-extend  0.4.2      0.5.1     GHSA-hr2v-3952-633q  Critical
json-schema  0.2.3                CVE-2021-3918        Critical
ansi-regex   4.1.0                CVE-2021-3807        High
ansi-regex   3.0.0                CVE-2021-3807        High
through      2.3.8                CVE-2021-29940       Critical
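A scan like the one above can be post-processed in a few lines. This is a hedged sketch: the field paths `matches[].vulnerability.id` / `.severity` are assumptions about grype's JSON output schema and may differ across versions, and the function name is mine:

```python
import json  # used to load the report produced by: grype <image> -o json > report.json

def critical_high_2021(report: dict) -> list[tuple[str, str]]:
    """Keep only Critical/High findings whose vulnerability ID mentions 2021."""
    findings = []
    for match in report.get("matches", []):
        vuln = match.get("vulnerability", {})
        vid, severity = vuln.get("id", ""), vuln.get("severity", "")
        if severity in ("Critical", "High") and "2021" in vid:
            findings.append((vid, severity))
    return findings

# Typical usage (path is a placeholder):
# findings = critical_high_2021(json.load(open("report.json")))
```
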

someone from TrueCharts wrote here:

All containers are scanned and results are public, as is usual for a Helm Repository.
 

jeyare

Dabbler
Joined
Nov 27, 2021
Messages
24
Btw, following my experience from the real world, it would be better to create a power-user group with a defined channel, to achieve more flexible progress on this topic. No bullsh*tting, just fast forward.
This product has really great potential to succeed in the SME/SMB segment. When you understand how the marriage of Docker Swarm and K3s will help you conquer this market, you will do some tuning of the heading:
- Docker for small, fast-to-implement microservices. Power users can help you harden it.
- K3s for the more demanding area.
I have tested both in the current RC1-2 and they work great. Of course you need to stop the current progress and open your thinking to the potential.

Synology fell asleep, and two years of development pushed them away from this goal. A lot of power users are angry. Every smart one moved everything from native packages to containers, to be independent. A proprietary Docker package, no possibility to run Swarm, … Kubernetes is just a fairytale for them.
And independence from the hardware is freedom.
Be careful what repositories you drop into the Apps if you are promoting them in conjunction with iX. Data security should be paramount for a product like SCALE. After all, it's not a system primarily targeted at a small box in some living room.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
AFAIK, it's not possible to run Docker and K3s/K8s on the same node. So we have a design decision to make on whether we pursue one or both. @jeyare Are you suggesting we have the option to disable K3s and just allow native Docker tools?
 

xinfli

Cadet
Joined
Nov 29, 2021
Messages
6
AFAIK, it's not possible to run Docker and K3s/K8s on the same node. So we have a design decision to make on whether we pursue one or both. @jeyare Are you suggesting we have the option to disable K3s and just allow native Docker tools?
If you decide to keep the current design, I'd like you to give users an option to disable K3s and run native Docker tools at our own risk, thanks!
 

NetCobra

Cadet
Joined
Dec 3, 2021
Messages
5
@Patrick M. Hausen
this thread is not about how to run containers on a different platform such as TrueNAS CORE, but I understand your point.
This thread is about the current state of container operation/orchestration in RC-1 of SCALE, how to improve it so it is useful for the target segment of the SCALE product, and how to clearly communicate the current state to prospects who expected something different based on reading the official iX web. That would be helpful for all parties.


@NetCobra
my guide on how to run Docker Swarm in SCALE (mentioned above, and it works) was targeted just at testing SCALE's abilities, for a first touch and comparison with "comparable" solutions. You need to set up an iptables chain and then everything works as expected, including setup of the bridge network, load balancing, automated Portainer agent services, and of course a useful admin dashboard for all the operated containers (running, stopped, unhealthy), plus excellent management of images and volumes. You can orchestrate both platforms, Docker Swarm and k3s, from a single point: Portainer. Tested, works. From Portainer you can set up a Helm chart repository directly (Bitnami, …) for more flexible deployment, which is a clearer source than the existing solution in the Apps. For me it is enough for the tests; we need to wait for the next SCALE stages. I think the gents from iX should try it for inspiration. I'm ready to help them find a more useful final state.
@jeyare Thank you, I will try it :smile:

Thank you for pushing the development of TNS; I hope it gets better.
 

NetCobra

Cadet
Joined
Dec 3, 2021
Messages
5
@NetCobra Run TrueNAS CORE if you want TrueNAS for other reasons (stability, ZFS, snapshots, replications ...), deploy a Debian VM, run docker/whatever inside that VM.

I am also experimenting and watching where this will lead, but for production - even if private/SOHO - use, SCALE is not yet ready in my opinion. OTOH iX never claimed it was. This is a public beta of an entirely new product.
Thanks, but I don't think it's a good solution to start a Debian VM in TNS just to run Docker inside it; it's totally unnecessary complexity.
 

jeyare

Dabbler
Joined
Nov 27, 2021
Messages
24
AFAIK, it's not possible to run Docker and K3s/K8s on the same node. So we have a design decision to make on whether we pursue one or both. @jeyare Are you suggesting we have the option to disable K3s and just allow native Docker tools?
Save time, prepare a Zoom session, and I will show you my test SCALE lab :cool:
 

jeyare

Dabbler
Joined
Nov 27, 2021
Messages
24
I have the second test almost done:
1. installed Portainer as a Helm chart on the TNS node
2. including patching of the NodeSelector
3. running, but the Portainer pod is still in Pending state
I found the reason: the patch doesn't work well, because
Node-Selectors: kubernetes.io/hostname=
is empty in the output of kubectl describe pods -A.
But it's the weekend and I need to spend time on other entertainment.
Be back soon.

Finally: the Portainer agent does its job as expected in my first (described) scenario, and I can manage both (Swarm and k3s).
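The empty-selector check described above can be automated. A minimal sketch over the parsed JSON from `k3s kubectl get pods -A -o json` (the field paths `spec.nodeSelector` and `metadata.name` are standard pod-spec fields; the helper name is mine):

```python
def pods_with_empty_hostname_selector(pods: dict) -> list[str]:
    """Given parsed JSON from `k3s kubectl get pods -A -o json`, return the
    names of pods whose kubernetes.io/hostname nodeSelector value is empty."""
    flagged = []
    for pod in pods.get("items", []):
        selector = pod.get("spec", {}).get("nodeSelector") or {}
        if selector.get("kubernetes.io/hostname") == "":
            flagged.append(pod.get("metadata", {}).get("name", "<unnamed>"))
    return flagged
```

A pod flagged by this check will sit in Pending forever, since no node can match an empty hostname label.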
 

HarryMuscle

Contributor
Joined
Nov 15, 2021
Messages
161
I have the second test almost done:
1. installed Portainer as a Helm chart on the TNS node
2. including patching of the NodeSelector
3. running, but the Portainer pod is still in Pending state
I found the reason: the patch doesn't work well, because
Node-Selectors: kubernetes.io/hostname=
is empty in the output of kubectl describe pods -A.
But it's the weekend and I need to spend time on other entertainment.
Be back soon.

Finally: the Portainer agent does its job as expected in my first (described) scenario, and I can manage both (Swarm and k3s).
Could you share the command you ran to install a Helm chart on TNS (as you mention in point 1)?

Thanks,
Harry
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
788
Unlike haters, I also offer recommendations on how to improve it.
To be quite frank, most of what is said here does not differ much from the other 3 or 4 threads of people explaining that they want direct Docker/Kubernetes access.

But compliments where compliments are due: your feedback on the container overview is spot-on :)
Adding to that, the overview isn't the only problem with the container subsystem in SCALE:
- SCALE does not automatically prune containers
- Containers create ZFS datasets and snapshots which slow down the system a lot due to the lack of pruning.


I forgot one more important thing. As part of the test described above, I also ran a source-container vulnerability scan with my daily scanner tool.

We don't build (most of) the containers. We do have plans for a future where we take a more "active" stance on good containers, but currently we don't actively build most of the containers ourselves. We do aim to use industry-standard sources where possible (like LinuxServer.io or Bitnami). In cases where we do not, we are very open to switching containers to ones made by people with a more thorough focus on security.

Simply put: if you see a case where you would replace a container with another because it is more secure, please do send us a heads-up. It's fine to use the security contact information for that as well.

We never said all the Apps we provide are secure, nor do we advise people to fully trust our judgement. We've always been quite open about App security, our view on security, and our opinion that most containers available are complete and utter garbage (from a security standpoint).

Hence we try to enable as many Kubernetes security precautions as we can by default. This also means we are painfully aware of how many of the containers we use do not comply with our ideal standards.

We are actually one of the few Helm chart repositories out there that takes such a clear, proactive stance on security.


someone from TrueCharts wrote here:


We do agree that our security information was splintered and not readily (enough) available. That feedback was somewhat valid, but we definitely do not agree with the premise that our project has mediocre security practices.

As you do not seem to actively follow our project, it might be relevant to note that we just spent a complete weekend(!) and about 36+ man-hours building a customised security scanning stack to add frequently updated security scans of both(!) the Helm charts and the containers directly to our documentation (instead of external resources). We also went ahead and added a list of containers there as well.

On top of that, we included some changes that allow us to more easily either switch to our own Docker containers or actively patch existing Docker containers. This also adds precautions against Docker rate limits and ghcr downtime; the rework should be done in about a day.

We're a young project, and polishing things simply takes time.
That's also why we do not feel our project will be SME/SMB-ready for the first release of SCALE; nor is SCALE, as also stated by @Patrick M. Hausen. In the current state of TrueCharts, even SOHO use of TrueCharts is not advised (outside of testing).

We expect a more "RC" state of TrueCharts in about a week or two.
 

aussiejuggalo

Explorer
Joined
Apr 26, 2016
Messages
50
Are you suggesting we have the option to disable K3s and just allow native Docker tools?

Thought I'd chime in here and say, from a home user's perspective, that would make life much easier. If we could access Docker directly and use Portainer instead of Kubernetes, it'd make SCALE pretty much the go-to for home use.

I know this is designed for enterprise, but because it's free and has a rock-solid ZFS implementation, a hell of a lot of home users use FreeNAS and TrueNAS CORE. The hype around SCALE has been massive because it's built on Linux and runs Docker, which opens up a lot of possibilities for a lot of us. Kubernetes seems to be a major stumbling point, though, even with the TrueCharts catalog (which does give us a little more control over containers and makes things a little easier): because we either don't know Kubernetes and now have to learn it, or can't understand how it's been implemented in SCALE, it makes things harder.

If we could have an option properly supported by iX to disable K3s and just access Docker without janky hacks, that'd be great; even better would be a choice between K3s and native Docker with Portainer :grin:. If I'm understanding what I've been reading, though, this was never the plan, and K3s is pretty heavily coded into SCALE, so implementing something to disable it could be a massive pain in the ass, and future updates to systems with K3s disabled could also be a hassle and cause problems. But if it were possible, it'd be a good option to give users in general.

Personally, I use my NAS pretty much only for Plex, so I can run either CORE or SCALE. I'm probably going to go with CORE: I need to redo my pool setup pretty soon, so I'm just going to start from scratch and use danb35's Plex jail script (never liked the premade plugins, sorry), then run Alpine in a VM to do my Docker/Portainer stuff. But honestly, I'd rather run SCALE with native Docker/Portainer and forgo the VM, because it would make things simpler and put less unnecessary stress on the system. HW transcoding would also be much easier in SCALE because it's Linux, but I personally can't deal with Kubernetes and its containers; it's too restrictive and just gives me headaches.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Personally, I use my NAS pretty much only for Plex, so I can run either CORE or SCALE. I'm probably going to go with CORE: I need to redo my pool setup pretty soon, so I'm just going to start from scratch and use danb35's Plex jail script (never liked the premade plugins, sorry), then run Alpine in a VM to do my Docker/Portainer stuff. But honestly, I'd rather run SCALE with native Docker/Portainer and forgo the VM, because it would make things simpler and put less unnecessary stress on the system. HW transcoding would also be much easier in SCALE because it's Linux, but I personally can't deal with Kubernetes and its containers; it's too restrictive and just gives me headaches.

We'd be keen to understand any difficulties with just the Plex app, if that is what you need. The primary goal of the BETA and RC releases is to iron out issues with current versions before we get to whole new features in the next major version.
 

jeyare

Dabbler
Joined
Nov 27, 2021
Messages
24
Gents, after the morning coffee session I have completed my last target within the SCALE RC:

Direct connection to the k3s master node with Portainer, with full control.

What this means:
- full and comfortable control of all k3s aspects
- you can definitely forget about the Apps GUI, which in its current stage is outside the useful range
- no need to spend time tuning the GUI; that is a waste of time when something excellent already exists.

How to do it:
1. Use YAML manifest deployment & expose via NodePort.
2. Download the original YAML file from the official source:
https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
3. Get the name of the master node from your SCALE RC:
Code:
k3s kubectl get nodes --show-labels

which is 'ix-truenas' in the default setup.
4. Edit the YAML file (# represents a line number in the YAML file):
#115 nodeSelector:
#116 {}
to:
#115 nodeSelector:
#116 kubernetes.io/hostname: <node label from step 3>
(keep the hostname on its own indented line under nodeSelector; an inline 'nodeSelector: kubernetes.io/hostname: <value>' is not valid YAML)
Save the file, e.g. as /tmp/portainer.yaml
5. Deployment:
Code:
k3s kubectl apply -f /tmp/portainer.yaml

you will get:
namespace/portainer created
serviceaccount/portainer-sa-clusteradmin created
persistentvolumeclaim/portainer created
clusterrolebinding.rbac.authorization.k8s.io/portainer created
service/portainer created
and this error:
error: error parsing /tmp/portainer.yaml: error converting YAML to JSON: yaml: line 26: mapping values are not allowed in this context
I don't know why, because the content of that line was untouched by me (a new ticket for iX); the line in question is:
volume.alpha.kubernetes.io/storage-class: "generic"
(Note: "mapping values are not allowed in this context" is exactly what YAML parsers report for an inline mapping such as nodeSelector: kubernetes.io/hostname: <value>, and the reported line number can be misleading in multi-document manifests, so a malformed step-4 edit is the likely cause.)
I got stuck here for a while. So, time warp to the next step (an explanation is in the bottom line of this post):
just continue with the official Portainer deployment manifest (which doesn't contain the master node name):
Code:
k3s kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml

you will get:
namespace/portainer unchanged
serviceaccount/portainer-sa-clusteradmin unchanged
persistentvolumeclaim/portainer unchanged
clusterrolebinding.rbac.authorization.k8s.io/portainer unchanged
service/portainer unchanged
deployment.apps/portainer created
here is the magic (last row):
deployment.apps/portainer created
and when you use the next command as proof:
Code:
k3s kubectl get pods --all-namespaces

you will get:
NAMESPACE NAME READY STATUS RESTARTS AGE
portainer portainer-dcd599f8f-6gkl6 1/1 Running 3 1m
(the list is filtered; I already have several pods there)

6. Open your browser and go to:
http://ip:port
where the IP is:
the TrueNAS SCALE host IP exposed to the LAN,
or an FQDN per your setup (I have an Nginx reverse proxy in my existing infra)
and the port is:
30779 for HTTPS
30777 for HTTP
(with a reverse proxy like mine, there is another way, without the port number)
Define your admin user/password and ENJOY!
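The manual edit in step 4 can also be scripted. This is a minimal, hedged sketch (the function name is mine, and rather than assuming the upstream manifest's exact indentation, the helper reuses whatever indentation the empty `{}` placeholder has, so the resulting YAML stays valid):

```python
import re

def pin_node_selector(manifest: str, hostname: str) -> str:
    """Replace an empty nodeSelector placeholder ({}) with a hostname selector,
    keeping the placeholder's own indentation so the YAML mapping stays nested."""
    pattern = re.compile(r"(nodeSelector:\n)(\s*)\{\}")
    repl = lambda m: f"{m.group(1)}{m.group(2)}kubernetes.io/hostname: {hostname}"
    return pattern.sub(repl, manifest)

# Usage (hostname from step 3):
# patched = pin_node_selector(open("/tmp/portainer.yaml").read(), "ix-truenas")
# open("/tmp/portainer.yaml", "w").write(patched)
```
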

-------------------------------------------------------------
Bottom line
This was one of the test scenarios where I tried to find a solution to run Portainer in a SCALE pod.
When I applied only the original YAML from Portainer, I got a Portainer pod, but stuck in the Pending (scheduling) stage. I found the reason:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 93m default-scheduler 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.
and the NodeSelector for 'kubernetes.io/hostname=' contained an empty value ... of course the people from Portainer can't know the value that needs to be used in the YAML.
Of course, in the Portainer deployment documentation you can find a patch for it:
Code:
kubectl patch deployments -n portainer portainer -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n portainer -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of portainer pod; exit 1)

which doesn't work even when you correctly change both 'kubectl' commands in the script to 'k3s kubectl' ... I would like to know the reason for 'k3s kubectl' instead of the standard 'kubectl' command convention in SCALE.

The Portainer pod was still unavailable.
Hm.
I checked the taint to tell the node that it's allowed to run pods:
Code:
kubectl taint nodes --all node-role.kubernetes.io/master-

I got an error:
error: taint "node-role.kubernetes.io/master" not found
So I checked the taints:
Code:
k3s kubectl describe node ix-truenas | grep Taints

Taints: <none>
The reason this scenario was tested is the official Kubernetes documentation:
By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, for example for a single-machine Kubernetes cluster for development, run:
Code:
kubectl taint nodes --all node-role.kubernetes.io/master-
Don't do it.

Some screenshots were attached (four, omitted here); take them just as a taster.

So my final state is now:

I have one new host for container operation, TrueNAS SCALE RC-1-2:
1. I can run a fully managed Docker Swarm there with all its added value, thanks to Portainer CE hosted on another host (also on TrueNAS when I need it).
2. A fully managed k3s node, thanks to Portainer CE running on the SCALE node.
I can deploy any Docker container from Docker Hub, and then I have FULL CONTROL over it; no one can push unclear container sources on me.
I can deploy any chart, and then I have FULL CONTROL over it; no one can push unclear chart sources with predefined user/password, etc.

Why is this not defined as the target architecture of SCALE, as a kind of professional solution? It works.
 

jeyare

Dabbler
Joined
Nov 27, 2021
Messages
24
But compliments where compliments are due: your feedback on the container overview is spot-on :)
Adding to that, the overview isn't the only problem with the container subsystem in SCALE:
- SCALE does not automatically prune containers
Already solved by Portainer (see above).

As you do not seem to actively follow our project, it might be relevant to note that we just spent a complete weekend(!) and about 36+ man-hours building a customised security scanning stack to add frequently updated security scans of both(!) the Helm charts and the containers directly to our documentation (instead of external resources).
Understood. But the value of the man-hours spent is not equivalent to quality in my world, because in the end you have containers with Critical/High vulnerabilities in your repo. Then something is wrong with the attitude.

Again, don't take it personally. Be pro-grade.
Providing such content is dangerous for the following reasons:
- SoHo users mostly use such containers
- often, such users operate their environment for purposes other than SoHo, for example connecting to a corporate environment (even more so during COVID times)
- vulnerable apps help the darknet penetrate exactly such "little guarded" environments, so they can get to the more interesting target: corporate.

Spreading vulnerable containers just helps bring about such events. At the same time, it would be enough to deploy the environment for automated container testing that I mentioned above. It takes me at most one hour to test your entire current repo (links above).
Just a sufficient amount of responsibility is needed: leave out containers that have critical vulnerabilities, for the reason described above.
Never provide such containers in the repo. Just use the scan report attached to the chart, and use a list of containers forbidden due to discovered vulnerabilities. Don't waste time putting them into the repo.
Excuses are useless. You need to act responsibly.

If not, you will support the mess of ransomware and similar events. Think about it.

That's also why we do not feel our project will be SME/SMB-ready for the first release of SCALE; nor is SCALE, as also stated by @Patrick M. Hausen. In the current state of TrueCharts, even SOHO use of TrueCharts is not advised (outside of testing).
no comment
I showed that it works easily; follow my approach. I spent a few days on it, not full time.

As you can see from my guide, Docker can be used on TrueNAS SCALE more comfortably than on TrueNAS CORE. Using Docker on CORE is clumsy and complicated: create a VM with Ubuntu, then install Docker there, then install containers there. OMG, it's 2021. So I can't accept @Patrick M. Hausen's point either, even for SoHo users.
 

parallax

Cadet
Joined
Dec 5, 2021
Messages
2
Joined to say I became a TrueNAS user yesterday and within hours was deeply frustrated by the gap between the expectation set and the reality, exactly as jeyare is. As a long-time product manager I can definitely understand (a) setting an aspiration for a product which may be some way off from the current release and (b) rough edges in betas and release candidates, but even so, I'm extremely disappointed at this point.

I was planning for TrueNAS to simplify the home-facing portions of my home lab: storage of video and files, all the media acquisition and organising, backups, sync to cloud, home automation, and the like. This comprised about 20 containers in Docker running in an LXC which also did SMB and NFS, running in Proxmox in my lab. So obviously moving to TrueNAS SCALE, which offered the dream of 90+% of the functionality I was maintaining manually in a single friendly package better tuned to running at home, was very attractive.

Apart from some installation struggles (one example: the documentation on setting the IP address alias in the initial setup doesn't say you need to specify the subnet mask, although the example below it does; without it, the mask defaults to /30 and you can't connect to the GUI), it was great to get it up and running after an hour or so, and importing my existing ZFS disks was super easy. But then the misery of trying to get my Docker containers restored consumed much of my Sunday afternoon and evening, when I definitely had better things to do. So at 11pm I blew away the whole TrueNAS environment and installed vanilla Debian just to get back to a working state for home.

I understand k8s is sexy and exciting and new(ish). I get that you want it on your CVs. I run a 5-server cluster in my home lab because I need it for work. But if your target market is primarily SME users (and even if it isn't us advanced home users, since we don't pay you money), then surely running k8s on a file server/NAS has zero benefit, or indeed negative benefit, for the vast majority of your customers. The benefit of k8s is in scaling, availability, and running sophisticated environments, but surely TrueNAS is not going to be deployed in a cluster of servers in your target market like Proxmox or ESXi, and the majority of apps that make sense in an SME file-serving environment (Nextcloud, say) do not use or benefit from k8s capabilities. Some of them are downright problematic because of it: if I want to run a DHCP server in Pihole/AdGuard/etc., if I need inter-pod communication to "just work," or if I want to run, say, Rancher to manage a larger environment. You have made something which is too restrictive to easily use for basic tasks, and yet also too difficult for running anything even moderately sophisticated. Similarly, I see you (iX) place a high value on Plex working, even though it is definitely not something 99% of your SME customers want or need. Let's call Plex on k8s hacky at best, especially compared to Docker: it again really wants host networking (especially if, like me, you are behind CGNAT) and storage affinity, and it doesn't really work to put your media in the typical k8s PV-claim-style structure.

Even leaving aside the technical issues, I guess I just don't understand who you think your target customer is or how what you're building fits their needs. Even for me (and I wouldn't call myself an expert by any means, but I do run a reasonable number of k8s workloads at home, which is probably more than the majority of people), I find getting SCALE running beyond the core file-serving functionality much, much harder than Docker, particularly vs. Portainer, and there's just no comparison with something like Rancher on k8s. Are you aiming to do even as good a job as Portainer or Rancher at managing containers? Are you making a more user-friendly Harvester (also in beta and already much easier to use)? Are you going to do as well as Proxmox or VMware (love 'em or hate 'em) or Harvester at VM management? Why get into curating 3rd-party apps? This is a nightmare. I get that you want to do more than napp-it, but what? And why?

So, in sum, what is TrueNAS's plan here? The core file-serving capability is excellent, nicely presented, and very usable. You already use third-party projects as part of TrueNAS, so why not embrace that and just leverage Portainer for Docker (and potentially abandon hosting VMs; realistically, is this a high priority for TrueNAS customers outside the enthusiast community)? Otherwise, in terms of usability, you're making TrueNAS SCALE a prettier but functionally worse OMV.
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
788
Understood. But the value of the man-hours spent is not equivalent to quality in my world, because in the end you have containers with Critical/High vulnerabilities in your repo. Then something is wrong with the attitude.

The existence of a CVE does not equal an attack vector; it depends on the use case.

That being said: we do agree that there should not be High or Critical vulnerabilities in the containers, and we do have a long-term roadmap to deal with that where possible. Projects simply take time to mature.

The thing is: you still don't seem to understand what our project is about:
building a GUI for applications.

In the beginning we actually started as a curated catalog, and due to community request we let go of curation. We explicitly have users requesting that they be able to decide for themselves whether to use deprecated apps or apps with bad containers.

It's the user's responsibility to decide what they want to install using our GUIs, not ours.

We agree we have a long path to walk as a project. We're very open about all the risks associated with that fact, and that's all we can reasonably do.
Everyone is free to start their own repository with curated, guaranteed-to-be-secure Apps.

Let's rephrase our previous statement in that regard:
we take security seriously for OUR PART of the installation process.
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
788
apps that make sense in a SME file serving environment (Nextcloud, say) do not use or benefit from k8s capabilities.

Actually, Nextcloud is one of the examples where using Helm and K8s, combined with iX's easy rollback, really shines. It actually uses a relatively large number of k8s-native features; at least ours does.

Some of them are downright problematic because of it, like if I wanted to run a DHCP server in Pihole/Adguard/etc,

In that case, it's one checkbox: Host Networking.
It's tricky, but this specific use case is one of those where it actually needs to be used.

if I need inter-Pod communication to "just work,"

Our inter-pod communication has "just worked" for about 6 months now. We even offer a generator to auto-generate the names if you have trouble with that.

You have made something which is too restrictive to easily use for basic tasks, and yet also too difficult to run anything even moderately sophisticated.

Maybe because the first release in February isn't done yet?
A large amount of the feature set is delayed to the second release of SCALE. We even had to push hard to get something as simple as a load-balancer disable toggle included.
Similarly I see you (IX) place a high value on Plex working, even though it is definitely not something 99% of your SME customers want or need.

Plex is actually a relatively easy app to deploy, hence it was trivial to add, and many users want to use it. That's enough of a reason to add it, of course: an "easy PoC".
it again really wants host networking (especially if, like me, you are behind CGNAT)

It does not need hostNetworking, except in a few niche cases like some DVR setups. It's definitely not needed for CGNAT; Plex works fine behind multiple layers of NAT, and one more or less doesn't change much.

storage affinity, and it doesn't really work to put your media in the typical k8s PV-claim style structure.

hostPath works fine, though.
Why get into curating 3rd party apps, this is a nightmare?

It's not curated; it's an open system based on Helm charts,
just like VMware's Bitnami KubeApps.


So in sum what is TrueNAS's plan here?

As @Patrick M. Hausen already explained: it's simply not done yet with the first release. Not for SME, and not really for SOHO either.

The first release is a "Stable Technical Preview". Large features are not even close to done yet:
- Backup and restore of Apps in the GUI
- k8s clustering
- Using clustered storage within Kubernetes
- Text fields in the GUI
- YAML text fields in the GUI
- Build updating (if feasible)
- Automatic updating (if feasible)
(and that does not include all the potential QoL issues that need to be looked at once those are done)

A good comparison for the future GUI would be KubeApps by Bitnami, which is the same + one or two extra features.

Comparing a product built in 1.5 years(!) by a small team with big players like Rancher and Portainer, and expecting the same level of functionality in the first release, is completely unreasonable.

That's the problem with a certain group of users here:
"I want it all, I want it now and I want it my way"
 