Containers: Minimum node port 9000?

Norumen

Dabbler
Joined
Dec 23, 2020
Messages
18
Hi.
I saw TrueNAS SCALE as a way of migrating all my Docker containers over from an Ubuntu server, and at the same time having a great storage server (now running a 7x2 TB pool).

But I am now testing containers and see that I can't add ports lower than 9000.

I am running multiple containers with ports below 9000; one example is the Unifi controller, which needs several ports below 9000. I am no Docker specialist, so there may be a solution to get around this, but I can't see it.

If I can't run my containers here, I need to scrap my new TrueNAS server and go over to UnRAID or something :(
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,398
According to the Jira ticket and https://oteemo.com/2017/12/12/think-nodeport-kubernetes/, this isn't a limitation of SCALE, but of upstream Kubernetes (k8s). Since the SCALE applications deploy via Helm charts, they inherit the upstream k8s limits. However, I don't see anything preventing you from running a container via Docker outside k8s.
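As a rough sketch (not a tested recipe; I'm assuming the docker CLI is reachable from the SCALE shell, and the image name and dataset path below are just examples), running the Unifi controller straight under Docker would look something like this:

docker run -d --name unifi \
  --restart unless-stopped \
  -p 8080:8080 -p 8443:8443 -p 3478:3478/udp \
  -v /mnt/tank/apps/unifi:/unifi \
  jacobalberty/unifi:latest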
 

Norumen

Dabbler
Joined
Dec 23, 2020
Messages
18
Can I do this and administer the Docker containers via the web GUI? I am using commands to create them today on Ubuntu, but I want a GUI to manage them.

Do I need to do anything special to be able to run docker run commands?
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
Hi.
I saw TrueNAS SCALE as a way of migrating all my Docker containers over from an Ubuntu server, and at the same time having a great storage server (now running a 7x2 TB pool).

But I am now testing containers and see that I can't add ports lower than 9000.

I am running multiple containers with ports below 9000; one example is the Unifi controller, which needs several ports below 9000. I am no Docker specialist, so there may be a solution to get around this, but I can't see it.

If I can't run my containers here, I need to scrap my new TrueNAS server and go over to UnRAID or something :(

This is fine. With the way k3s is set up now, there will be an accompanying service for each deployment, which will in turn spawn a forwarder pod. It will be listening on the host network on the port set up in the service and forward packets from that port to the container. So it will work in the end :)

(The component is called servicelb and is part of k3s. You can look up the docs)
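If you want to see them yourself, something like this from the SCALE shell should list the forwarder pods (assuming the k3s kubectl wrapper is available; the svclb- prefix is what k3s uses for them):

k3s kubectl get pods -A -o wide | grep svclb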
 

Norumen

Dabbler
Joined
Dec 23, 2020
Messages
18
This is fine. With the way k3s is set up now, there will be an accompanying service for each deployment, which will in turn spawn a forwarder pod. It will be listening on the host network on the port set up in the service and forward packets from that port to the container. So it will work in the end :)

(The component is called servicelb and is part of k3s. You can look up the docs)

As a person who isn't very fluent in Kubernetes, k8s, k3s, Docker, networking, etc., I didn't understand very much of your answer. I'm very glad you are trying to help :)

Is this another service I need to set up, or where do I set up the forwarder? Please use this example: I need port 8443 for my Unifi container.
 

Kieeps

Dabbler
Joined
Jun 17, 2018
Messages
30
As a person who isn't very fluent in Kubernetes, k8s, k3s, Docker, networking, etc., I didn't understand very much of your answer. I'm very glad you are trying to help :)

Is this another service I need to set up, or where do I set up the forwarder? Please use this example: I need port 8443 for my Unifi container.

If you set the host port to something like 9001 and the node port to 8443, the 9001 will act as the 8443, so you can use http://unifi-ip:9001 instead... But there are other ports as well that I don't really know how Unifi will handle if changed to the 9000+ range... I know APs use some kind of announce port to connect to the controller. Give it a try: pick some ports above 9000 and bind them to the ports Unifi needs.

I don't really use Unifi gear anymore, so I can't help you test it :-(
 

shadofall

Contributor
Joined
Jun 2, 2020
Messages
100
If you use host networking you can access the port directly as well, i.e. if the port is 5000 you can access it with server-ip:5000. But if you have multiple containers using the same port, then host networking would cause a conflict. The node port is for containers using the internal Kube network, and using server-ip:nodeport would redirect you to the container network ip:container-port. At least I think that's how it's supposed to work. The current UI is very much a work in progress, and if I recall right, it currently requires the node port even if you select host networking. I'm decent with Docker but still trying to wrap my head around Kube.
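One way to sanity-check which node port a given app's service actually got (assuming shell access and the k3s kubectl wrapper) is to list all services and read the PORT(S) column, which shows the port:nodePort pairs:

k3s kubectl get svc -A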
 
Last edited:

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
As a person who isn't very fluent in Kubernetes, k8s, k3s, Docker, networking, etc., I didn't understand very much of your answer. I'm very glad you are trying to help :)

Is this another service I need to set up, or where do I set up the forwarder? Please use this example: I need port 8443 for my Unifi container.
In Kubernetes you abstract the container port from the network-facing port, as you may have more than one copy of a container on a given node, so you define a Kubernetes Service in between, which acts like a load balancer. Your final network path looks like network -> service -> container[random_nodeport]. servicelb handles assigning lower port numbers to services, and does this automatically when it detects a LoadBalancer-type Service inside Kubernetes.

However, for Unifi, in your case you should use host_network and not define any container ports. For its specific case it gets messy due to how Kubernetes handles services (it can be done, since I've written the Unifi Helm chart myself, but you need kludges that I think are unavailable in k3s).
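Just to illustrate the idea, not something you need to type in yourself (on SCALE the chart generates this, and the names and selector below are made up), the kind of LoadBalancer service that servicelb picks up and exposes on the node looks roughly like this:

k3s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: unifi-https
spec:
  type: LoadBalancer
  selector:
    app: unifi
  ports:
    - name: https-gui
      port: 8443
      targetPort: 8443
      protocol: TCP
EOF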
 

Norumen

Dabbler
Joined
Dec 23, 2020
Messages
18
In Kubernetes you abstract the container port from the network-facing port, as you may have more than one copy of a container on a given node, so you define a Kubernetes Service in between, which acts like a load balancer. Your final network path looks like network -> service -> container[random_nodeport]. servicelb handles assigning lower port numbers to services, and does this automatically when it detects a LoadBalancer-type Service inside Kubernetes.

However, for Unifi, in your case you should use host_network and not define any container ports. For its specific case it gets messy due to how Kubernetes handles services (it can be done, since I've written the Unifi Helm chart myself, but you need kludges that I think are unavailable in k3s).

I can't seem to create a container without setting a node port. I am using host networking, and I will not have so many containers that the ports will overlap.

But what you are saying is that it is the container port that matters and that I don't need to think about the node port?
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
I can't seem to create a container without setting a node port. I am using host networking, and I will not have so many containers that the ports will overlap.

But what you are saying is that it is the container port that matters and that I don't need to think about the node port?

NodePorts are part of services, not containers. Think about it: you have 3 copies of an HTTP server (this is what Kubernetes was made for, to scale containers); would each of them have the same NodePort? That wouldn't work. So yes, what you want are the ports inside the container (pod in k8s nomenclature) itself. I quickly googled for some docs, and this was the most promising tutorial I could find that explains it: https://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/. So for Unifi, you'd set the container to network_mode: host and you're done, but now you have to watch out for port collisions manually.

This is usually heavily discouraged in k8s land, but for your specific use case it would be kind of okay, since TrueNAS disallows using a different LB implementation as of now.
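For reference, under plain Docker that host-networking approach is just the following (image name and data path are only examples):

docker run -d --name unifi --network host \
  -v /mnt/tank/apps/unifi:/unifi \
  jacobalberty/unifi:latest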
 
Last edited:

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Having an ingress solves a lot of these issues, but neatly integrating an ingress into the SCALE apps GUI takes some careful work.

Mostly because a good ingress abstracts the outward-facing ports away from the running containers a bit.

I expect a solid implementation of Traefik as ingress to be available next release, including a lot of apps.

But the actual install GUI based on the app (instead of the current hardcoded alpha) was only merged slightly more than a week ago.

But:
NodePorts are indeed not supposed to go below 9000; in fact, using NodePorts is NOT k8s best practice at all.
iX implemented the current apps with just NodePort support because it's just very easy. Like I said before: a solid ingress and solid apps take time...
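To give a flavour of where this is heading (a rough sketch only; the hostname and backend service name are made up, and the GUI would eventually generate this for you), an ingress rule routes traffic arriving on the standard HTTP(S) entry points to a backend service, so the app itself never needs a low NodePort:

k3s kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: unifi-gui
spec:
  rules:
    - host: unifi.example.lan
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: unifi-https
                port:
                  number: 8443
EOF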
 

peter.m

Dabbler
Joined
Jan 1, 2021
Messages
41
Having an ingress solves a lot of these issues, but neatly integrating an ingress into the SCALE apps GUI takes some careful work.

Mostly because a good ingress abstracts the outward-facing ports away from the running containers a bit.

I expect a solid implementation of Traefik as ingress to be available next release, including a lot of apps.

But the actual install GUI based on the app (instead of the current hardcoded alpha) was only merged slightly more than a week ago.

But:
NodePorts are indeed not supposed to go below 9000; in fact, using NodePorts is NOT k8s best practice at all.
iX implemented the current apps with just NodePort support because it's just very easy. Like I said before: a solid ingress and solid apps take time...

Yes, having an ingress solves it for 98% of apps, but that's not the case for Unifi, as it specifically requires 8080, 8443 and 3478/udp for STUN. Not only that, but it needs to know its real routable address to communicate it to end devices properly. It's a real pain to set up properly in k8s, more so than under vanilla Docker, where one can just throw its networking under a macvlan driver and be done with it.
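For comparison, the vanilla Docker macvlan setup I mean is roughly this (interface name, subnet and addresses are placeholders for your own network, and the image is only an example):

docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 unifi-net
docker run -d --name unifi --network unifi-net --ip 192.168.1.50 \
  jacobalberty/unifi:latest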
 

shadofall

Contributor
Joined
Jun 2, 2020
Messages
100
Host networking should take care of you, barring any port conflicts; I just think right now (I haven't tested in recent nightlies) the UI requires the container/node ports to be filled in. If I recall my reading right, by default k8s (or k3s, or both) normally restricts NodePorts to the 30k-32k-ish range; iX actually extended that out to 9000+.
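For the curious, the NodePort range is a kube-apiserver setting; on k3s it can be widened by passing something like the flag below, which is presumably roughly what iX does under the hood (the exact range value is my guess, not confirmed):

k3s server --kube-apiserver-arg=service-node-port-range=9000-65535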

What I'm not clear on myself, and maybe @ornias will shed some light on, is whether the "add external interface" option needs the host networking box checked or not, and whether it's just plumbing a virtual IP on top of the physical interfaces. If so, then attaching the container to the network port with its own static IP would/should work, and would avoid any conflicts.

Kube offers a lot of options, yes, but I think I'm starting to understand it. I'm not saying I'm ready to write a Helm chart or anything, but I am seeing some of the benefits, and I do say I like the way things work a little better than Docker's -p 8080:8081 :P

And sorry if I'm getting my terms mixed up; I just spent all day staring at PuTTY sessions, deploying OSes and clusters and translating Solaris commands to Linux commands for users.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Yes, having an ingress solves it for 98% of apps, but that's not the case for Unifi, as it specifically requires 8080, 8443 and 3478/udp for STUN. Not only that, but it needs to know its real routable address to communicate it to end devices properly. It's a real pain to set up properly in k8s, more so than under vanilla Docker, where one can just throw its networking under a macvlan driver and be done with it.
That's nonsense. I've used Unifi behind Traefik for ages. There exists such a thing as UDP and TCP ingress, and ingress is not limited to certain port ranges.
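As a sketch of what I mean (hedged: the field names follow the Traefik v2 CRDs as I remember them, the "stun" entry point would still have to be declared in Traefik's static config with a :3478/udp address, and the backend service name is made up), a UDP route looks roughly like this:

k3s kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteUDP
metadata:
  name: unifi-stun
spec:
  entryPoints:
    - stun
  routes:
    - services:
        - name: unifi-stun
          port: 3478
EOF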
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
@shadofall Sorry, I can't help you with hostNetworking-related issues. It's not best practice at all, so I don't use it.
 

raskitoma

Dabbler
Joined
Sep 28, 2018
Messages
17
Hi.
I saw TrueNAS SCALE as a way of migrating all my Docker containers over from an Ubuntu server, and at the same time having a great storage server (now running a 7x2 TB pool).

But I am now testing containers and see that I can't add ports lower than 9000.

I am running multiple containers with ports below 9000; one example is the Unifi controller, which needs several ports below 9000. I am no Docker specialist, so there may be a solution to get around this, but I can't see it.

If I can't run my containers here, I need to scrap my new TrueNAS server and go over to UnRAID or something :(
No matter whether this is best practice or not, and I don't want to disrespect anyone here who is very well versed in Kubernetes, but as a simple mortal who wants a simple answer: for me the solution is not to use the host network, but to create your own (bridge). With that you can use the ports already exposed by default by the image you are using. (I assume you're deploying containers for your home/SOHO, a media server perhaps?)
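A minimal sketch of what I mean under plain Docker (the network name and image are only examples, and you still publish with -p the ports the image exposes so devices outside the host can reach them):

docker network create my-bridge
docker run -d --name unifi --network my-bridge \
  -p 8080:8080 -p 8443:8443 -p 3478:3478/udp \
  jacobalberty/unifi:latest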

I hope this helps.
 