How does routing work within Kubernetes and SCALE?

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Does anyone here understand how Kubernetes routes? Or perhaps how it's meant to route?

I have a container on 172.16.0.y/24 on a SCALE host at 192.168.38.32/24. I have a DNS server on 192.168.38.10. My firewall / internet gateway is 192.168.38.15.
[Attachment: traceroute screenshot]

The traceroute above, from the container to a local DNS server, shows the packet leaving SCALE, hitting the gateway, and then being redirected to the DNS server.

Given that neither the DNS server nor the gateway knows anything about the 172.16.0.y/24 network, unless the packet has a source address of 192.168.38.32:port_number this couldn't work, as no packets would ever return to the container.

As far as I am aware, the 172.16.0.0/24 address range is NAT'd behind the 192.168.38.32 address, so the outgoing packet would create a state-table entry inside K3s and the returning packet would know which container to be directed to. But this means that packet 2 above has a source address of 192.168.38.32:port_number and should go directly to 192.168.38.10, as it's on-net.
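If the NAT description above is right, the on-net claim can be checked mechanically. A minimal sketch using Python's `ipaddress` module, with the addresses from this post:

```python
import ipaddress

lan = ipaddress.ip_network("192.168.38.0/24")      # SCALE host's LAN
pod_net = ipaddress.ip_network("172.16.0.0/24")    # container network

dns = ipaddress.ip_address("192.168.38.10")        # DNS server
node = ipaddress.ip_address("192.168.38.32")       # SCALE host

# After SNAT the packet's source is the node address, which shares a
# subnet with the DNS server -- so it should be delivered directly,
# never via the 192.168.38.15 gateway.
print("NAT'd source and DNS on same subnet:", node in lan and dns in lan)  # True

# Neither the DNS server nor the gateway has a route to the container
# network, which is why an un-NAT'd source could never get replies back.
print("Container network overlaps the LAN:", pod_net.overlaps(lan))  # False
```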

This is clearly not happening.

Anyone care to shed some light on this?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
How does it look on your Apps - Settings - Advanced settings screen (you'll need to do that from the Available Applications page if you're on the current version)?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
[Attachment: Kubernetes advanced settings screenshot]
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Is that NIC the one linked with that IP?

Is the routing making the same sense when read together with the values there? (Or are you saying the routing is contrary to it?)

Also, your attachment of the traceroute isn't visible (at least not for me)... I would try to repeat the same and compare for you if I knew what you were doing exactly. I run a bridged config on my SCALE nodes, so maybe that makes it different.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I'll redo the traceroute
[Attachment: traceroute screenshot]

There

Line 1 is fine - the container - which sends the packet to the 172.16.0.1 gateway.
I think the packets then get NAT'd behind the 192.168.38.32 address of the SCALE NAS, so they now have a source address of 192.168.38.32. However, they are sent to 192.168.38.15 - the gateway - which then has to redirect them to 192.168.38.10, despite 192.168.38.32 being on-net for 192.168.38.10.

The firewall has no idea that 172.16.0.0/16 exists - so it cannot route to it.
My DNS server has no idea that 172.16.0.0/16 exists - so it cannot route to it.
So the source address of the packet leaving the NAS MUST be 192.168.38.32:port_number, otherwise traffic could never return to the container.
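That state table is ordinary source NAT (connection tracking). A toy model of the mapping - the class, port numbers, and method names here are hypothetical, purely to illustrate the mechanism:

```python
# Toy SNAT table: an outbound container connection is rewritten to a
# node-side port, and replies addressed to that port are rewritten
# back to the original container endpoint.
class SnatTable:
    def __init__(self, node_ip):
        self.node_ip = node_ip
        self.next_port = 40000  # arbitrary starting port for the sketch
        self.out = {}   # (src_ip, src_port, dst) -> node-side port
        self.back = {}  # node-side port -> (src_ip, src_port)

    def outbound(self, src_ip, src_port, dst):
        key = (src_ip, src_port, dst)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = (src_ip, src_port)
            self.next_port += 1
        # The packet leaves with the node's address as its source.
        return (self.node_ip, self.out[key])

    def inbound(self, node_port):
        # A reply to the node-side port is directed to the container.
        return self.back[node_port]

nat = SnatTable("192.168.38.32")
src = nat.outbound("172.16.0.5", 53412, "192.168.38.10")
print(src)                 # ('192.168.38.32', 40000)
print(nat.inbound(40000))  # ('172.16.0.5', 53412)
```

The point of the model: once the rewrite has happened, the packet on the wire is indistinguishable from one sent by the node itself, so ordinary LAN routing should apply to it.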

This behaviour, whilst largely irrelevant for DNS traffic, is incorrect, and it would be a problem if the container were a database app with a lot of traffic that I would not want hitting the firewall and then being sent back, again across the firewall cable, into the LAN network (or similar).

I logged a ticket with IX - but they don't seem to see the problem

I spotted the issue due to policy-based routing on the firewall, which prevented the traffic from being sent back to the LAN and was sending it out the WAN side. I have worked around it - but the behaviour must be incorrect.

The container I am using is Heimdall, as whoever built that built it badly (badly is probably the wrong word here) and left enough inside the container to include things like ping, traceroute, etc. - which is great for my purposes.

I tried removing the Route v4 Gateway from Kubernetes Settings - but that doesn't work (it can't be done).
I also tested this on a second SCALE box - it does the same.

Diagram below, if it helps. The dotted lines MUST be the (incorrect) traffic path.
[Attachment: network diagram of the traffic path]
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, so I tested it and got the same result as you (I had my pfSense gateway as the Route V4 Gateway).

However... I discovered by means of simple logic that if I change the setting for the Route V4 Gateway to the IP of the node itself, the routing from kubernetes is now direct to other LAN hosts.

But I think that breaks routing to the internet... even though `sysctl net.ipv4.ip_forward` shows forwarding is enabled, so all apps that want to go outside the LAN for something are somehow broken... I guess you could set up a proxy or whatever to fix that.
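The difference between the two gateway settings follows from ordinary longest-prefix routing: LAN destinations match the connected /24 and are delivered directly, while everything else falls through to the default route - so a default gateway that isn't actually willing to forward breaks internet traffic. A simplified sketch of that lookup (the route table below is an assumption for illustration, not dumped from a real node):

```python
import ipaddress

# Simplified route table for the node: (prefix, next hop); None means
# the destination is on a directly connected network.
routes = [
    (ipaddress.ip_network("192.168.38.0/24"), None),       # connected LAN
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.38.15"),  # default gateway
]

def next_hop(dst):
    dst = ipaddress.ip_address(dst)
    # Longest-prefix match: the most specific matching route wins.
    best = max((net for net, _ in routes if dst in net),
               key=lambda net: net.prefixlen)
    return dict(routes)[best]

print(next_hop("192.168.38.10"))  # None -> delivered directly on the LAN
print(next_hop("8.8.8.8"))        # '192.168.38.15' -> via the gateway
```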

I think I'll keep running with my pfSense in that spot until somebody convinces me there's a better way.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Easy enough to test.
Definitely a bad idea if I want to be able to route to the internet.
OTOH, internal routing on the same network does work.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
I am trying to convince iX that they have a bug - but they don't want to listen.
 