Inter Pod network communication

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
Should two deployed pods (via two separate docker images) be able to communicate with each other? If so, how?

I've deployed a 'db' image and an application container, each has its own pod, in its own namespace. I see two services listed in kubectl, one for each of the deployed containers, with the 'db' container being named 'db-ix-chart', but the application container is unable to resolve either 'db' or 'db-ix-chart' as a hostname.

How are pods supposed to communicate with each other?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
As explained to you in the TrueCharts Discord:
Either by using a NodePort and connecting to that port on the host, or by using the internal service DNS name.
 

waqarahmed

iXsystems
iXsystems
Joined
Aug 28, 2019
Messages
136
@StanAccy you will also need to append the namespace of the application you want to talk to, so that the Kubernetes internal DNS can resolve the service. For example, if you have an app named "db" and want to consume its service, you would use "db-ix-chart.ix-db" as the hostname, where the last component is the namespace the "db" service lives in.

Otherwise you can use a NodePort, as @ornias pointed out.
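As a concrete sketch of that naming scheme (the service name "db-ix-chart" and namespace "ix-db" are the examples from this thread; "cluster.local" is the default Kubernetes cluster domain and an assumption here):

```shell
# Build the in-cluster DNS names for a service, given its name and namespace.
SERVICE="db-ix-chart"
NAMESPACE="ix-db"

# Short form, resolvable from any pod that uses the cluster DNS:
HOST="${SERVICE}.${NAMESPACE}"

# Fully qualified form, assuming the default "cluster.local" domain:
FQDN="${SERVICE}.${NAMESPACE}.svc.cluster.local"

echo "${HOST}"   # db-ix-chart.ix-db
echo "${FQDN}"   # db-ix-chart.ix-db.svc.cluster.local

# From inside a pod you could then verify resolution with, for example:
#   nslookup "${HOST}"
```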
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
@waqarahmed That's what I referred to with "using the internal service domainName"; we already explained it all to him ;-)
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
@StanAccy you will also need to append the namespace of the application you want to talk to, so that the Kubernetes internal DNS can resolve the service. For example, if you have an app named "db" and want to consume its service, you would use "db-ix-chart.ix-db" as the hostname, where the last component is the namespace the "db" service lives in.

Otherwise you can use a NodePort, as @ornias pointed out.

Thanks for the helpful response - I've tried this and followed the docs I was pointed to, but I cannot get name resolution to work in my situation.

Here's the output of the system shell:

```
truenas# k3s kubectl get service --all-namespaces
NAMESPACE           NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                            AGE
default             kubernetes            ClusterIP   172.17.0.1      <none>        443/TCP                                            30d
kube-system         kube-dns              ClusterIP   172.17.0.10     <none>        53/UDP,53/TCP,9153/TCP                             30d
ix-unifi            unifi-ix-chart        NodePort    172.17.140.9    <none>        8443:9400/TCP                                      30d
ix-plex             plex-tcp-cluster-ip   ClusterIP   172.17.90.166   <none>        32400/TCP,80/TCP,443/TCP,1900/TCP                  30d
ix-plex             plex-udp              ClusterIP   172.17.89.217   <none>        1900/UDP,32410/UDP,32412/UDP,32413/UDP,32414/UDP   30d
ix-home-assistant   home-assistant        NodePort    172.17.60.127   <none>        8123:36007/TCP                                     22d
ix-shinobi          shinobi-ix-chart      NodePort    172.17.119.2    <none>        8080:9080/TCP                                      5h4m
ix-db               db-ix-chart           NodePort    172.17.54.132   <none>        3306:9306/TCP                                      125m
```


My database application is named 'db', in the ix-db namespace, with a service name of db-ix-chart. Based on what you said above and what I read at https://truecharts.org/manual/linking/, I should be able to ping this from my application (shinobi) container:

```
root@shinobi-ix-chart-6bff9f5598-6fzv9:/opt/shinobi# ping db-ix-chart.ix-db
ping: db-ix-chart.ix-db: Name or service not known
```


I can ping google.com from that container, though, so name resolution is working for external hosts, just not for the Kubernetes names.
 

waqarahmed

iXsystems
iXsystems
Joined
Aug 28, 2019
Messages
136
@StanAccy can you please email me a debug of your system at waqar (at) ixsystems.com? It works for me when I try it locally; it's possible some other factor in your network is influencing it. I can say more once I have the debug :)
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
Sure - although after a quick look around the UI, I wasn't sure how to trigger a debug file generation (I found it: the big blue "Save Debug" button). On its way shortly.
 

waqarahmed

iXsystems
iXsystems
Joined
Aug 28, 2019
Messages
136
@StanAccy can you please confirm whether you can reproduce this if you run a pod with "k3s kubectl run busybox --rm -ti --image=busybox -- /bin/sh"? The pod you are currently pinging from is using the host network, which means it won't be able to resolve Kubernetes internal DNS.
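A hedged way to check the host-network point from the command line (the pod and namespace names below are just the ones appearing in this thread; the kubectl call needs a live k3s cluster, so it is composed as a string here rather than executed):

```shell
# Pods with hostNetwork: true inherit the node's /etc/resolv.conf, so
# cluster-internal DNS names will not resolve from inside them.
POD="shinobi-ix-chart-6bff9f5598-6fzv9"
NS="ix-shinobi"

# Inspect the pod spec's hostNetwork flag (run this on the TrueNAS host):
CMD="k3s kubectl get pod ${POD} -n ${NS} -o jsonpath={.spec.hostNetwork}"
echo "${CMD}"

# For comparison, a throwaway pod that is definitely on the cluster network:
#   k3s kubectl run busybox --rm -ti --image=busybox -- /bin/sh
```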
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
@StanAccy can you please confirm whether you can reproduce this if you run a pod with "k3s kubectl run busybox --rm -ti --image=busybox -- /bin/sh"? The pod you are currently pinging from is using the host network, which means it won't be able to resolve Kubernetes internal DNS.
To be clear: I doubt this is TrueNAS-related; we at TrueCharts have multiple users confirming internal DNS between pods and services is working.
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
@StanAccy can you please confirm whether you can reproduce this if you run a pod with "k3s kubectl run busybox --rm -ti --image=busybox -- /bin/sh"? The pod you are currently pinging from is using the host network, which means it won't be able to resolve Kubernetes internal DNS.

It's not set to use host networking - I just double-checked in the UI, and that box is *not* checked. Am I missing something here?

Running the ping from your demo container as listed above does resolve the service/host:

```
/ # ping db-ix-chart.ix-db
PING db-ix-chart.ix-db (172.17.54.132): 56 data bytes
64 bytes from 172.17.54.132: seq=0 ttl=64 time=0.087 ms
```

This same ping command fails from my application container.



[attached screenshot: 1622158093266.png]


I tried adding the kube-dns service to the list of nameservers, but that didn't make a difference.

So you say this container is using the host network - what am I missing here?
 

waqarahmed

iXsystems
iXsystems
Joined
Aug 28, 2019
Messages
136
@StanAccy can you please share the output of "midclt call chart.release.get_instance shinobi | jq"? It's possible it's a UI bug (if there's anything sensitive in the configuration, please mask it). Thank you.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
@StanAccy yes: Get rid of that manually set Nameserver and try again.

Also: it's important to note what version of SCALE you are running.
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
```
truenas# midclt call chart.release.get_instance shinobi | jq
{
  "name": "shinobi",
  "info": {
    "first_deployed": "2021-05-25T09:37:07.763670733-06:00",
    "last_deployed": "2021-05-25T13:45:21.554232157-06:00",
    "deleted": "",
    "description": "Upgrade complete",
    "status": "deployed",
    "notes": "1. Get the application URL by running these commands:\n\n"
  },
  "config": {
    "containerArgs": [],
    "containerCommand": [],
    "containerEnvironmentVariables": [
      {
        "name": "MYSQL_USER",
        "value": "mysql_user"
      },
      {
        "name": "MYSQL_HOST",
        "value": "db.ix-db"
      },
      {
        "name": "MYSQL_DATABASE",
        "value": "ccio"
      },
      {
        "name": "MOTION_HOST",
        "value": "localhost"
      },
      {
        "name": "MOTION_PORT",
        "value": "8080"
      },
      {
        "name": "MYSQL_PORT",
        "value": "9306"
      }
    ],
    "dnsConfig": {
      "nameservers": [
        "172.17.0.10"
      ],
      "searches": []
    },
    "dnsPolicy": "Default",
    "externalInterfaces": [],
    "gpuConfiguration": {},
    "hostNetwork": false,
    "hostPathVolumes": [
      {
        "hostPath": "/mnt/TestPool/containers/shinobi",
        "mountPath": "/config",
        "readOnly": false
      },
      {
        "hostPath": "/mnt/TestPool/containers/shinobi",
        "mountPath": "/opt/shinobi/videos",
        "readOnly": false
      }
    ],
    "image": {
      "pullPolicy": "IfNotPresent",
      "repository": "migoller/shinobidocker",
      "tag": "microservice-debian"
    },
    "ixCertificateAuthorities": {},
    "ixCertificates": {},
    "ixChartContext": {
      "isInstall": false,
      "isUpdate": true,
      "isUpgrade": false,
      "operation": "UPDATE",
      "storageClassName": "ix-storage-class-shinobi",
      "upgradeMetadata": {}
    },
    "ixExternalInterfacesConfiguration": [],
    "ixExternalInterfacesConfigurationNames": [],
    "ixVolumes": [],
    "livenessProbe": null,
    "portForwardingList": [
      {
        "containerPort": 8080,
        "nodePort": 9080,
        "protocol": "TCP"
      }
    ],
    "securityContext": {
      "privileged": false
    },
    "updateStrategy": "RollingUpdate",
    "volumes": [],
    "workloadType": "Deployment"
  },
  "hooks": [
    {
      "name": "shinobi-deployment-test",
      "kind": "Pod",
      "path": "ix-chart/templates/tests/deployment-check.yaml",
      "manifest": "apiVersion: v1\nkind: Pod\nmetadata:\n name: \"shinobi-deployment-test\"\n annotations:\n \"helm.sh/hook\": test\nspec:\n containers:\n - name: shinobi-deployment-test\n image: \"busybox\"\n command:\n - nc\n args:\n - \"-vz\"\n - \"shinobi-ix-chart\"\n - \"80\"\n restartPolicy: Never",
      "events": [
        "test"
      ],
      "last_run": {
        "started_at": "",
        "completed_at": "",
        "phase": ""
      }
    }
  ],
  "version": 8,
  "namespace": "ix-shinobi",
  "chart_metadata": {
    "name": "ix-chart",
    "version": "2104.0.0",
    "description": "A Helm chart for deploying simple workloads Kubernetes",
    "apiVersion": "v2",
    "appVersion": "v1",
    "dependencies": [
      {
        "name": "common",
        "version": "2101.0.0",
        "repository": "file://../../../library/common/2101.0.0",
        "enabled": true
      }
    ],
    "type": "application",
    "latest_chart_version": "2104.0.0",
    "icon": null
  },
  "id": "shinobi",
  "catalog": "OFFICIAL",
  "catalog_train": "charts",
  "path": "/mnt/TestPool/ix-applications/releases/shinobi",
  "dataset": "TestPool/ix-applications/releases/shinobi",
  "status": "ACTIVE",
  "used_ports": [
    {
      "port": 9080,
      "protocol": "TCP"
    }
  ],
  "pod_status": {
    "desired": 1,
    "available": 1
  },
  "update_available": false,
  "human_version": "microservice-debian_2104.0.0",
  "human_latest_version": "microservice-debian_2104.0.0",
  "container_images_update_available": false,
  "portals": {}
}
```
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
@StanAccy yes: Get rid of that manually set Nameserver and try again.

Also: it's important to note what version of SCALE you are running.

The nameserver setting has no impact - I added it as a test to see if it changed anything - it appears to be additive to `/etc/resolv.conf`.

Scale version is TrueNAS-SCALE-21.04-ALPHA.1 - I'm assuming the debug file I sent to @waqarahmed contained that information.
 

waqarahmed

iXsystems
iXsystems
Joined
Aug 28, 2019
Messages
136
@StanAccy can we please schedule a TeamViewer session to debug this? Looking at the configuration, it seems good. If yes, can you please email me days and time slots when you are available (with timezone, please) and we can tackle this? Thank you.
 

StanAccy

Dabbler
Joined
Apr 23, 2021
Messages
20
With the latest TrueNAS-SCALE-21.08-BETA.2, this still isn't working - I'm unable to ping one container from the other, and I've double-checked the UI for these two containers in case a fixed UI bug had been incorrectly reflecting the networking status.
 
Joined
Nov 17, 2021
Messages
4
Same here - I can't reach my service even from the pod itself. I launched a docker image named `nextcloud-postgresql`, but it doesn't resolve `nextcloud-postgresql-it-chart.ix-nextcloud-postgresql`.
 
Joined
Nov 17, 2021
Messages
4
Same here - I can't reach my service even from the pod itself. I launched a docker image named `nextcloud-postgresql`, but it doesn't resolve `nextcloud-postgresql-it-chart.ix-nextcloud-postgresql`.
Sorry for the typo, it's `nextcloud-postgresql-ix-chart.ix-nextcloud-postgresql`.
 

rmr

Dabbler
Joined
Sep 8, 2021
Messages
17
In order to reach one pod's ("B") service from another pod ("A"), there are (at least) two requirements:
1. Pod A must have its DNS resolution set to use the Kubernetes internal DNS.
2. Pod B must have its service exposed. You can check using "k3s kubectl get svc -A -o wide" on the command line.
When creating a pod via "Launch Docker Image", the service gets exposed (as "type: NodePort") only when you set a port under "port forwarding". There does not seem to be a way to expose a service as "ClusterIP" in the GUI for launching docker images.

Once you've done this, you can reach pod B from pod A using the 'internal' port and the DNS name. For example, after launching a docker image named "simpleweb" that serves port 80, forwarded as port 9080, you can reach that service on port 80 using the name "simpleweb-ix-chart.ix-simpleweb" (its full DNS name would be "simpleweb-ix-chart.ix-simpleweb.svc.cluster.local").
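Summing up the scheme above as a small sketch (the "simpleweb" app, container port 80, and node port 9080 are the hypothetical example from this post; "<node-ip>" is a placeholder):

```shell
APP="simpleweb"
SVC="${APP}-ix-chart"   # service name as generated by the ix-chart
NS="ix-${APP}"          # namespace as generated by SCALE

# From another pod on the cluster network: use the DNS name and the
# *internal* (container) port, not the NodePort.
IN_CLUSTER_URL="http://${SVC}.${NS}:80"

# From outside the cluster: use any node's IP and the NodePort.
EXTERNAL_URL="http://<node-ip>:9080"   # <node-ip> is a placeholder

echo "${IN_CLUSTER_URL}"   # http://simpleweb-ix-chart.ix-simpleweb:80

# Confirm the service is actually exposed (requires k3s):
#   k3s kubectl get svc -A -o wide
```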
 