These pods have hostnames, e.g.:
From pod drill-staging-75cddd789-kbzsq I cannot resolve the hostname of drill-staging-75cddd789-amsrj, and vice versa. Resolving a pod's own hostname works.
I tried setting various dnsPolicy values:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "app.name" . }}
  namespace: {{ .Values.global.namespace }}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ include "app.name" . }}
  template:
    metadata:
      labels:
        app: {{ include "app.name" . }}
    spec:
      containers:
        - name: {{ include "app.name" . }}
          image: ...
          resources:
            ...
          ports:
            ...
          imagePullPolicy: Always
      restartPolicy: Always
Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports.
A Pod can communicate with another Pod by directly addressing its IP address, but the recommended way is to use Services. A Service is a set of Pods, which can be reached by a single, fixed DNS name or IP address. In reality, most applications on Kubernetes use Services as a way to communicate with each other.
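For example, a regular (non-headless) Service for the Deployment in the question might look like the following sketch; the port numbers are placeholders, and the Helm template expressions are taken from the Deployment above:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "app.name" . }}
  namespace: {{ .Values.global.namespace }}
spec:
  selector:
    app: {{ include "app.name" . }}
  ports:
    - port: 80
      targetPort: 8080

Such a Service gets a stable DNS name (the Service name, resolvable within the namespace) and load-balances across the matching Pods, which is usually what you want instead of addressing individual Pods.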
The pods that share the same IP address appear to be on the same node. The Kubernetes documentation says that "every pod gets its own IP address" (https://kubernetes.io/docs/concepts/cluster-administration/networking/).
Pod's hostname and subdomain fields: when a Pod is created, its hostname defaults to the Pod's metadata.name value. The Pod spec has an optional hostname field, which can be used to specify a different hostname.
Normally, only Services get DNS names, not Pods. So, by default, you can't refer to another Pod directly by a domain name, only by its IP address.
Pods get DNS names only under certain conditions that involve a headless Service, as explained in the documentation. In particular, the conditions are:
- The hostname field of the Pods must be set
- The subdomain field of the Pods must be set
- There must be a headless Service with the same name as the subdomain field of the Pods
In this case, each Pod gets a fully-qualified domain name of the following form:
my-hostname.my-subdomain.default.svc.cluster.local
where my-hostname is the hostname field of the Pod and my-subdomain is the subdomain field of the Pod.
Note: the DNS name is based on the "hostname" of the Pod and not the "name" of the Pod.
You can test this with the following setup:
apiVersion: v1
kind: Service
metadata:
  name: my-subdomain
spec:
  selector:
    name: my-test
  clusterIP: None
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-1
  labels:
    name: my-test
spec:
  hostname: my-hostname-1
  subdomain: my-subdomain
  containers:
    - image: weibeld/ubuntu-networking
      command: [sleep, "3600"]
      name: ubuntu-networking
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-2
  labels:
    name: my-test
spec:
  hostname: my-hostname-2
  subdomain: my-subdomain
  containers:
    - image: weibeld/ubuntu-networking
      command: [sleep, "3600"]
      name: ubuntu-networking
After applying this, you can exec into one of the Pods:
kubectl exec -ti my-pod-1 -- bash
And you should be able to resolve the fully-qualified domain names of the two Pods:
host my-hostname-1.my-subdomain.default.svc.cluster.local
host my-hostname-2.my-subdomain.default.svc.cluster.local
Since you're making the requests from the same namespace as the target Pods, you can abbreviate the domain name to:
host my-hostname-1.my-subdomain
host my-hostname-2.my-subdomain
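The short names work because of the DNS search domains in the Pod's /etc/resolv.conf. Inside a Pod in the default namespace it typically looks something like this (the nameserver IP and cluster domain vary by cluster, so treat these values as an illustration):

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

The search list causes my-hostname-1.my-subdomain to be expanded to my-hostname-1.my-subdomain.default.svc.cluster.local before the lookup is sent to the cluster DNS server.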