I have a single-node Kubernetes setup (see https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html ).
I have a service and a replication controller creating pods. Those pods need to connect to the other pods in the same service. (Note: this is ultimately so that I can get mongo running with replica sets (non-localhost), but this simple example demonstrates the problem that mongo has.)
When I connect to the service from any node, the connection is distributed (as expected) to one of the pods. This works until the service load-balances back to the pod I am connecting from (the container I am on); then the connection hangs.
Sorry to be verbose, but I am going to attach all my files so that you can see what I'm doing in this little example.
Dockerfile:
FROM ubuntu
MAINTAINER Eric H
RUN apt-get update && apt-get install -y netcat
EXPOSE 8080
COPY ./entry.sh /
ENTRYPOINT ["/entry.sh"]
Here is the entry point, entry.sh:
#!/bin/bash
# wait for a connection, then tell them who we are
while : ; do
echo "hello, the date=`date`; my host=`hostname`" | nc -l 8080
sleep .5
done
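As a quick sanity check outside Kubernetes, the loop can be exercised locally; a sketch, assuming the host's netcat accepts nc -l 8080 (the openbsd variant does). This also catches a common gotcha: COPY preserves file permissions, so the script must be executable before it is baked into the image.
chmod +x entry.sh    # ENTRYPOINT ["/entry.sh"] fails if the script isn't executable
./entry.sh &         # run the answer loop in the background
nc localhost 8080    # should print: hello, the date=...; my host=...
kill %1              # stop the background loop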
Build the image from the Dockerfile:
docker build -t echoserver .
Tag the image and push it to my k8s cluster's registry:
docker tag -f echoserver:latest 127.0.0.1:5000/echoserver:latest
docker push 127.0.0.1:5000/echoserver:latest
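Before deploying, the image itself can be smoke-tested; the container name and published port here are just for illustration:
docker run -d --name echo-test -p 8080:8080 echoserver   # publish the container's port locally
nc localhost 8080                                        # expect the hello line, hostname = container ID
docker rm -f echo-test                                   # clean up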
Here is my Replication Controller, echo.controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    role: echo-server
    app: echo
  name: echo-server-1
spec:
  replicas: 3
  template:
    metadata:
      labels:
        entity: echo-server-1
        role: echo-server
        app: echo
    spec:
      containers:
        - image: 127.0.0.1:5000/echoserver:latest
          name: echo-server-1
          ports:
            - containerPort: 8080
And finally, here is my Service, echo.service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo
    role: echo-server
    name: echo-server-1
  name: echo-server-1
spec:
  selector:
    entity: echo-server-1
    role: echo-server
  ports:
    - port: 8080
      targetPort: 8080
Create my service
kubectl create -f echo.service.yaml
Create my rc
kubectl create -f echo.controller.yaml
Get my PODs
kubectl get po
NAME READY STATUS RESTARTS AGE
echo-server-1-jp0aj 1/1 Running 0 39m
echo-server-1-shoz0 1/1 Running 0 39m
echo-server-1-y9bv2 1/1 Running 0 39m
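As a convenience (not required), the pods can be cross-checked against the controller and its template labels:
kubectl get rc echo-server-1              # should report 3 replicas
kubectl get po -l entity=echo-server-1    # the same label the service selects on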
Get the service IP
kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
echo-server-1 10.3.0.246 <none> 8080/TCP entity=echo-server-1,role=echo-server 39m
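If the selector matched, the service should also have three endpoints; an empty list here would mean the labels don't line up (exact output format varies by version):
kubectl get endpoints echo-server-1    # expect three <pod-ip>:8080 entries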
Exec into one of the pods
kubectl exec -t -i echo-server-1-jp0aj /bin/bash
Now connect to the service multiple times... It returns the message from one of the other pods each time, but whenever the load balancing lands back on the pod I am connecting from, it hangs:
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:02:38 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
^C
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:02:43 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
^C
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:19 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:23 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:26 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:27 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
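That last connection is a self-hit and hangs. The pattern is easier to see in a loop from inside the pod; a sketch, assuming the installed netcat supports -w as a timeout:
# Roughly one attempt in three lands back on this pod and times out
# instead of answering.
for i in 1 2 3 4 5 6; do nc -w 2 10.3.0.246 8080; done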
How can I configure things so that all members of a service can connect to all other members, including itself?
In Kubernetes, each Pod has an IP address. A Pod can communicate with another Pod by addressing its IP directly, but the recommended way is to use Services. A Service is a set of Pods reachable through a single, fixed DNS name or IP address.
Within a Pod, containers share an IP address and port space and can find each other via localhost. The containers in a Pod can also communicate with each other using standard inter-process communication mechanisms such as System V semaphores or POSIX shared memory.
In other words, if you need to run a single container in Kubernetes, you create a Pod for that container. At the same time, a Pod can contain more than one container, usually because those containers are relatively tightly coupled.
Run a pod, and then connect to a shell in it using kubectl exec. Connect to other nodes, pods, and services from that shell. Some clusters may allow you to ssh to a node in the cluster. From there you may be able to access cluster services.
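For example, from a shell opened with kubectl exec, the service is reachable by its cluster IP and, if the cluster's DNS add-on is running, by name as well (getent is used here since the base image may not ship nslookup):
kubectl exec -t -i echo-server-1-jp0aj /bin/bash
getent hosts echo-server-1    # should resolve the service name, e.g. to 10.3.0.246
nc echo-server-1 8080         # the service answers by name too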
Thanks to all those who helped on GitHub.
The workaround turned out to be as follows:
tanen01 commented on Feb 4:
Seeing the same problem here on k8s v1.1.7 stable.
Issue occurs with:
kube-proxy --proxy-mode=iptables
Once I changed it to:
--proxy-mode=userspace
(also the default), then it works again.
So, if you are experiencing this, please try turning off --proxy-mode when you start kube-proxy.
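Where that flag lives depends on how kube-proxy is launched; on the CoreOS single-node setup it is typically a kube-proxy manifest or unit on the node. A sketch of the relevant invocation (the --master value is illustrative):
# Relaunch kube-proxy in userspace mode instead of iptables mode.
kube-proxy --master=http://127.0.0.1:8080 --proxy-mode=userspace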
I have seen this reported by at least one other user. I filed an issue: https://github.com/kubernetes/kubernetes/issues/20475
I assume you used the version of Kubernetes from that link -- 1.1.2.