
Kubernetes Cluster-IP service not working as expected

OK, so currently I've got a Kubernetes master up and running on an AWS EC2 instance, and a single worker node running on my laptop:

$ kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
master          Ready     master    34d       v1.9.2
worker          Ready     <none>    20d       v1.9.2

I have created a Deployment using the following configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
  labels:
    app: hostnames-deployment
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 1
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP
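(Assuming the manifest above is saved as `hostnames-deployment.yaml` — a filename used here only for illustration — it can be applied with:)

```shell
# Create or update the Deployment from the manifest file
kubectl apply -f hostnames-deployment.yaml
```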

The deployment is running:

$ kubectl get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hostnames      1         1         1            1           1m

A single pod has been created on the worker node:

$ kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
hostnames-86b6bcdfbc-v8s8l     1/1       Running   0          2m

From the worker node, I can curl the pod and get the information:

$ curl 10.244.8.5:9376
hostnames-86b6bcdfbc-v8s8l

I have created a service using the following configuration:

kind: Service
apiVersion: v1
metadata:
  name: hostnames-service
spec:
  selector:
    app: hostnames
  ports:
  - port: 80
    targetPort: 9376
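(As an aside, a ClusterIP service also gets an in-cluster DNS name. Assuming the service lives in the `default` namespace, the cluster DNS addon is running, and the default `cluster.local` domain is used, pods can reach it as:)

```shell
# From inside a pod — the node itself typically cannot resolve this name
curl http://hostnames-service.default.svc.cluster.local:80
```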

The service is up and running:

$ kubectl get svc
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
hostnames-service   ClusterIP   10.97.21.18      <none>        80/TCP         1m
kubernetes          ClusterIP   10.96.0.1        <none>        443/TCP        34d

As I understand it, the service should expose the pod cluster-wide, and I should be able to use the service IP to get the information the pod is serving from any node in the cluster.

If I curl the service from the worker node it works just as expected:

$ curl 10.97.21.18:80
hostnames-86b6bcdfbc-v8s8l

But if I try to curl the service from the master node on the AWS EC2 instance, the request hangs and eventually times out:

$ curl -v 10.97.21.18:80
* Rebuilt URL to: 10.97.21.18:80/
*   Trying 10.97.21.18...
* connect to 10.97.21.18 port 80 failed: Connection timed out
* Failed to connect to 10.97.21.18 port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out

Why can't the request from the master node reach the pod on the worker node by using the Cluster-IP service?

I have read quite a few articles about Kubernetes networking, as well as the official Kubernetes Services documentation, but couldn't find a solution.

asked Mar 15 '18 by nikolal


2 Answers

Depending on which proxy mode kube-proxy uses, the details differ, but the concept is the same.

You are trying to connect to two different kinds of addresses: the pod IP address, which is reachable from the node, and the service's virtual IP address, which is reachable from pods inside the Kubernetes cluster.

The service IP is not an IP address assigned to any pod or network interface. It is a virtual address that is mapped to pod IP addresses according to the rules you define in the service, and it is managed by the kube-proxy daemon, which is part of Kubernetes.

This address exists specifically for communication inside the cluster: it lets you reach the pods behind a service without caring how many replicas there are or where they are actually running, because the service IP is static, unlike a pod's IP.

So the service IP address is meant to be reachable from other pods, not from the nodes.

You can read how Service virtual IPs work in the official Kubernetes documentation.
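One way to verify this, sketched here on the assumption that the `busybox` image can be pulled in the cluster, is to curl the service from a temporary pod instead of from the node:

```shell
# Run a throwaway pod and fetch from the service's cluster IP
# (the service DNS name hostnames-service would also work, if cluster DNS is set up)
kubectl run -it --rm debug --image=busybox --restart=Never -- \
  wget -qO- http://10.97.21.18:80
```

If this prints the pod's hostname while curl from the node does not, the cluster IP itself is working as designed.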

answered Sep 20 '22 by Anton Kostenko


kube-proxy is responsible for setting up the iptables rules (by default) that route cluster IPs. The Service's cluster IP should be routable from anywhere kube-proxy is running. My first guess would be that kube-proxy is not running on the master.
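A minimal way to check that guess, assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet in the `kube-system` namespace, might be:

```shell
# Is a kube-proxy pod scheduled on every node, including the master?
kubectl get pods -n kube-system -o wide | grep kube-proxy

# On the master itself: did kube-proxy install iptables rules for the service?
sudo iptables-save | grep 10.97.21.18
```

If no kube-proxy pod is running on the master, or no iptables rules there mention the service's cluster IP, connections to 10.97.21.18 from the master will hang exactly as shown in the question.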

answered Sep 24 '22 by dippynark