How to call a service exposed by a Kubernetes cluster from another Kubernetes cluster in the same project

I have two services, S1 in cluster K1 and S2 in cluster K2. They have different hardware requirements. Service S1 needs to talk to S2.

I don't want to expose a public IP for S2, for security reasons. Using NodePorts on K2's compute instances with network load balancing takes the flexibility away, since I would have to add/remove K2's compute instances in the target pool each time a node is added or removed in K2.

Is there something like a "service selector" for automatically updating the target pool? If not, is there a better approach for this use case?

asked Jul 27 '15 by Sunil Kumar


2 Answers

I can think of a couple of ways to access services across multiple clusters connected to the same GCP private network:

  1. Bastion route into k2 for all of k2's services:

    Find the SERVICE_CLUSTER_IP_RANGE for the k2 cluster. On GKE, it will be the servicesIpv4Cidr field in the output of cluster describe:

$ gcloud beta container clusters describe k2
    ...
    servicesIpv4Cidr: 10.143.240.0/20
    ...

Add an advanced routing rule to take traffic destined for that range and route it to a node in k2 (the route needs a name; k2-services below is arbitrary):

$ gcloud compute routes create k2-services --destination-range 10.143.240.0/20 --next-hop-instance k2-node-0

    This will cause k2-node-0 to proxy requests from the private network for any of k2's services. This has the obvious downside of giving k2-node-0 extra work, but it is simple.
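
    Once the route is in place, anything on the private network (including pods in k1) can reach k2's services at their ClusterIPs. A quick check (the path and the IP 10.143.241.5 are made-up examples, not from the original answer):

    # look up S2's ClusterIP against the k2 master
    $ kubectl --kubeconfig=/path/to/k2-kubeconfig get svc s2
    # then, from a node or pod in k1:
    $ curl http://10.143.241.5:80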

  2. Install k2's kube-proxy on all nodes in k1.

    Take a look at the currently running kube-proxy on any node in k2:

$ ps aux | grep kube-proxy
    ...
    /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig --v=2

Copy k2's kubeconfig file to each node in k1 (say, /var/lib/kube-proxy/kubeconfig-k2) and start a second kube-proxy on each node:

    $ /usr/local/bin/kube-proxy --master=https://k2-master-ip --kubeconfig=/var/lib/kube-proxy/kubeconfig-k2 --healthz-port=10247 

Now each node in k1 handles proxying to k2 locally. It's a little tougher to set up, but it has better scaling properties.
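
    A quick sanity check on a k1 node, assuming the second proxy was started with the flags above (the service IP is a made-up example):

    # the second proxy's health endpoint should answer with HTTP 200
    $ curl http://localhost:10247/healthz
    # k2 ClusterIPs should now be reachable from this node and its pods
    $ curl http://10.143.241.5:80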

As you can see, neither solution is all that elegant. Discussions are happening about how this type of setup should ideally work in Kubernetes. You can take a look at the Cluster Federation proposal doc (specifically the Cross Cluster Service Discovery section), and join the discussion by opening up issues/sending PRs.

answered Sep 24 '22 by CJ Cullen


GKE now supports Internal Load Balancers: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing

Its primary use case is to have a load balancer that's not exposed to the public internet, so that a service running on GKE can be reached from other GCE VMs or other GKE clusters in the same network.
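
A minimal sketch of exposing S2 this way (the service name, selector label, and ports here are assumptions, not from the original answer; the annotation is the one documented at the link above):

    # s2-internal.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: s2-internal
      annotations:
        cloud.google.com/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      selector:
        app: s2            # assumes S2's pods carry this label
      ports:
      - port: 80           # port other clusters connect to
        targetPort: 8080   # assumed container port

    $ kubectl apply -f s2-internal.yaml
    # the private IP (reachable from k1 on the same network) shows under EXTERNAL-IP
    $ kubectl get service s2-internal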

answered Sep 21 '22 by ahmet alp balkan