
Access HTTP service running in GKE from Google Dataflow

I have an HTTP service running on a Google Container Engine cluster (behind a kubernetes service).

My goal is to access that service from a Dataflow job running in the same GCP project using a fixed name (the same way services can be reached from inside GKE via DNS). Any ideas?


  • Most solutions I have read on Stack Overflow rely on having kube-proxy installed on the machines trying to reach the service. As far as I know, it is not possible to reliably set that up on every worker instance created by Dataflow.
  • One option is to create an external load balancer and an A record in the public DNS. Although that works, I would rather not have a public DNS record pointing to this service.
Asked Mar 05 '16 by Thomas

2 Answers

EDIT: this is now supported on GKE (now known as Kubernetes Engine): https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing
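With that native support, the whole setup below collapses to a single annotation on the Service. A minimal sketch of what that looks like (the names `name` and `app` are placeholders; the annotation is the one documented at the link above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: name
  annotations:
    # Ask GKE to provision an internal (rather than external) load balancer.
    cloud.google.com/load-balancer-type: "Internal"
spec:
  selector:
    app: app
  ports:
  - name: api
    protocol: TCP
    port: 8080
    targetPort: 8080
  type: LoadBalancer
```

GKE then assigns the Service a stable IP from the network's internal range, reachable from other VMs (including Dataflow workers) in the same network and region.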

I have implemented this in a pretty smooth way, IMHO. I will briefly walk through how it works:

  • Remember that when you create a container cluster (or node pool), it consists of a set of GCE instances in an instance group on the default network. NB: add specific GCE network tags to those instances, so that you can later target only them with the firewall rule that lets the load balancer check instance health.
  • This instance group is just a regular instance group.
  • Now, remember that Kubernetes has something called NodePort, which exposes the service on that port on all nodes, i.e. on all GCE instances in your cluster. This is what we want!
  • Since we have a set of GCE instances in an instance group, we can add that instance group to an internal load balancer in the default network, without the load balancer needing to know anything about Kubernetes internals or DNS.
  • The guide to follow (you can skip many of the initial steps) is here: https://cloud.google.com/compute/docs/load-balancing/internal/
  • Remember that internal load balancing is regional, so Dataflow and everything else must be in the same region.

See this spec for the service:

kind: Service
apiVersion: v1
metadata:
  name: name
  labels:
    app: app
spec:
  selector:
    name: name
    app: app
    tier: backend
  ports:
  - name: health
    protocol: TCP
    port: 8081
    nodePort: 30081
  - name: api
    protocol: TCP
    port: 8080
    nodePort: 30080
  type: NodePort

This is the code for setting up the load balancer, with the health check, forwarding rule and firewall rule it needs to work:

_region=<THE_REGION>
_instance_group=<THE_NODE_POOL_INSTANCE_GROUP_NAME>
# Can be different for your case
_healthcheck_path=/liveness
_healthcheck_port=30081
_healthcheck_name=<THE_HEALTHCHECK_NAME>
_port=30080
_tags=<TAGS>
_loadbalancer_name=internal-loadbalancer-$_region
_loadbalancer_ip=10.240.0.200

gcloud compute health-checks create http $_healthcheck_name \
  --port $_healthcheck_port \
  --request-path $_healthcheck_path

gcloud compute backend-services create $_loadbalancer_name \
  --load-balancing-scheme internal \
  --region $_region \
  --health-checks $_healthcheck_name

gcloud compute backend-services add-backend $_loadbalancer_name \
  --instance-group $_instance_group \
  --instance-group-zone $_region-a \
  --region $_region

gcloud compute forwarding-rules create $_loadbalancer_name-forwarding-rule \
  --load-balancing-scheme internal \
  --ports $_port \
  --region $_region \
  --backend-service $_loadbalancer_name \
  --address $_loadbalancer_ip

# Allow Google Cloud's health checkers to reach your instances
gcloud compute firewall-rules create allow-$_healthcheck_name \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --target-tags $_tags \
  --allow tcp
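Once the forwarding rule is up, you can sanity-check it from any VM in the same network and region; Dataflow workers reach the service the same way, at this fixed internal IP. A minimal check (the IP and port match the variables above; the request path is whatever your service serves):

```shell
# 30080 is the NodePort forwarded by the rule above; 10.240.0.200 is the
# fixed internal address assigned to the forwarding rule.
curl http://10.240.0.200:30080/
```

If you want a fixed name rather than a fixed IP, you can point a record in a private DNS zone at this address without touching your public DNS.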
Answered Sep 24 '22 by martin.code

Lukasz's answer is probably the most straightforward way to expose your service to Dataflow. But if you really don't want a public IP and DNS record, you can use a GCE route to deliver traffic to your cluster's private IP range (something like option 1 in this answer).

This would let you hit your service's stable IP. I'm not sure how to get Kubernetes' internal DNS to resolve from Dataflow.
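One way the route approach could look, sketched with hypothetical names (`my-cluster`, `gke-my-cluster-node-1`, the CIDR and the ClusterIP are all placeholders): route the cluster's service IP range to one of the GKE nodes, and let kube-proxy on that node forward traffic to the service backends. Note that a single next-hop node is a single point of failure.

```shell
# Find the cluster's service IP range (field exists on describe output):
gcloud container clusters describe my-cluster \
  --zone us-central1-a --format='value(servicesIpv4Cidr)'

# Route that range to a cluster node; kube-proxy there does the rest:
gcloud compute routes create gke-services-route \
  --network default \
  --destination-range 10.3.240.0/20 \
  --next-hop-instance gke-my-cluster-node-1 \
  --next-hop-instance-zone us-central1-a

# Any VM in the network (including Dataflow workers) can now hit the
# service's stable ClusterIP, e.g.:
# curl http://10.3.240.10:8080/
```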

Answered Sep 24 '22 by CJ Cullen