
Kubernetes: stop CloudSQL-proxy sidecar container in multi container Pod/Job

I have a Kubernetes Job that runs database migrations against a CloudSQL database.
One way to access a CloudSQL database from GKE is to run the CloudSQL-proxy container as a sidecar and then connect via localhost. Great - that's working so far. But because I'm doing this inside a Kubernetes Job, the Job is never marked as successfully finished, because the proxy keeps on running.

    $ kubectl get po
    NAME                      READY     STATUS      RESTARTS   AGE
    db-migrations-c1a547      1/2       Completed   0          1m

Even though the output says 'Completed', one of the initially two containers is still running: the proxy.
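For reference, the Job spec behind such output would look roughly like this; the image name, migration command, and instance string below are illustrative placeholders, not from the original question:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: db-migrations
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          # container 1: runs the migrations and exits
          - name: migrations
            image: my-registry/db-migrations:latest  # hypothetical image
            command: ["/bin/sh", "-c", "echo run migrations here"]  # placeholder
          # container 2: the proxy sidecar, which never exits on its own
          - name: cloudsql-proxy
            image: gcr.io/cloudsql-docker/gce-proxy:1.17
            command: ["/cloud_sql_proxy"]
            args: ["-instances=my-project:my-region:my-instance=tcp:5432"]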

How can I make the proxy exit once the migrations in container 1 have completed?

Philipp Kyeck asked Jan 16 '17

People also ask

How do you stop a sidecar container?

In simple terms, set the field shareProcessNamespace to true in pod.spec; all containers then share the process namespace and can see each other's processes. With this enabled, a pkill sleep run from the foojob container can kill the sidecar's main process.
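A minimal sketch of that pattern, using illustrative names (foojob, with a busybox sleep standing in for the sidecar); both containers run as root here, which is what allows one to signal the other's processes:

    apiVersion: v1
    kind: Pod
    metadata:
      name: foojob
    spec:
      shareProcessNamespace: true  # containers see each other's processes
      restartPolicy: Never
      containers:
      - name: foojob
        image: busybox
        # placeholder for the real work, then kill the sidecar's main process;
        # the short sleep ensures the sidecar is up before pkill runs
        command: ["/bin/sh", "-c", "echo doing work; sleep 2; pkill sleep"]
      - name: sidecar
        image: busybox
        command: ["sleep", "3600"]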

How do Kubernetes sidecars work?

Sidecar containers are containers that run alongside the main container in the same Pod. The two share Pod resources such as storage and network interfaces; in particular, a sidecar can share a storage volume with the main container, allowing the main container to access data the sidecar produces.

Can a pod have multiple containers?

At the same time, a Pod can contain more than one container, usually because these containers are relatively tightly coupled.
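A hedged sketch tying the two answers above together: a two-container Pod where the main container writes a log file to a shared emptyDir volume and a sidecar tails it (all names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar
    spec:
      volumes:
      - name: shared-logs
        emptyDir: {}  # Pod-local scratch volume shared by both containers
      containers:
      # main container: appends log lines
      - name: app
        image: busybox
        command: ["/bin/sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
      # sidecar: reads the same file through the shared volume
      - name: log-tailer
        image: busybox
        command: ["/bin/sh", "-c", "touch /var/log/app/app.log; tail -f /var/log/app/app.log"]
        volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app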


2 Answers

The best way I have found is to share the process namespace between containers and use the SYS_PTRACE securityContext capability to allow you to kill the sidecar.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: my-db-job
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          shareProcessNamespace: true
          containers:
          - name: my-db-job-migrations
            command: ["/bin/sh", "-c"]
            args:
              - |
                <your migration commands>;
                sql_proxy_pid=$(pgrep cloud_sql_proxy) && kill -INT $sql_proxy_pid;
            securityContext:
              capabilities:
                add:
                  - SYS_PTRACE
          - name: cloudsql-proxy
            image: gcr.io/cloudsql-docker/gce-proxy:1.17
            command:
              - "/cloud_sql_proxy"
            args:
              - "-instances=$(DB_CONNECTION_NAME)=tcp:5432"
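Assuming the manifest is saved as my-db-job.yaml, applying it and waiting for the Job might look like this (the kubectl wait step is a convenience, not part of the original answer). Note that this relies on pgrep being available in the migrations image, and that shareProcessNamespace needs a reasonably recent cluster; the field went GA in Kubernetes 1.17:

    $ kubectl apply -f my-db-job.yaml
    job.batch/my-db-job created
    $ kubectl wait --for=condition=complete job/my-db-job --timeout=300s
    job.batch/my-db-job condition met

Once both containers have exited, the Pod shows 0/2 Completed and the Job is marked successful.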
GrokSrc answered Sep 22 '22


One possible solution is a separate cloudsql-proxy Deployment with a matching Service. Your Job would then only need the migration container, which connects to the proxy Service instead of localhost.

This comes with some downsides:

  • higher network latency, since there is no Pod-local MySQL communication
  • a possible security issue if you expose the SQL port to your whole Kubernetes cluster

If you want to open the cloudsql-proxy to the whole cluster, you have to replace tcp:3306 with tcp:0.0.0.0:3306 in the -instances parameter of the cloudsql-proxy, so that it listens on all interfaces rather than only on localhost.
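To make this concrete, here is a sketch of what such a Deployment and Service could look like for MySQL on port 3306; the project/region/instance string and all names are illustrative placeholders, not from the original answer:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cloudsql-proxy
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cloudsql-proxy
      template:
        metadata:
          labels:
            app: cloudsql-proxy
        spec:
          containers:
          - name: cloudsql-proxy
            image: gcr.io/cloudsql-docker/gce-proxy:1.17
            command: ["/cloud_sql_proxy"]
            # bind on 0.0.0.0 so other Pods can reach the proxy
            args: ["-instances=my-project:my-region:my-instance=tcp:0.0.0.0:3306"]
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: cloudsql-proxy
    spec:
      selector:
        app: cloudsql-proxy
      ports:
      - port: 3306
        targetPort: 3306

The migration Job would then connect to cloudsql-proxy:3306 (the Service's DNS name) instead of 127.0.0.1.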

Christian Köhler answered Sep 20 '22