I have a Kubernetes Job that runs database migrations against a CloudSQL database. One way to access the CloudSQL database from GKE is to run the cloudsql-proxy container as a sidecar and connect via localhost. Great, that's working so far. But because I'm doing this inside a Kubernetes Job, the Job is never marked as successfully finished, because the proxy keeps on running:
$ kubectl get po
NAME                   READY   STATUS      RESTARTS   AGE
db-migrations-c1a547   1/2     Completed   0          1m
Even though the output says Completed, one of the two containers is still running: the proxy. How can I make the proxy exit once the migrations in the first container have completed?
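For reference, the relevant part of the Job's pod spec looks roughly like this (the image name, env var names and the PostgreSQL port 5432 are placeholders, not my exact setup):

      containers:
        - name: db-migrations
          image: my-migrations-image:latest   # placeholder migration image
          env:
            - name: DB_HOST
              value: "127.0.0.1"              # talk to the proxy sidecar via localhost
            - name: DB_PORT
              value: "5432"                   # assuming PostgreSQL; 3306 for MySQL
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy", "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432"]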
In simple terms: set the field shareProcessNamespace to true in the Pod spec, and all containers in the Pod then share a single process namespace and can see each other's processes. With that enabled, a pkill sleep run from the job container can kill the sidecar, or the sidecar's main process.
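A minimal sketch of that mechanism (the Job name, container names, images and the sleep stand-in are made up purely for illustration):

apiVersion: batch/v1
kind: Job
metadata:
  name: foojob
spec:
  template:
    spec:
      restartPolicy: Never
      shareProcessNamespace: true      # containers can see each other's processes
      containers:
        - name: foojob
          image: busybox
          command: ["/bin/sh", "-c"]
          args:
            - |
              echo "doing the real work..."
              sleep 5          # stand-in for the actual workload
              pkill sleep      # kill the sidecar's main process
        - name: sidecar
          image: busybox
          command: ["/bin/sh", "-c"]
          args:
            - |
              # '|| true' lets this container exit 0 after sleep is killed,
              # so the Job can still be marked Completed
              sleep 1000000 || true

The cloudsql example further down does the same thing, only with pgrep/kill -INT aimed at the proxy process.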
Sidecar containers are containers that run alongside the main container. The two share Pod resources such as storage and network interfaces; in particular, a sidecar can share storage volumes with the main container, allowing the main container to access the data in the sidecar. A Pod can contain more than one container precisely because such containers are usually relatively tightly coupled.
The best way I have found is to share the process namespace between containers and use the SYS_PTRACE securityContext capability to allow you to kill the sidecar.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-db-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      shareProcessNamespace: true
      containers:
        - name: my-db-job-migrations
          command: ["/bin/sh", "-c"]
          args:
            - |
              <your migration commands>;
              sql_proxy_pid=$(pgrep cloud_sql_proxy) && kill -INT $sql_proxy_pid;
          securityContext:
            capabilities:
              add:
                - SYS_PTRACE
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - "/cloud_sql_proxy"
          args:
            - "-instances=$(DB_CONNECTION_NAME)=tcp:5432"
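Note that the migration container signals the proxy with kill -INT so it can shut down gracefully; once both containers have exited successfully, the pod, and therefore the Job, is marked Completed. The shared process namespace is what makes the proxy's process visible to pgrep from the other container in the first place.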
One possible solution would be a separate cloudsql-proxy deployment with a matching service. You would then only need your migration container inside the Job, and it would connect to the database through the proxy service.
This comes with some downsides: if you want to open cloudsql-proxy to the whole cluster, you have to replace tcp:3306 with tcp:0.0.0.0:3306 in the -instances parameter of the cloudsql-proxy, so that the proxy listens on all interfaces instead of only on localhost.
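A rough sketch of such a setup, assuming PostgreSQL on port 5432 and a placeholder instance connection name (use 3306 for MySQL):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy"]
          args:
            # 0.0.0.0 so other pods can reach the proxy, not just localhost
            - "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:0.0.0.0:5432"
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector:
    app: cloudsql-proxy
  ports:
    - port: 5432
      targetPort: 5432

The migration Job then contains only the migration container and connects to cloudsql-proxy:5432 (the Service name) instead of 127.0.0.1, so nothing is left running in the Job's pod once the migrations finish.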