I have two containers inside one pod. One is my application container and the second is a CloudSQL proxy container. Basically my application container is dependent on this CloudSQL container.
The problem is that when the pod is terminated, the CloudSQL proxy container is terminated first, and my application container is only terminated a few seconds later.
So, until my application container is terminated, it keeps sending requests to the CloudSQL proxy container and gets errors like:
could not connect to server: Connection refused Is the server running on host "127.0.0.1" and accepting TCP/IP connections on port 5432
That's why I thought it would be a good idea to specify the order of termination, so that my application container is terminated first and only then the CloudSQL proxy one.
I was unable to find anything in the documentation that could do this, but maybe there is some way.
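For reference, a simplified sketch of the relevant part of my pod spec (the names, image tag and instance connection name are placeholders, not my real configuration):

containers:
- name: my-app
  image: my-app-image
  env:
  - name: DB_HOST
    value: "127.0.0.1:5432"
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy
  command: ["/cloud_sql_proxy", "-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432"]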
Kubernetes marks the Pod state as "Terminating" and stops sending traffic to the Pod. Kubernetes sends a TERM signal to the Pod, indicating that the Pod should shut down. When the grace period expires, Kubernetes issues a SIGKILL to any processes still running in the Pod.
You can see the logs of a particular container in the Pod by running kubectl logs <pod name> -c <container name>.
Note: When a Pod is being deleted, it is shown as Terminating by some kubectl commands. This Terminating status is not one of the Pod phases. A Pod is granted a grace period to terminate gracefully, which defaults to 30 seconds. You can use the --force flag with kubectl delete to terminate a Pod forcibly.
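For example (the pod and container names below are placeholders):

# Watch the pod while it is being deleted; the STATUS column shows "Terminating".
kubectl get pods -w

# Show the logs of one container in a multi-container pod.
kubectl logs <pod-name> -c <container-name>

# Skip the grace period and delete the pod immediately (use with care).
kubectl delete pod <pod-name> --grace-period=0 --force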
This is not directly possible with the Kubernetes pod API at present. Containers may be terminated in any order. The Cloud SQL proxy container may die more quickly than your application, for example if it has less cleanup to perform or fewer in-flight requests to drain.
From Termination of Pods:
When a user requests deletion of a pod, the system records the intended grace period before the pod is allowed to be forcefully killed, and a TERM signal is sent to the main process in each container.
You can get around this to an extent by wrapping the Cloud SQL proxy and main containers' entrypoints in shell scripts that communicate their exit status to each other through a shared pod-level file system.
This solution will not work with the 1.16 release of the Cloud SQL proxy (see comments), as that release no longer bundles a shell in the container. The 1.17 release is available in Alpine and Debian Buster variants, so it is once again compatible with this approach and is a viable upgrade target.
A wrapper like the following may help with this:
containers:
# Your application container.
- command: ["/bin/bash", "-c"]
  args:
  - |
    # Leave a marker for the sidecar when the main process exits,
    # whatever the reason for the exit.
    trap "touch /lifecycle/main-terminated" EXIT
    <your entry point goes here>
  volumeMounts:
  - name: lifecycle
    mountPath: /lifecycle
# The Cloud SQL proxy sidecar.
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy
  command: ["/bin/bash", "-c"]
  args:
  - |
    /cloud_sql_proxy <your flags> &
    PID=$!

    # Poll for the marker left by the main container, and only then
    # stop the proxy.
    function stop {
      while true; do
        if [[ -f "/lifecycle/main-terminated" ]]; then
          kill $PID
          break
        fi
        sleep 1
      done
    }
    trap stop EXIT

    # We explicitly call stop to ensure the sidecar will terminate
    # if the main container exits outside a request from Kubernetes
    # to kill the Pod.
    stop &
    wait $PID
  volumeMounts:
  - name: lifecycle
    mountPath: /lifecycle
You'll also need a local scratch space to use for communicating lifecycle events:
volumes:
- name: lifecycle
  emptyDir: {}
How does this solution work? In the Cloud SQL proxy container, it intercepts the SIGTERM signal passed by the Kubernetes supervisor to each of your pod's containers on shutdown. The "main process" running in that container is a shell, which has spawned a child process running the Cloud SQL proxy; thus the Cloud SQL proxy is not immediately terminated. Instead, the shell blocks waiting for a signal from the main container (by the simple means of a file appearing in the shared file system) that it has exited. Only at that point is the Cloud SQL proxy process terminated and the sidecar container allowed to return.
Of course, this has no effect on forced termination in the event your containers take too long to shut down and exceed the configured grace period.
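If your containers legitimately need longer to shut down, you can extend the grace period in the pod spec; the value below is only an illustration:

spec:
  # Allow up to 60 seconds (instead of the default 30) between
  # SIGTERM being sent and Kubernetes issuing SIGKILL.
  terminationGracePeriodSeconds: 60
  containers:
  # ... the application and cloudsql-proxy containers shown above ...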
The solution depends on the containers you are running having a shell available to them. This is true of the Cloud SQL proxy (except for release 1.16; from 1.17 onwards, pick the alpine or debian image variants), but you may need to make changes to your local container builds to ensure this is true of your own application containers.