 

Kubernetes: maximum pod lifetime

I use Kubernetes 1.6 and Docker to deploy instances/pods of a microservice.

I have a service that needs to regularly pull continuously updated data from an external repository. This update can be triggered manually at runtime, but the service is unusable while it runs. Furthermore, the up-to-date data is always retrieved on startup, so a freshly started service instance has the most recent external data.

Therefore, I would like to automatically create a fresh pod every hour (or some other frequency), and then kill the old pod.

Conceptually, it seems like I should just configure a maximum lifetime per pod in the deployment, so that Kubernetes starts a new instance/pod and kills the old one once the maximum lifetime has expired, while making sure that there is always at least one pod running. However, Kubernetes does not seem to provide a maximum pod lifetime.

Also, because of the data update during startup, a pod takes 1-2 minutes before it becomes ready.

asked Aug 22 '17 by Carsten


2 Answers

This was meant to be a comment, but I am posting it as an answer so that the approach is easier to read.

Here is a possible approach that might work for you. You run a global downloader pod which downloads the files into a specific folder. Let's assume a download happens every hour. So you create a folder like 22-08-2017-20-00 and a file called latest. The content of this latest file will be 22-08-2017-20-00.

When the downloader fetches a new update, it creates a new timestamped folder and downloads the data into it. Once the data is downloaded, it changes the content of the latest file to that new folder name.

Your main app pods then refer to this host volume, read the content of the latest file, and use that folder to start their data processing.
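A minimal sketch of such a downloader loop, assuming the shared volume is mounted at /data; fetch-external-data is a hypothetical placeholder for your real download command:

#!/bin/sh
DATA_ROOT=/data
while true; do
  # download into a fresh timestamped folder
  STAMP=$(date +%d-%m-%Y-%H-%M)
  mkdir -p "$DATA_ROOT/$STAMP"
  fetch-external-data "$DATA_ROOT/$STAMP"   # placeholder command
  # flip the "latest" pointer to the new folder
  echo "$STAMP" > "$DATA_ROOT/latest.tmp"
  mv "$DATA_ROOT/latest.tmp" "$DATA_ROOT/latest"
  sleep 3600
done

Writing latest via a temporary file and mv keeps the pointer flip atomic, so app pods never read a half-written value.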

Now you should run a few replicas. If you set up a cron job that restarts the pods, they will boot fast (no data download) and pick up the latest data. Alternatively, you can trigger a rolling update by changing a dummy parameter with no functional impact, as sketched below.
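A minimal sketch of that trick, where the deployment name my-service and the annotation key force-restart are illustrative assumptions; patching the pod template's annotations is enough to make the Deployment roll out new pods with the usual rolling-update guarantees:

kubectl patch deployment my-service \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"force-restart\":\"$(date +%s)\"}}}}}"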

Or you can set your pods to fail after 1 hour. How to do that? Make sure your image has the timeout command:

$ time timeout 2 sleep 12

real    0m2.002s
user    0m0.000s
sys     0m0.000s

Now you don't want all pods to fail at the same time, so you can generate a random number between 50 and 70 minutes and let each pod fail at a different time and be restarted automatically by Kubernetes. A sketch of such an entrypoint follows.
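A minimal sketch of that wrapper, assuming bash (for $RANDOM) and a hypothetical /app/run-service binary:

#!/bin/bash
# pick a lifetime between 3000 and 4199 seconds, i.e. roughly 50-70 minutes
LIFETIME=$(( (RANDOM % 1200) + 3000 ))
# timeout kills the service after $LIFETIME seconds and exits non-zero;
# with restartPolicy: Always (the default) Kubernetes restarts the container
exec timeout "$LIFETIME" /app/run-service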

See if this approach makes sense for you.

answered Sep 25 '22 by Tarun Lalwani


Here is an example that could help you: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: gcr.io/google_containers/busybox
    args:
    - /bin/sh
    - -c
    # healthy for the first 30 s, then the probe file is removed and
    # the liveness probe starts failing, so the container is restarted
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

Using health checks, you can force the container to be restarted after some time. I think that could suit your case.
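As a hedged sketch adapted to this question (the marker file /tmp/started, the one-hour limit, and the delay values are assumptions), the container entrypoint could touch a file at boot, and the probe could start failing once the container has been up for more than an hour:

livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    # fail once more than 3600 s have passed since the entrypoint
    # created /tmp/started at boot
    - test $(( $(date +%s) - $(stat -c %Y /tmp/started) )) -lt 3600
  initialDelaySeconds: 120   # leave room for the 1-2 minute startup download
  periodSeconds: 60

Once the probe has failed a few times (three by default), the kubelet restarts the container, which re-downloads the data on startup.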

answered Sep 24 '22 by Javier Salmeron