I have a multi-container application: app + sidecar. Both containers are supposed to be alive all the time, but the sidecar is not really that important. The sidecar depends on an external resource; if this resource is not available, the sidecar crashes, and it takes the entire pod down. Kubernetes then tries to recreate the pod and fails, because the sidecar now won't start. But from my business-logic perspective, a crash of the sidecar is absolutely normal: having the sidecar is nice but not mandatory. I don't want the sidecar to take the main app with it when it crashes. What would be the best Kubernetes-native way to achieve that? Is it possible to tell Kubernetes to ignore the failure of the sidecar as a "false positive" event, which is absolutely fine?
I can't find anything in the pod specification that controls that behaviour.
My yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
        - name: logs-dir
          emptyDir: {}
      containers:
        - name: myapp
          image: ${IMAGE}
          ports:
            - containerPort: 9009
          volumeMounts:
            - name: logs-dir
              mountPath: /usr/src/app/logs
          resources:
            limits:
              cpu: "1"
              memory: "512Mi"
          readinessProbe:
            initialDelaySeconds: 60
            failureThreshold: 8
            timeoutSeconds: 1
            periodSeconds: 8
            httpGet:
              scheme: HTTP
              path: /myapp/v1/admin-service/git-info
              port: 9009
        - name: graylog-sidecar
          image: digiapulssi/graylog-sidecar:latest
          volumeMounts:
            - name: logs-dir
              mountPath: /log
          env:
            - name: GS_TAGS
              value: "[\"myapp\"]"
            - name: GS_NODE_ID
              value: "nodeid"
            - name: GS_SERVER_URL
              value: "${GRAYLOG_URL}"
            - name: GS_LIST_LOG_FILES
              value: "[\"/ctwf\"]"
            - name: GS_UPDATE_INTERVAL
              value: "10"
          resources:
            limits:
              memory: "128Mi"
              cpu: "0.1"
Warning: the answer that was flagged as "correct" does not appear to work.
Adding a liveness probe to the application container and setting the restart policy to "Never" will lead to the pod being stopped and never restarted in a scenario where the sidecar container has stopped and the application container has failed its liveness probe. This is a problem, since you DO want the restarts for the application container.
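For illustration, a minimal sketch of that broken combination (a bare pod with placeholder names, since a Deployment's pod template only allows restartPolicy: Always anyway):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-broken
spec:
  restartPolicy: Never        # nothing in this pod is ever restarted
  containers:
    - name: myapp
      image: ${IMAGE}
      livenessProbe:          # a failed probe kills the container...
        httpGet:
          path: /myapp/v1/admin-service/git-info
          port: 9009
      # ...and with restartPolicy: Never it stays down for good
    - name: graylog-sidecar
      image: digiapulssi/graylog-sidecar:latest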
The problem should be solved as follows:
Keep the default restart policy (Always) and give the application container a liveness probe, so the kubelet restarts it whenever it fails. Then make sure the sidecar container can never be reported as failed, for example by appending
| tail -f /dev/null
to the startup command: tail -f /dev/null never exits, so the container keeps running even after the sidecar process has crashed.
From the Kubernetes documentation on Container Probes:
livenessProbe: Indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
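A minimal sketch of what that can look like for the containers above (the sidecar's original entrypoint is assumed here to be graylog-sidecar; verify it against the actual image before using this):

containers:
  - name: myapp
    image: ${IMAGE}
    livenessProbe:              # the app container is still restarted on failure,
      httpGet:                  # because the pod keeps the default restartPolicy: Always
        scheme: HTTP
        path: /myapp/v1/admin-service/git-info
        port: 9009
      initialDelaySeconds: 60
      periodSeconds: 8
  - name: graylog-sidecar
    image: digiapulssi/graylog-sidecar:latest
    # Wrap the entrypoint so the container never exits: tail -f /dev/null
    # keeps running after the sidecar process dies, so the kubelet never
    # treats this container as failed and the pod stays up.
    command: ["sh", "-c", "graylog-sidecar | tail -f /dev/null"]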