I'm getting this error message after kubectl apply -f .
error: error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{"include (print $.Template.BasePath \"/configmap.yaml\") . | sha256sum":interface {}(nil)}
I've tried putting checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
in different places, but I don't understand YAML or JSON well enough to figure out the issue.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: cloudnatived/demo:hello-config-env
          ports:
            - containerPort: 8888
          env:
            - name: GREETING
              valueFrom:
                configMapKeyRef:
                  name: demo-config
                  key: greeting
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
I just want to be able to update my pods when the config is changed. I think I'm supposed to run helm upgrade somewhere here, but I'm not sure what arguments to give it.
You can't use the {{ ... }} syntax with kubectl apply. That syntax belongs to the Helm package manager. Without Helm rendering the template first, {{ ... }} looks like YAML flow-style map syntax, and the parser gets confused.
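For illustration, once Helm renders the chart the template expression is replaced by a plain string, which is perfectly valid YAML (the value below is a placeholder, not a real digest):

```yaml
annotations:
  checksum/config: <sha256 hash of the rendered configmap.yaml>
```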
annotations: generally belongs under metadata:, next to labels:. The Annotations page in the Kubernetes documentation might be useful reading.
I just want to be able to update my pods without restarting them.
Kubernetes doesn't work that way, with some very limited exceptions.
If you're only talking about configuration data and not code, you can Add ConfigMap data to a Volume; then if the ConfigMap changes, the files the pod sees will also change. The syntax you're stumbling over is actually a workaround to force a pod to restart when the ConfigMap data changes: it does the opposite of what you're asking for, and you should delete those two lines.
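A minimal sketch of mounting the ConfigMap as a volume instead (the ConfigMap and container names match the Deployment above; the mount path is an assumption, use whatever path your app reads from):

```yaml
spec:
  template:
    spec:
      containers:
        - name: demo
          image: cloudnatived/demo:hello-config-env
          volumeMounts:
            - name: config
              mountPath: /etc/demo   # assumption: path your app reads config from
      volumes:
        - name: config
          configMap:
            name: demo-config
```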
For routine code changes, the standard path is to build and push a new Docker image, then update your Deployment object with the new image tag. (It must be a different image tag string than you had before; just pushing a new image with the same tag isn't enough.) Then Kubernetes will automatically start new pods with the new image and, once those start up, shut down pods with the old image. Under some circumstances Kubernetes can even delete and recreate pods on its own.
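For example (the deployment name, chart path, and tag are illustrative assumptions; either command updates the image and triggers a rollout):

```shell
# Update the image directly with kubectl
kubectl set image deployment/demo demo=cloudnatived/demo:hello-config-env-v2

# Or, if the chart exposes the tag as a value, via Helm
helm upgrade demo ./demo-chart --set image.tag=hello-config-env-v2
```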
The simplest way to resolve these kinds of issues is to use a tool. These are mostly indentation issues, which the right tool catches very easily. yaml-lint is one such tool:
npm install -g yaml-lint
D:\vsc-workspaces\grafana-1> yamllint grafana.yaml
× YAML Lint failed for D:\vsc-workspaces\grafana-1/grafana.yaml
× bad indentation of a mapping entry at line 137, column 11:
restartPolicy: Always
^
After fixing the reported indentation:
D:\vsc-workspaces\grafana-1> yamllint grafana.yaml
√ YAML Lint successful.