I have tried to run Helm for the first time. I have deployment.yaml, service.yaml and ingress.yaml files, along with values.yaml and Chart.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  namespace: xyz
  labels:
    app: abc
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: abc
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: abc
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  namespace: xyz
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
  selector:
    app: abc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "haproxy-ingress"
  namespace: xyz
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: alb
From what I can see, I do not think I have missed the app.kubernetes.io/managed-by label, but I still keep getting this error:
rendered manifests contain a resource that already exists. Unable to continue with install: Service "abc" in namespace "xyz" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "abc"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
The files render correctly locally.
helm list --all --all-namespaces returns nothing.
Please help.
You already have some resources, e.g. the Service abc in the namespace xyz, that you're now trying to install via a Helm chart. Delete those first and then install them via helm install:
$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>
Once you have these resources deployed via Helm, you will be able to run helm upgrade to change their properties.
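A sketch of the commands, assuming a release name and chart path of your choosing (all three values below are placeholders):
$ helm install <release-name> <chart-path> -n <namespace>
$ helm upgrade <release-name> <chart-path> -n <namespace>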
Remove the "app.kubernetes.io/managed-by" label from your YAML files; it will be added by Helm.
The error below is quite common:
label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to ..
So I'll provide a slightly longer explanation and some context on the topic.
It seems that you tried to create resources that already exist and were created outside of Helm (probably with kubectl).
Helm doesn't allow a resource to be owned by more than one release.
It is the responsibility of the chart creator to ensure that the chart produces unique resources only.
Option 1 - Follow the error message and add the meta.helm.sh annotations:
As described in this PR: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm.
This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.
(*) I think that the meta.helm.sh approach is a less common one today.
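For example, a sketch of adopting the existing Service, assuming the release will be installed as abc into namespace xyz (adjust the values to your actual release name and namespace, and repeat for the Deployment and Ingress):
$ kubectl label service abc -n xyz app.kubernetes.io/managed-by=Helm
$ kubectl annotate service abc -n xyz meta.helm.sh/release-name=abc
$ kubectl annotate service abc -n xyz meta.helm.sh/release-namespace=xyz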
Option 2 - Add the app.kubernetes.io/instance label:
As can be seen in charts from different providers (Bitnami, the Nginx ingress controller, External-Dns, for example), the combination of the two labels is used:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
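In a chart template, that combination could look roughly like this (a sketch only; the resource name and namespace mirror the question):
metadata:
  name: abc
  namespace: xyz
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}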
(*) Notice: Some CD tools, like ArgoCD, automatically set the app.kubernetes.io/instance label and use it to determine which resources form the app.
Option 3 - Delete the old resources.
This might be relevant in your specific case, where the old resources may no longer be needed.
Shared labels and annotations share a common prefix: app.kubernetes.io.
Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.
In order to take full advantage of using these labels, they should be applied on every resource object.
The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.
Read more in the Recommended Labels section.
No.
First of all, as mentioned before, those labels are not specific to Helm and Helm itself never requires that a particular label be present.
On the other hand, the Helm docs recommend using the following Standard Labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.
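For example, once the label is applied, you could list everything carrying it (assuming the resources live in the question's xyz namespace):
$ kubectl get all -n xyz -l app.kubernetes.io/managed-by=Helm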
So it is the role of the chart maintainer to add those labels.
Many Helm chart providers add them to the _helpers.tpl file and let every resource include it:
labels: {{ include "my-chart.labels" . | nindent 4 }}
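A minimal sketch of what such a named template could look like in _helpers.tpl, assuming a chart called my-chart (the exact label set varies between charts):
{{- define "my-chart.labels" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}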