 

Invalid spec selector after upgrading helm template


I've upgraded the Helm templates (by hand).

Fragment of the previous deployment.yaml:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "measurement-collector.fullname" . }}
  labels:
    app: {{ template "measurement-collector.name" . }}
    chart: {{ template "measurement-collector.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "measurement-collector.name" . }}
      release: {{ .Release.Name }}

New one:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ include "measurement-collector.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
    helm.sh/chart: {{ include "measurement-collector.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}

new service.yaml:

  name: {{ include "measurement-collector.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
    helm.sh/chart: {{ include "measurement-collector.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "measurement-collector.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}

Then after running:

helm upgrade -i measurement-collector chart/measurement-collector --namespace prod --wait

I get:

Error: UPGRADE FAILED: Deployment.apps "measurement-collector" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"measurement-collector", "app.kubernetes.io/instance":"measurement-collector"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

pixel


People also ask

Does helm upgrade delete resources?

You can opt out of resource deletion with resource policies. Helm commands such as uninstall, upgrade, or rollback would normally result in the deletion of a resource (for example a Secret created by the chart). But by using the resource policy annotation, Helm will skip the deletion and allow the resource to be orphaned.
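
A minimal sketch of that annotation, using a hypothetical Secret as the kept resource:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret   # hypothetical name, purely for illustration
  annotations:
    # tells Helm to leave this object in place on uninstall/upgrade/rollback
    "helm.sh/resource-policy": keep
type: Opaque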

How do you update helm chart with new values?

To perform a Helm release upgrade using the CLI, run: helm upgrade <release name> <chart directory> -f my-values.yaml, which applies the configuration specified in the customized my-values.yaml file. After a successful upgrade, Helm prints a confirmation message.
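
For example, with the chart from this question (assuming a customized my-values.yaml file exists):

helm upgrade measurement-collector chart/measurement-collector -f my-values.yaml --namespace prod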

What does helm upgrade force do?

Sometimes, though, Helm users want to make sure that the pods are restarted. That's where the --force flag comes in. Instead of modifying the Deployment (or similar object), it will delete and re-create it. This forces Kubernetes to delete the old pods and create new ones.
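
Applied to the command from this question, that would look like the following (note that deleting and re-creating the Deployment removes the old pods before new ones come up, so expect a brief interruption):

helm upgrade -i measurement-collector chart/measurement-collector --namespace prod --wait --force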


2 Answers

If you change the selector label, then you will need to purge the release first before deploying.
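
A sketch of what that looks like, assuming Helm 2 (which the apps/v1beta2 templates suggest); on Helm 3 the equivalent of a purge is helm uninstall:

# Helm 2: remove the release and its history completely
helm delete --purge measurement-collector

# Helm 3 equivalent (assumption): helm uninstall measurement-collector --namespace prod

# then install again with the new selector labels
helm upgrade -i measurement-collector chart/measurement-collector --namespace prod --wait

Be aware that this deletes the running Deployment and Service, so it causes downtime for the workload.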


TigerBear


Though @TigerBear's answer is correct, I think I need to explain it in a bit more detail. This problem is caused by a simple reason: the immutability of selectors. You cannot update selectors for (I am not sure this is the complete list, feel free to correct me):

  1. ReplicaSets
  2. Deployments
  3. DaemonSets

In other words, if you have, for example, a Deployment with the label 'my-app: ABC' in its selector, then change that selector label to 'my-app: XYZ' and simply apply the change, e.g. like this:

kubectl apply -f deployment-file-name.yml 

it will not work - you have to recreate the deployment.
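
One way to do that (assuming the same file name as above; note that deleting the Deployment also deletes its pods, so expect a short outage):

# delete the old Deployment, then apply the manifest with the new selector
kubectl delete -f deployment-file-name.yml
kubectl apply -f deployment-file-name.yml

Alternatively, kubectl replace --force -f deployment-file-name.yml performs the same delete-and-recreate in a single step.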

There is a related Kubernetes issue on GitHub, and there is also a short note about this in the Deployment documentation.


misha2048