I'm using kubectl apply
to update my Kubernetes pods:
kubectl apply -f /my-app/service.yaml
kubectl apply -f /my-app/deployment.yaml
Below is my service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    run: my-app
spec:
  type: NodePort
  selector:
    run: my-app
  ports:
  - protocol: TCP
    port: 9000
    nodePort: 30769
Below is my deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      run: my-app
  replicas: 2
  template:
    metadata:
      labels:
        run: my-app
    spec:
      containers:
      - name: my-app
        image: dockerhubaccount/my-app-img:latest
        ports:
        - containerPort: 9000
          protocol: TCP
      imagePullSecrets:
      - name: my-app-img-credentials
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
This works fine the first time, but on subsequent runs, my pods are not getting updated.
I have read the suggested workaround at https://github.com/kubernetes/kubernetes/issues/33664 which is:
kubectl patch deployment my-app -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
I was able to run the above command, but it did not resolve the issue for me.
I know that I can trigger pod updates by manually changing the image tag from "latest" to another tag, but I want to make sure I get the latest image without having to check Docker Hub.
Any help would be greatly appreciated.
If nothing changes in the deployment spec, the pods will not be updated for you. This is one of many reasons it is not recommended to use :latest, as the other answer explains in more detail. The Deployment controller is very simple and pretty much just does DeepEquals(old.Spec.Template, new.Spec.Template), so you need some actual change, such as the one your PATCH call makes by setting a label with the current datetime.
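If your kubectl is version 1.15 or newer, you don't need the label-patching workaround at all: kubectl rollout restart does the same trick for you, stamping a kubectl.kubernetes.io/restartedAt annotation onto the pod template so the controller sees an actual change:

```shell
# Trigger a rolling restart of the Deployment. Combined with
# imagePullPolicy: Always, each replacement pod re-pulls the image,
# so the pods come back up on the newest :latest.
kubectl rollout restart deployment/my-app

# Watch the rollout until all new pods are ready.
kubectl rollout status deployment/my-app
```

This respects the RollingUpdate strategy in your deployment.yaml, so you keep the maxUnavailable/maxSurge guarantees during the restart.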
You're missing an imagePullPolicy in your deployment. Try this:

containers:
- name: my-app
  image: dockerhubaccount/my-app-img:latest
  imagePullPolicy: Always

The default policy for most tags is IfNotPresent. (Strictly speaking, when the tag is :latest and imagePullPolicy is omitted, Kubernetes already defaults it to Always; setting it explicitly makes the intent unmistakable. Note that a pull alone does not replace running pods, so you still need something to trigger a rollout, as the other answer explains.)
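After applying the change, you can confirm the policy actually landed on the live object (this assumes the Deployment name my-app from the question):

```shell
# Print the imagePullPolicy of the first container in the Deployment.
kubectl get deployment my-app \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'
```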
I will incorporate two notes from the link:

Note: You should avoid using the :latest tag when deploying containers in production, as it is harder to track which version of the image is running and more difficult to roll back properly.

Note: The caching semantics of the underlying image provider make even imagePullPolicy: Always efficient. With Docker, for example, if the image already exists, the pull attempt is fast because all image layers are cached and no image download is needed.
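If you want deterministic rollouts without :latest at all, one option is to reference the image by digest instead of by tag, so every deploy is a genuine spec change and rolls the pods. This is only a sketch; the sha256 value below is a placeholder, and you would substitute the real digest reported by your registry or by docker images --digests:

containers:
- name: my-app
  # Pinning by digest (placeholder value shown) guarantees the exact image,
  # and changing the digest is the spec change that triggers a rollout.
  image: dockerhubaccount/my-app-img@sha256:0000000000000000000000000000000000000000000000000000000000000000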