My deployment is using a couple of volumes, all defined as ReadWriteOnce.
When applying the deployment to a clean cluster, the pod is created successfully.
However, if I update my deployment (e.g. update the container image), the new pod created for my deployment always fails on volume mount:
/Mugen$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-app-556c8d646b-4s2kg 5/5 Running 1 2d
my-app-6dbbd99cc4-h442r 0/5 ContainerCreating 0 39m
/Mugen$ kubectl describe pod my-app-6dbbd99cc4-h442r
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m default-scheduler Successfully assigned my-app-6dbbd99cc4-h442r to gke-my-test-default-pool-671c9db5-k71l
Warning FailedAttachVolume 9m attachdetach-controller Multi-Attach error for volume "pvc-b57e8a7f-1ca9-11e9-ae03-42010a8400a8" Volume is already used by pod(s) my-app-556c8d646b-4s2kg
Normal SuccessfulMountVolume 9m kubelet, gke-my-test-default-pool-671c9db5-k71l MountVolume.SetUp succeeded for volume "default-token-ksrbf"
Normal SuccessfulAttachVolume 9m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-2cc1955a-1cb2-11e9-ae03-42010a8400a8"
Normal SuccessfulAttachVolume 9m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-2c8dae3e-1cb2-11e9-ae03-42010a8400a8"
Normal SuccessfulMountVolume 9m kubelet, gke-my-test-default-pool-671c9db5-k71l MountVolume.SetUp succeeded for volume "pvc-2cc1955a-1cb2-11e9-ae03-42010a8400a8"
Normal SuccessfulMountVolume 9m kubelet, gke-my-test-default-pool-671c9db5-k71l MountVolume.SetUp succeeded for volume "pvc-2c8dae3e-1cb2-11e9-ae03-42010a8400a8"
Warning FailedMount 52s (x4 over 7m) kubelet, gke-my-test-default-pool-671c9db5-k71l Unable to mount volumes for pod "my-app-6dbbd99cc4-h442r_default(affe75e0-1edd-11e9-bb45-42010a840094)": timeout expired waiting for volumes to attach or mount for pod "default"/"my-app-6dbbd99cc4-h442r". list of unmounted volumes=[...]. list of unattached volumes=[...]
What is the best strategy for applying changes to such a deployment, then? Will there have to be some service outage in order to use the same persistent volumes? (I wouldn't want to create new volumes; the data should be preserved.)
You will need to tolerate an outage here, due to the access mode. A Deployment strategy (.spec.strategy.type) of Recreate will achieve this: it deletes the existing Pods (unmounting the volumes) before creating new ones. See https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/recreate/README.md
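As a minimal sketch, the manifest would look something like the following. The names (my-app, my-app-data, /data) are placeholders for your actual Deployment, container image, and PersistentVolumeClaim:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  strategy:
    type: Recreate   # scale old Pods down to zero (releasing the RWO volumes) before starting new ones
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:new-tag        # placeholder: the updated image
        volumeMounts:
        - name: data
          mountPath: /data           # placeholder mount path
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: my-app-data     # placeholder: your existing PVC, so the data is preserved

With Recreate, kubectl apply first terminates the old Pods, letting the ReadWriteOnce volumes detach, and only then creates the new Pods. That is what causes the brief outage, but it avoids the Multi-Attach error and keeps the existing PVCs and their data intact.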