I tried to set up my own container on GKE using gcr.io and keep getting an ImagePullBackOff failure.
Thinking I was doing something wrong, I went back to the tutorial here https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app, followed all the steps, and got the same error. It looks like a credential problem, but I followed all the steps of the tutorial and still had no luck.
How do I debug this error when the logs don't seem to help?
Steps 1-4 of the tutorial work.
kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port 8080
fails with ImagePullBackOff. I thought GKE and gcr.io handled credentials automatically. What am I doing wrong? How do I debug this?
kubectl describe pods hello-web-6444d588b7-tqgdm
Name:           hello-web-6444d588b7-tqgdm
Namespace:      default
Node:           gke-aia-default-pool-9ad6a2ee-j5g7/10.152.0.2
Start Time:     Sat, 27 Oct 2018 06:51:38 +1000
Labels:         pod-template-hash=2000814463
                run=hello-web
Annotations:    kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container hello-web
Status:         Pending
IP:             10.12.2.5
Controlled By:  ReplicaSet/hello-web-6444d588b7
Containers:
  hello-web:
    Container ID:
    Image:          gcr.io/<project-id>/hello-app:v1
    Image ID:
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:  100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qgv8h (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-qgv8h:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qgv8h
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                  From                                         Message
  ----     ------                 ----                 ----                                         -------
  Normal   Scheduled              45m                  default-scheduler                            Successfully assigned hello-web-6444d588b7-tqgdm to gke-aia-default-pool-9ad6a2ee-j5g7
  Normal   SuccessfulMountVolume  45m                  kubelet, gke-aia-default-pool-9ad6a2ee-j5g7  MountVolume.SetUp succeeded for volume "default-token-qgv8h"
  Normal   Pulling                44m (x4 over 45m)    kubelet, gke-aia-default-pool-9ad6a2ee-j5g7  pulling image "gcr.io/<project-id>/hello-app:v1"
  Warning  Failed                 44m (x4 over 45m)    kubelet, gke-aia-default-pool-9ad6a2ee-j5g7  Failed to pull image "gcr.io/<project-id>/hello-app:v1": rpc error: code = Unknown desc = Error response from daemon: repository gcr.io/<project-id>/hello-app not found: does not exist or no pull access
  Warning  Failed                 44m (x4 over 45m)    kubelet, gke-aia-default-pool-9ad6a2ee-j5g7  Error: ErrImagePull
  Normal   BackOff                5m (x168 over 45m)   kubelet, gke-aia-default-pool-9ad6a2ee-j5g7  Back-off pulling image "gcr.io/<project-id>/hello-app:v1"
  Warning  Failed                 48s (x189 over 45m)  kubelet, gke-aia-default-pool-9ad6a2ee-j5g7  Error: ImagePullBackOff
cluster permissions:
User info Disabled
Compute Engine Read/Write
Storage Read Only
Task queue Disabled
BigQuery Disabled
Cloud SQL Disabled
Cloud Datastore Disabled
Stackdriver Logging API Write Only
Stackdriver Monitoring API Full
Cloud Platform Disabled
Bigtable Data Disabled
Bigtable Admin Disabled
Cloud Pub/Sub Disabled
Service Control Enabled
Service Management Read Only
Stackdriver Trace Write Only
Cloud Source Repositories Disabled
Cloud Debugger Disabled
To resolve it, double-check the pod specification and ensure that the repository and image are specified correctly. If they are and it still doesn't work, the problem is usually a permission or network issue preventing the node from reaching the container registry. The describe output above also gives you the hostname of the Kubernetes node the pod was scheduled on, which is useful if you want to test the pull from the node itself.
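Before digging into permissions, it's worth confirming that the image really exists under that path and tag. A quick sketch from Cloud Shell, assuming PROJECT_ID is set as in the tutorial:

# List the image repositories in your project's Container Registry
gcloud container images list --repository=gcr.io/${PROJECT_ID}

# List the tags for hello-app and confirm that v1 is there
gcloud container images list-tags gcr.io/${PROJECT_ID}/hello-app

If the image and tag show up here but the node still can't pull them, the issue is on the access side rather than the image name.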
So what exactly does ImagePullBackOff mean? The status means that a Pod couldn't start because the container runtime on the node was unable to pull the container image from the (private or public) registry. The 'BackOff' part means that Kubernetes will keep retrying the pull, with an increasing delay ('back-off') between attempts.
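You can watch this retry loop directly; here is a small sketch using the pod name from the output above (substitute your own):

# Watch the pod status flip between ErrImagePull and ImagePullBackOff
kubectl get pod hello-web-6444d588b7-tqgdm -w

# Show only this pod's events, including the full image-pull error message
kubectl get events --field-selector involvedObject.name=hello-web-6444d588b7-tqgdm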
After reading some of the docs, I manually added access using these instructions: https://cloud.google.com/container-registry/docs/access-control
and that now allows the sample code to deploy. It looks like the automatic access from GKE to gcr.io didn't work.
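For reference, that page boils down to granting the nodes' service account read access to the Cloud Storage bucket that backs the registry. A minimal sketch, assuming the default Compute Engine service account and the standard gcr.io bucket name (substitute your own project number and ID):

# Grant the nodes' service account read access to the registry's backing bucket
gsutil iam ch \
  serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:objectViewer \
  gs://artifacts.PROJECT_ID.appspot.com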
When creating your GKE cluster, make sure your nodes have the Storage Read Only scope (https://www.googleapis.com/auth/devstorage.read_only); without it they cannot pull from gcr.io.
I tripped over this when creating a GKE cluster via Terraform and had:
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
...
instead of
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/devstorage.read_only"
]
...