Kubernetes - Container image already present on machine

So I have 2 similar deployments on k8s that pull the same image from GitLab. Apparently this caused my second deployment to go into a CrashLoopBackOff error, and I can't seem to connect to the port to check the /healthz of my pod. Logging the pod shows that it received an interrupt signal, while describing the pod shows the following events:

 FirstSeen  LastSeen    Count   From            SubObjectPath                   Type        Reason          Message
  --------- --------    -----   ----            -------------                   --------    ------          -------
  29m       29m     1   default-scheduler                           Normal      Scheduled       Successfully assigned java-kafka-rest-kafka-data-2-development-5c6f7f597-5t2mr to 172.18.14.110
  29m       29m     1   kubelet, 172.18.14.110                          Normal      SuccessfulMountVolume   MountVolume.SetUp succeeded for volume "default-token-m4m55" 
  29m       29m     1   kubelet, 172.18.14.110  spec.containers{consul}             Normal      Pulled          Container image "..../consul-image:0.0.10" already present on machine
  29m       29m     1   kubelet, 172.18.14.110  spec.containers{consul}             Normal      Created         Created container
  29m       29m     1   kubelet, 172.18.14.110  spec.containers{consul}             Normal      Started         Started container
  28m       28m     1   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Killing         Killing container with id docker://java-kafka-rest-development:Container failed liveness probe.. Container will be killed and recreated.
  29m       28m     2   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Created         Created container
  29m       28m     2   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Started         Started container
  29m       27m     10  kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Warning     Unhealthy       Readiness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
  28m       24m     13  kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Warning     Unhealthy       Liveness probe failed: Get http://10.5.59.35:7533/healthz: dial tcp 10.5.59.35:7533: getsockopt: connection refused
  29m       19m     8   kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Normal      Pulled          Container image "r..../java-kafka-rest:0.3.2-dev" already present on machine
  24m       4m      73  kubelet, 172.18.14.110  spec.containers{java-kafka-rest-development}    Warning     BackOff         Back-off restarting failed container

I have tried redeploying the deployments under different images, and that seems to work just fine. However, I don't think this is efficient, since the images are the same throughout. How do I go about this?

Here's what my deployment file looks like:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "java-kafka-rest-kafka-data-2-development"
  labels:
    repository: "java-kafka-rest"
    project: "java-kafka-rest"
    service: "java-kafka-rest-kafka-data-2"
    env: "development"
spec:
  replicas: 1
  selector:
    matchLabels:
      repository: "java-kafka-rest"
      project: "java-kafka-rest"
      service: "java-kafka-rest-kafka-data-2"
      env: "development"
  template:
    metadata:
      labels:
        repository: "java-kafka-rest"
        project: "java-kafka-rest"
        service: "java-kafka-rest-kafka-data-2"
        env: "development"
        release: "0.3.2-dev"
    spec:
      imagePullSecrets:
      - name: ...
      containers:
      - name: java-kafka-rest-development
        image: registry...../java-kafka-rest:0.3.2-dev
        env:
        - name: DEPLOYMENT_COMMIT_HASH
          value: "0.3.2-dev"
        - name: DEPLOYMENT_PORT
          value: "7533"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 7533
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 7533
          timeoutSeconds: 1
        ports:
        - containerPort: 7533
        resources:
          requests:
            cpu: 0.5
            memory: 6Gi
          limits:
            cpu: 3
            memory: 10Gi
        command:
          - /envconsul
          - -consul=127.0.0.1:8500
          - -sanitize
          - -upcase
          - -prefix=java-kafka-rest/
          - -prefix=java-kafka-rest/kafka-data-2
          - java
          - -jar
          - /build/libs/java-kafka-rest-0.3.2-dev.jar
        securityContext:
          readOnlyRootFilesystem: true
      - name: consul
        image: registry.../consul-image:0.0.10
        env:
        - name: SERVICE_NAME
          value: java-kafka-rest-kafka-data-2
        - name: SERVICE_ENVIRONMENT
          value: development
        - name: SERVICE_PORT
          value: "7533"
        - name: CONSUL1
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node1
        - name: CONSUL2
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node2
        - name: CONSUL3
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: node3
        - name: CONSUL_ENCRYPT
          valueFrom:
            configMapKeyRef:
              name: consul-config-...
              key: encrypt
        ports:
        - containerPort: 8300
        - containerPort: 8301
        - containerPort: 8302
        - containerPort: 8400
        - containerPort: 8500
        - containerPort: 8600
        command: [ entrypoint, agent, -config-dir=/config, -join=$(CONSUL1), -join=$(CONSUL2), -join=$(CONSUL3), -encrypt=$(CONSUL_ENCRYPT) ]
      terminationGracePeriodSeconds: 30
      nodeSelector:
        env: ...
Asked by AlphaCR on Nov 29 '18.

People also ask

What is image pull back off?

The ImagePullBackOff error occurs when the image path is incorrect, the network fails, or the kubelet does not succeed in authenticating with the container registry. Kubernetes initially throws the ErrImagePull error, and then after retrying a few times, “pulls back” and schedules another download attempt.

Where does k8s pull images from?

During the deployment of an application to a Kubernetes cluster, you'll typically want one or more images to be pulled from a Docker registry. In the application's manifest file you specify the images to pull, the registry to pull them from, and the credentials to use when pulling the images.
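In manifest terms, the registry comes from the image reference itself and the credentials from imagePullSecrets. A minimal sketch; the secret name and registry host here are placeholders, not values from the question:

```yaml
# Hypothetical pod spec fragment. The secret would typically be created
# beforehand with `kubectl create secret docker-registry ...`.
spec:
  imagePullSecrets:
  - name: gitlab-registry-secret          # assumed name
  containers:
  - name: app
    image: registry.example.com/group/app:1.0.0   # registry host + path + tag
```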

Does Kubernetes auto pull latest image?

If the image is tagged latest, then Kubernetes will assume the imagePullPolicy to be Always. An image with no tag is assumed to be latest, and so its policy is set to Always. Otherwise, the orchestrator will default the imagePullPolicy to IfNotPresent.
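This defaulting also explains the "Container image ... already present on machine" events in the question: with IfNotPresent, the kubelet reuses the cached image, which is normal and not itself an error. The default can be overridden explicitly; a sketch against a container spec like the one above (the registry host is a placeholder):

```yaml
# Force a fresh pull even for a fixed, non-:latest tag.
containers:
- name: java-kafka-rest-development
  image: registry.example.com/java-kafka-rest:0.3.2-dev   # host assumed
  imagePullPolicy: Always   # default for this tag would be IfNotPresent
```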


1 Answer

To those having this problem: I've discovered the cause and the solution to my question. The problem was in my service.yml, where targetPort pointed to a port different from the one exposed in my docker image. Make sure the Service's targetPort matches the port your container actually listens on.

Hope this helps.
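As a sketch of the fix (the original service.yml isn't shown, so the name, selector, and Service port below are assumptions mirrored from the Deployment's labels), the key is that targetPort lines up with the containerPort / DEPLOYMENT_PORT of 7533:

```yaml
# Hypothetical corrected service.yml for the deployment in the question.
apiVersion: v1
kind: Service
metadata:
  name: java-kafka-rest-kafka-data-2-development
spec:
  selector:
    service: "java-kafka-rest-kafka-data-2"
    env: "development"
  ports:
  - port: 80            # port the Service exposes inside the cluster (assumed)
    targetPort: 7533    # must match the containerPort the app listens on
    protocol: TCP
```

If targetPort pointed at, say, 8080 while the app listened on 7533, the probes' "connection refused" errors in the events above would be exactly the symptom.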

Answered by AlphaCR on Oct 08 '22.