
Kubernetes not waiting for reactjs to load

I have a GKE cluster. I am trying to deploy a React frontend app, but it seems like Kubernetes restarts the pod before it can fully load. I can run the container manually with Docker and the app loads successfully, but it takes a long time to load (about 10 minutes), I think because I am using the most basic machine types in GCP.
I am trying to use probes so that Kubernetes waits until the app is up and running, but I cannot make it work. Is there any other way to tell Kubernetes to wait for app startup? Thank you.

this is my deploy file:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: livenessprobe
  name: livenessprobe
spec:
  replicas: 1
  selector:
    matchLabels:
      app: livenessprobe
  template:
    metadata:
      labels:
        app: livenessprobe
    spec:
      containers:
      - image: mychattanooga:v1
        name: mychattanooga
        livenessProbe:
          httpGet:
            path: /healthz
            port: 3000
          initialDelaySeconds: 99
          periodSeconds: 30
        resources: {}

The pod restarts every 5 seconds or so, then I get CrashLoopBackOff, and it restarts again .....

kubectl get events:

assigned default/mychattanooga-85f44599df-t6tnr to gke-cluster-2-default-pool-054176ff-wsp6
13m         Normal    Pulled                         pod/mychattanooga-85f44599df-t6tnr               Container image "#####/mychattanooga@sha256:03dd2d6ef44add5c9165410874cee9155af645f88896e5d5cafb883265c3d4c9" already present on machine
13m         Normal    Created                        pod/mychattanooga-85f44599df-t6tnr               Created container mychattanooga-sha256-1
13m         Normal    Started                        pod/mychattanooga-85f44599df-t6tnr               Started container mychattanooga-sha256-1
13m         Warning   BackOff                        pod/mychattanooga-85f44599df-t6tnr               Back-off restarting failed container

kubectl describe pod:

Name:           livenessprobe-5f9b566f76-dqk5s
Namespace:      default
Priority:       0
Node:           gke-cluster-2-default-pool-054176ff-wsp6/10.142.0.2
Start Time:     Wed, 01 Jul 2020 04:01:22 -0400
Labels:         app=livenessprobe
                pod-template-hash=5f9b566f76
Annotations:    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container mychattanooga
Status:         Running
IP:             10.36.0.58
IPs:            <none>
Controlled By:  ReplicaSet/livenessprobe-5f9b566f76
Containers:
  mychattanooga:
    Container ID:   docker://cf33dafd0bb21fa7ddc86d96f7a0445d6d991e3c9f0327195db355f1b3aca526
    Image:          #####/mychattanooga:v1
    Image ID:       docker-pullable://gcr.io/operational-datastore/mychattanooga@sha256:03dd2d6ef44add5c9165410874cee9155af645f88896e5d5cafb883265c3d4c9
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 01 Jul 2020 04:04:35 -0400
      Finished:     Wed, 01 Jul 2020 04:04:38 -0400
    Ready:          False
    Restart Count:  5
    Requests:
      cpu:        100m
    Liveness:     http-get http://:3000/healthz delay=999s timeout=1s period=300s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zvncw (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-zvncw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zvncw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                                               Message
  ----     ------     ----                    ----                                               -------
  Normal   Scheduled  4m46s                   default-scheduler                                  Successfully assigned default/livenessprobe-5f9b566f76-dqk5s to gke-cluster-2-default-pool-054176ff-wsp6
  Normal   Pulled     3m10s (x5 over 4m45s)   kubelet, gke-cluster-2-default-pool-054176ff-wsp6  Container image "#######/mychattanooga:v1" already present on machine
  Normal   Created    3m10s (x5 over 4m45s)   kubelet, gke-cluster-2-default-pool-054176ff-wsp6  Created container mychattanooga
  Normal   Started    3m10s (x5 over 4m45s)   kubelet, gke-cluster-2-default-pool-054176ff-wsp6  Started container mychattanooga
  Warning  BackOff    2m43s (x10 over 4m38s)  kubelet, gke-cluster-2-default-pool-054176ff-wsp6  Back-off restarting failed container

This is my Dockerfile:

FROM node:latest

# Copy source code
COPY source/ /opt/app

# Change working directory
WORKDIR /opt/app

# install stuff
RUN npm install

# Expose API port to the outside
EXPOSE 3000

# Launch application
CMD ["npm", "start"]
Asked by Roberto Rios on Feb 13 '26.
1 Answer

From the docs here, you can protect slow-starting containers with startup probes:

Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a failureThreshold * periodSeconds long enough to cover the worst case startup time.

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10
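Adapted to the Deployment in the question (path /healthz on port 3000), this might look like the sketch below. The failureThreshold of 60 is an assumption sized for the ~10-minute worst-case startup mentioned in the question; tune it to your app's actual startup time.

```yaml
        # Sketch: probes for the mychattanooga container, assuming the app
        # eventually serves HTTP 200 on /healthz at port 3000.
        startupProbe:
          httpGet:
            path: /healthz
            port: 3000
          failureThreshold: 60   # 60 * 10s = up to 600s allowed for startup
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: 3000
          periodSeconds: 30
```

While the startup probe has not yet succeeded, the kubelet holds off on liveness and readiness checks, so the container gets the full failureThreshold × periodSeconds window before Kubernetes steps in; once the startup probe succeeds, the liveness probe takes over.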
Answered by Arghya Sadhu on Feb 16 '26.
