I have a running GCP Kubernetes cluster. I managed to deploy some services and exposed them successfully using kubectl expose ... type="LoadBalancer". However, one particular new service is not working. I know there may be a thousand causes to check, but the Docker images I build are very compact, so there are no useful tools inside them that I could run via kubectl exec in a pod or container.
Question: what are my diagnostic options using cluster-side tools only? What kind of logs can I inspect, and which environment variables can I read?
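To clarify what I mean by "cluster-side tools only": commands like these go entirely through the API server, so they work even when the image ships no shell or debugging binaries (the pod and container names are the ones from my cluster, shown in the output below):

```shell
# Stream a container's stdout/stderr; no shell needed inside the image
kubectl logs iservport-shipfo-12873703-wrh37 -c iservport-shipfo

# Logs of the previous container instance, useful after a crash/restart
kubectl logs iservport-shipfo-12873703-wrh37 -c iservport-shipfo --previous

# Cluster-level events, e.g. scheduling or probe failures
kubectl get events --sort-by=.metadata.creationTimestamp
```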
UPDATED:
$ kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
helianto-mailer-1024769093-6407d   2/2       Running   0          6d
helianto-spring-2246525676-l54p9   2/2       Running   0          6d
iservport-shipfo-12873703-wrh37    2/2       Running   0          13h
$ kubectl describe pod iservport-shipfo-12873703-wrh37
Name:          iservport-shipfo-12873703-wrh37
Namespace:     default
Node:          gke-iservport01-default-pool-xxx/xx.xx.xx.xx
Start Time:    Tue, 14 Mar 2017 17:28:18 -0300
Labels:        app=SHIPFO
               pod-template-hash=12873703
Status:        Running
IP:            yy.yy.yy.yy
Controllers:   ReplicaSet/iservport-shipfo-12873703
Containers:
  iservport-shipfo:
    Container ID:   docker://...
    Image:          us.gcr.io/mvps-156214/iservport-xxx
    Image ID:       docker://...
    Port:           8085/TCP
    Requests:
      cpu:  100m
    State:          Running
      Started:      Tue, 14 Mar 2017 17:28:33 -0300
    Ready:          True
    Restart Count:  0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mmeza (ro)
    Environment Variables:
      SPRING_PROFILES_ACTIVE:  gcp
      HELIANTO_MAILER_URL:     http://10.35.254.197:8082
  cloudsql-proxy:
    Container ID:   docker://...
    Image:          b.gcr.io/cloudsql-docker/gce-proxy:1.05
    Image ID:       docker://...
    Port:
    Command:
      /cloud_sql_proxy
      --dir=/cloudsql
      -instances=mvps-156214:us-east1-b:helianto01=tcp:3306
      -credential_file=/secrets/cloudsql/credentials.json
    Requests:
      cpu:  100m
    State:          Running
      Started:      Tue, 14 Mar 2017 17:28:33 -0300
    Ready:          True
    Restart Count:  0
    Volume Mounts:
      /cloudsql from cloudsql (rw)
      /etc/ssl/certs from ssl-certs (rw)
      /secrets/cloudsql from cloudsql-oauth-credentials (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mmeza (ro)
    Environment Variables:  <none>
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  cloudsql-oauth-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cloudsql-oauth-credentials
  ssl-certs:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/ssl/certs
  cloudsql:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-mmeza:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mmeza
QoS Class:    Burstable
Tolerations:  <none>
No events.
$ kubectl get svc
NAME                      CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
helianto-mailer-service   10.35.254.197   <nodes>           443:32178/TCP,80:30771/TCP   12d
helianto-spring           10.35.241.27    xxx.xxx.xxx.xxx   80:30974/TCP                 52d
iservport-shipfo          10.35.240.129   xx.xxx.xxx.xxx    80:32598/TCP                 14h
kubernetes                10.35.240.1     <none>            443/TCP                      53d
$ kubectl describe svc iservport-shipfo
Name:                   iservport-shipfo
Namespace:              default
Labels:                 app=SHIPFO
Selector:               app=SHIPFO
Type:                   LoadBalancer
IP:                     10.35.240.129
LoadBalancer Ingress:   xx.xxx.xxx.xxx
Port:                   <unset> 80/TCP
NodePort:               <unset> 32598/TCP
Endpoints:              10.32.4.26:8085
Session Affinity:       None
No events.
To find the cluster IP address of a Kubernetes pod, use the kubectl get pod command with the option -o wide. This option lists more information, including the node each pod is running on and the pod's cluster IP: the IP column contains the internal cluster IP address of each pod.
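A short sketch of that command (the label selector is taken from the app=SHIPFO label shown in the describe output; exact columns may vary by kubectl version):

```shell
# -o wide adds NODE and IP columns to the default pod listing
kubectl get pod -o wide

# Restrict the listing to the suspect pod via its label
kubectl get pod -l app=SHIPFO -o wide
```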
You need to make sure your service is responding on its HTTP port. You could do a port-forward from your pod to your local desktop. Please replace the values pod_name, pod_port and local_port in the command below.
kubectl port-forward <pod_name> <local_port>:<pod_port>
After this, access http://localhost:&lt;local_port&gt; and check whether it returns anything. This way you can confirm whether your application itself is responding, independently of the Service and load balancer.
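For example, with the pod name and container port (8085) from the describe output above — using 8085 as the local port is an arbitrary choice:

```shell
# Forward local port 8085 to container port 8085 of the pod
kubectl port-forward iservport-shipfo-12873703-wrh37 8085:8085 &

# In another terminal: probe the app directly, bypassing the Service/LoadBalancer
curl -v http://localhost:8085/
```

If this returns a response but the LoadBalancer IP does not, the problem is in the Service/firewall path rather than in the application.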