I have a Kubernetes service on GKE as follows:
$ kubectl describe service staging
Name: staging
Namespace: default
Labels: <none>
Selector: app=jupiter
Type: NodePort
IP: 10.11.246.27
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31683/TCP
Endpoints: 10.8.0.33:1337
Session Affinity: None
No events.
I can access the service from a VM directly via one of its endpoints (10.8.0.21:1337) or via the node port (10.240.251.174:31683 in my case). However, if I try to access 10.11.246.27:80, I get nothing. I've also tried ports 1337 and 31683.
Why can't I access the service via its IP? Do I need a firewall rule or something?
Ways to connect: you have several options for connecting to nodes, pods, and services from outside the cluster. Access services through public IPs: use a service with type NodePort or LoadBalancer to make the service reachable outside the cluster. See the services and kubectl expose documentation.
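For example, a minimal sketch of exposing a deployment externally on GKE (jupiter-deployment is a hypothetical name; adjust the ports to match your app):

# Expose the deployment as a LoadBalancer service on port 80, targeting the container's port 1337:
$ kubectl expose deployment jupiter-deployment --type=LoadBalancer --port=80 --target-port=1337
# Wait for EXTERNAL-IP to be assigned, then curl it:
$ kubectl get service jupiter-deployment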
Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
To access a ClusterIP service for debugging purposes, you can run kubectl port-forward. You will not actually go through the service; instead, kubectl connects you directly to one of the backing pods.
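A sketch of that for the staging service above (local port 8080 is arbitrary; with a recent kubectl you can name the service directly, otherwise pass a pod name):

# Forward local port 8080 to the service's port 80; kubectl picks one backing pod:
$ kubectl port-forward service/staging 8080:80
# In another terminal:
$ curl http://localhost:8080/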
Service IPs are virtual IPs managed by kube-proxy. So, in order for that IP to be meaningful, the client must also be a part of the kube-proxy "overlay" network (have kube-proxy running, pointing at the same apiserver).
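To illustrate the difference, a quick sketch (the busybox image and throwaway pod are just for demonstration): the same request that fails from a plain VM works from inside the cluster, because the pod's node runs kube-proxy:

# From a VM that is not a cluster node (no kube-proxy), this hangs:
$ curl http://10.11.246.27:80/
# From a throwaway pod inside the cluster, it works:
$ kubectl run -it --rm debug --image=busybox --restart=Never -- wget -qO- http://10.11.246.27:80/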
Pod IPs on GCE/GKE are managed by GCE Routes, which is more like an "underlay" of all VMs in the network.
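You can inspect that underlay with gcloud; a sketch (the names and ranges shown are illustrative, yours will differ):

$ gcloud compute routes list
# NAME            NETWORK  DEST_RANGE   NEXT_HOP_INSTANCE   PRIORITY
# gke-...-node-1  default  10.8.0.0/24  .../gke-...-node-1  1000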
There are a couple of ways to access non-public services from outside the cluster. In short: run the client somewhere the service IP is routable (on a cluster node or in a pod), use kubectl port-forward for debugging, or expose the service with type NodePort or LoadBalancer as described above.