
Container in GKE can't ping compute instance on the same network

I have created a new cluster in GKE with version 1.10.5-gke.0. I see that my applications cannot reach IPs in the same network, basically instances running on Compute Engine.

I have ssh'd to one of the Kubernetes nodes, and by using the included toolbox I can ping those IP addresses, but I can't if I try from a container running on this cluster.
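For example, the test from inside the cluster looks roughly like this (10.128.0.5 is just a placeholder for the instance's internal IP):

# launch a throwaway busybox pod and ping the VM's internal IP from inside it
kubectl run ping-test --rm -it --image=busybox --restart=Never -- ping -c 3 10.128.0.5

The same ping succeeds from the node's toolbox but fails from the pod.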

I saw that since 1.10 Google disables access scopes for compute & storage, but even after I enable those scopes I still get the same result.

I find it a bit puzzling, as this used to work for all other clusters in the past without any extra config needed.

Am I missing something here?

asked Jul 09 '18 by Apostolos Samatas



2 Answers

An easy way of fixing this is to use the Google Cloud Console.

Go to Navigation Menu -> VPC network -> Firewall rules.

Normally when a cluster is created, a number of rules are created automatically with certain prefixes and suffixes. Look in the table for the rule with a gke- prefix and an -all suffix, e.g. gke-[my_cluster_name]-all. You'll notice that this rule has the source ranges for your pods within the cluster and allows quite a few protocols (tcp, udp, icmp, esp, etc.).

Select this rule and go to Edit. Under Targets, select the drop down and change to All instances in the network.

Alternatively, you can choose Specified target tags or Specified service account, entering the correct values below, such as the service account used by the Compute Engine instance you're trying to reach.
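If you prefer the CLI, here's a rough sketch of the same check with gcloud (gke-my-cluster-all and my-vm-tag below are placeholders; use your actual rule name and tag):

# find the GKE-created rules and inspect the -all rule's source ranges and targets
gcloud compute firewall-rules list --filter="name~^gke-"
gcloud compute firewall-rules describe gke-my-cluster-all

# optionally point the rule at a specific target tag instead of all instances
gcloud compute firewall-rules update gke-my-cluster-all --target-tags=my-vm-tag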

If your Kubernetes version is 1.9.x or later, you can also look at the Troubleshooting documentation for another alternative approach.

Hope all this helps.

answered Sep 21 '22 by iAmcR


I also ran into this issue. I have mongo running on a VM on the default network, and couldn't reach it from inside pods after I recreated my Kubernetes cluster on a new node that was also on the default network.

Adding this firewall rule fixed the issue:

NAME                               NETWORK  DIRECTION  PRIORITY  SRC_RANGES    ALLOW
gke-seqr-cluster-dev-eb823c8e-all  default  INGRESS    1000      10.48.0.0/24  tcp,udp,icmp,esp,ah,sctp

Here, the 10.48.0.0 subnet is based on the cbr0 bridge (looked up by ssh'ing into the Kubernetes node and running ip address):

cbr0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1460 qdisc htb state UP group default qlen 1000
   ..
    inet 10.48.0.1/24 scope global cbr0
       valid_lft forever preferred_lft forever
   ..
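If you don't want to ssh into the node, the per-node pod range should also be visible via kubectl (the podCIDR field, e.g. 10.48.0.0/24):

# print each node's name and pod CIDR
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'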

Another way to get the 10.48.0.1 ip is to install and run traceroute inside a pod:

traceroute <ip of node you're trying to reach>
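For reference, a sketch of the gcloud command that would create a rule like the one above (the name and the 10.48.0.0/24 range come from my setup; substitute your own pod range):

# allow ingress from the pod range to all instances on the default network
gcloud compute firewall-rules create gke-seqr-cluster-dev-eb823c8e-all \
    --network=default \
    --direction=INGRESS \
    --priority=1000 \
    --source-ranges=10.48.0.0/24 \
    --allow=tcp,udp,icmp,esp,ah,sctp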
answered Sep 18 '22 by user553965