GKE: secure access to services from outside the cluster

Is there any way to access the 'internal' services of the cluster (those not exposed outside) in a secure way from the outside?

The goal is simple: I need to debug clients of those services, so I need to access them, but I don't want to expose them outside.

On a regular single host I would normally tunnel to the host over SSH and map the ports to localhost. I tried using an SSHD container, but that didn't get me very far: the services don't run on that container, and since the services' IPs are assigned dynamically I'm not sure how to reach the next hop on the network.
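For reference, this is the kind of single-host tunnel I mean (the host and ports are just placeholders):

# forward local port 8080 to port 80 on the remote host, over SSH
ssh -L 8080:localhost:80 user@remote-host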

Ideally a VPN would be much more convenient, but GKE doesn't seem to support VPNs for road-warrior situations.

Is there any solution for this use-case?

Thanks for your input.

EDIT:

I see here: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/accessing-the-cluster.md#ways-to-connect

that the only supported ways to connect right now are HTTP/HTTPS, meaning I can proxy HTTP calls but not arbitrary ports.

asked Oct 04 '15 by MrE



2 Answers

You can do this with a combination of running kubectl proxy on your dev machine and using the proxying functionality built into the master (that's a lot of proxying, but bear with me).

First, run kubectl proxy. Note the port that is bound locally (it should be 8001 by default). This will cause kubectl to create a tunnel to your master instance that you can hit locally without needing to pass any authentication (technically, you can do all of the following steps without doing this first by hitting the master directly, but this is simpler for debugging).
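A minimal sketch of that first step (the --port flag is optional; 8001 is the default):

# open a locally-bound, authenticated tunnel to the Kubernetes apiserver
kubectl proxy --port=8001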

Next, point a client (web browser, curl, etc) at http://localhost:8001/api/v1/proxy/namespaces/<ns>/services/<svc>/, replacing <ns> with the namespace in which your service is configured and <svc> with the name of your service. You can also append a particular request path to the end of the URL, so if your pods behind the service are hosting a file called data.json you would append that to the end of the request path.
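For example, assuming a service named my-service in the default namespace whose pods serve data.json (both names are placeholders):

# fetch data.json from my-service through the apiserver proxy
curl http://localhost:8001/api/v1/proxy/namespaces/default/services/my-service/data.json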

This is how the update-demo tutorial works, so if you get stuck I'd recommend walking through that example and taking a close look at what the JavaScript does (it isn't too complicated).

answered Sep 22 '22 by Robert Bailey

After trying the many methods explained in the doc mentioned above, what worked for me was:

1) Create an SSHD daemon container to SSH into the cluster

2) Create an ssh Service with type: NodePort (a sketch of such a Service follows the examples below)

3) Get the assigned node port with kubectl describe service sshd

4) Use SSH port forwarding to reach the service:

ssh -L <local-port>:<my-k8s-service-name>:<my-k8s-service-port> -p <sshd-port> user@sshd-container

For example:

ssh -L 2181:zookeeper:2181 -p 12345 root@sshd-container

Then I have my zookeeper service on localhost:2181. For more port mappings, use alternate local ports.
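For step 2, a minimal sketch of what that Service could look like, assuming the SSHD pod carries the (placeholder) label app: sshd:

# sshd-service.yaml -- expose the SSHD pod on a port of every node
apiVersion: v1
kind: Service
metadata:
  name: sshd
spec:
  type: NodePort
  selector:
    app: sshd
  ports:
  - port: 22
    targetPort: 22

After kubectl create -f sshd-service.yaml, the kubectl describe service sshd from step 3 shows the randomly assigned node port (12345 in the example above). To map several services at once, stack -L flags, one local port each (kafka:9092 here is a placeholder for any second in-cluster service):

ssh -L 2181:zookeeper:2181 -L 9092:kafka:9092 -p 12345 root@sshd-container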

answered Sep 25 '22 by MrE