I have set up a small Kubernetes cluster on Raspberry Pis, currently consisting of 1 master and 1 worker. I have created a simple deployment of NGINX and a NodePort service for it. My YAML looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - nodePort: 30333
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.16.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              protocol: TCP
      restartPolicy: Always
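Both objects are applied from a single manifest; the filename below (nginx.yaml) is just an assumed local name:

kubectl apply -f nginx.yaml
# check that the Service selector actually matches the deployment's pods
kubectl get endpoints nginx-service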
The pods are up and running, and so is the service:
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-5d66cc795f-bgqdp   1/1     Running   0          65m
nginx-5d66cc795f-mb8qw   1/1     Running   0          65m

$ kubectl get svc
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP        13d
nginx-service   NodePort    10.104.133.199   <none>        80:30333/TCP   66m
But I am unable to access NGINX from the master node:

$ curl http://192.168.178.101:30333
curl: (7) Failed to connect to 192.168.178.101 port 30333: Connection timed out
If I try from the worker node it works fine and NGINX responds. From the worker I can use either <IP address>:30333 or <hostname>:30333, although localhost:30333 does not work!
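To illustrate, requests along these lines succeed when run on the worker (k8-w1) itself, while the same requests from the master time out:

curl http://192.168.178.101:30333
curl http://k8-w1:30333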
Connectivity from my master to the worker otherwise seems fine; I can ping, SSH, etc. from there using either the IP address or the hostname.
Any ideas what I have done wrong?
Output from kubectl get nodes -o wide:
$ kubectl get nodes -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
k8-master   Ready    master   13d   v1.17.4   192.168.178.100   <none>        Raspbian GNU/Linux 10 (buster)   4.19.97-v7+      docker://19.3.8
k8-w1       Ready    worker   13d   v1.17.4   192.168.178.101   <none>        Raspbian GNU/Linux 10 (buster)   4.19.97-v7+      docker://19.3.8
Output from kubectl describe service:
$ kubectl describe service nginx-service
Name:                     nginx-service
Namespace:                default
Labels:                   app=nginx
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx-service","namespace":"default"},"s...
Selector:                 app=nginx
Type:                     NodePort
IP:                       10.104.133.199
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30333/TCP
Endpoints:                10.44.0.1:80,10.44.0.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Update: I tried a simple telnet to the worker on NodePort 30333, but I get a connection timed out error.
So I then removed the NodePort service and tried a simple port-forward command:
kubectl port-forward pod/nginx-545b8fdd47-brg7r 8080:80
This worked OK, and I could connect from the master to the worker via this port.
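By default kubectl port-forward binds to localhost on the machine running kubectl (the master here), so the check was something along the lines of:

curl http://localhost:8080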
But the NodePort service still doesn't work!
Declaring a Service as type NodePort exposes it on each node's IP at the NodePort (a fixed port for that Service, in the default range 30000-32767). You can then access the Service from outside the cluster by requesting <NodeIP>:<NodePort>.
So after many hours and days I think I have found the source of the problem.
I found this blog: https://limpygnome.com/2019/09/21/raspberry-pi-kubernetes-cluster/
Which led me to this bug report: https://github.com/kubernetes-sigs/kubespray/issues/4674
So executing the following would allow the connection:
sudo iptables -P FORWARD ACCEPT
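To confirm this is the issue in the first place, the current default policy of the FORWARD chain can be inspected; a policy of DROP here is what makes forwarded NodePort traffic time out:

sudo iptables -L FORWARD -n | head -1
# prints e.g. 'Chain FORWARD (policy DROP)' when forwarded traffic is being dropped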
But I could not get this to stick after a reboot (even using iptables-persistent); I assume this is because the rules are updated dynamically by Docker/Kubernetes during startup and after changes.
This led me to investigate further, and I found information in the Weave Net documentation about issues with Kubernetes networking on hosts that use iptables v1.8 and higher. (This also seems to have affected other Kubernetes networking providers, although some of those issues may have been resolved.) I then saw in my Weave log files that it was indeed rejecting requests.
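The rejected packets show up in the logs of the weave-net pods; assuming a standard Weave Net deployment in the kube-system namespace, they can be viewed with something like:

kubectl logs -n kube-system -l name=weave-net -c weave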
So, by executing the following on all of my nodes, I was able to get this working permanently:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
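Depending on the setup, it may make sense to switch the related tools to the legacy backend as well (these alternatives only exist if the corresponding packages are installed), then reboot so all rules are recreated under the legacy backend:

sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot
# afterwards, 'iptables --version' should report '(legacy)' rather than '(nf_tables)'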
I can now access the service via the NodePort from all nodes, and externally from outside the cluster on any node's IP.