I'm trying to install a Kubernetes cluster on my server (Debian 10). The server uses ufw as its firewall. Before creating the cluster I allowed these ports in ufw:
179/tcp, 4789/udp, 5473/tcp, 443/tcp, 6443/tcp, 2379/tcp, 4149/tcp, 10250/tcp, 10255/tcp, 10256/tcp, 9099/tcp
as the Calico docs suggest (https://docs.projectcalico.org/getting-started/kubernetes/requirements), along with this Git repo on Kubernetes security best practices (https://github.com/freach/kubernetes-security-best-practice).
But when I create the cluster, the calico/node pod can't start because Felix is not live (even though I allowed 9099/tcp in ufw):
Liveness probe failed: calico/node is not ready: Felix is not live: Get http://localhost:9099/liveness: dial tcp [::1]:9099: connect: connection refused
If I disable ufw, the cluster is created and there is no error.
So I would like to know how I should configure ufw for Kubernetes to work. If anyone could help, that would be great, thanks!
Edit: My ufw status
To Action From
-- ------ ----
6443/tcp ALLOW Anywhere
9099 ALLOW Anywhere
179/tcp ALLOW Anywhere
4789/udp ALLOW Anywhere
5473/tcp ALLOW Anywhere
2379/tcp ALLOW Anywhere
8181 ALLOW Anywhere
8080 ALLOW Anywhere
###### (v6) LIMIT Anywhere (v6) # allow ssh connections in
Postfix (v6) ALLOW Anywhere (v6)
KUBE (v6) ALLOW Anywhere (v6)
6443 (v6) ALLOW Anywhere (v6)
6783/udp (v6) ALLOW Anywhere (v6)
6784/udp (v6) ALLOW Anywhere (v6)
6783/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
80/tcp (v6) ALLOW Anywhere (v6)
4149/tcp (v6) ALLOW Anywhere (v6)
10250/tcp (v6) ALLOW Anywhere (v6)
10255/tcp (v6) ALLOW Anywhere (v6)
10256/tcp (v6) ALLOW Anywhere (v6)
9099/tcp (v6) ALLOW Anywhere (v6)
6443/tcp (v6) ALLOW Anywhere (v6)
9099 (v6) ALLOW Anywhere (v6)
179/tcp (v6) ALLOW Anywhere (v6)
4789/udp (v6) ALLOW Anywhere (v6)
5473/tcp (v6) ALLOW Anywhere (v6)
2379/tcp (v6) ALLOW Anywhere (v6)
8181 (v6) ALLOW Anywhere (v6)
8080 (v6) ALLOW Anywhere (v6)
53 ALLOW OUT Anywhere # allow DNS calls out
123 ALLOW OUT Anywhere # allow NTP out
80/tcp ALLOW OUT Anywhere # allow HTTP traffic out
443/tcp ALLOW OUT Anywhere # allow HTTPS traffic out
21/tcp ALLOW OUT Anywhere # allow FTP traffic out
43/tcp ALLOW OUT Anywhere # allow whois
SMTPTLS ALLOW OUT Anywhere # open TLS port 465 for use with SMTP to send e-mails
10.32.0.0/12 ALLOW OUT Anywhere on weave
53 (v6) ALLOW OUT Anywhere (v6) # allow DNS calls out
123 (v6) ALLOW OUT Anywhere (v6) # allow NTP out
80/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTP traffic out
443/tcp (v6) ALLOW OUT Anywhere (v6) # allow HTTPS traffic out
21/tcp (v6) ALLOW OUT Anywhere (v6) # allow FTP traffic out
43/tcp (v6) ALLOW OUT Anywhere (v6) # allow whois
SMTPTLS (v6) ALLOW OUT Anywhere (v6) # open TLS port 465 for use with SMTP to send e-mails
Sorry, my ufw rules are a bit messy; I tried too many things to get Kubernetes working.
NOTE: all executable commands begin with $
$ sudo apt update && sudo apt upgrade -y
$ sudo apt install ufw -y
$ sudo ufw allow ssh
Rule added
Rule added (v6)
$ sudo ufw enable
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
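Opening ports alone is often not enough: the CNI also needs ufw to pass forwarded traffic between pod interfaces, which per-port ALLOW rules do not cover. A hedged sketch of the extra configuration that is commonly needed (the `cali+` interface pattern and the 192.168.0.0/16 pod CIDR are assumptions matching the Calico defaults used later with kubeadm):

```shell
# Sketch (assumptions: Calico defaults). Let ufw forward routed traffic
# between pod/host interfaces instead of dropping it.
sudo ufw default allow routed
# Narrower alternatives instead of a blanket allow:
#   sudo ufw allow in on cali+            # Calico's veth interfaces
#   sudo ufw allow from 192.168.0.0/16    # the pod network CIDR
```

Whether the blanket `allow routed` or the scoped rules are appropriate depends on how exposed the host is; the scoped variants keep the attack surface smaller.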
$ sudo ufw allow 179/tcp
$ sudo ufw allow 4789/udp
$ sudo ufw allow 5473/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw allow 6443/tcp
$ sudo ufw allow 2379/tcp
$ sudo ufw allow 4149/tcp
$ sudo ufw allow 10250/tcp
$ sudo ufw allow 10255/tcp
$ sudo ufw allow 10256/tcp
$ sudo ufw allow 9099/tcp
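The per-port ufw commands above can be collapsed into one loop. This sketch only prints the commands so it can be dry-run without root; drop the `echo` (or pipe the output to `sudo sh`) to apply them:

```shell
# Generate one `ufw allow` per required port (dry run: prints commands).
PORTS="179/tcp 4789/udp 5473/tcp 443/tcp 6443/tcp 2379/tcp \
4149/tcp 10250/tcp 10255/tcp 10256/tcp 9099/tcp"
for p in $PORTS; do
  echo "ufw allow $p"
done
```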
$ sudo ufw status
Status: active
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
179/tcp ALLOW Anywhere
4789/udp ALLOW Anywhere
5473/tcp ALLOW Anywhere
443/tcp ALLOW Anywhere
6443/tcp ALLOW Anywhere
2379/tcp ALLOW Anywhere
4149/tcp ALLOW Anywhere
10250/tcp ALLOW Anywhere
10255/tcp ALLOW Anywhere
10256/tcp ALLOW Anywhere
9099/tcp ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
179/tcp (v6) ALLOW Anywhere (v6)
4789/udp (v6) ALLOW Anywhere (v6)
5473/tcp (v6) ALLOW Anywhere (v6)
443/tcp (v6) ALLOW Anywhere (v6)
6443/tcp (v6) ALLOW Anywhere (v6)
2379/tcp (v6) ALLOW Anywhere (v6)
4149/tcp (v6) ALLOW Anywhere (v6)
10250/tcp (v6) ALLOW Anywhere (v6)
10255/tcp (v6) ALLOW Anywhere (v6)
10256/tcp (v6) ALLOW Anywhere (v6)
9099/tcp (v6) ALLOW Anywhere (v6)
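With the rules in place, individual ports can be checked without installing nmap by using the shell's own TCP support. A minimal sketch (assumes a bash-compatible shell, since `/dev/tcp` is a bash feature; `port_open` is a name invented here):

```shell
# port_open HOST PORT -> exit 0 if something accepts TCP connections there.
port_open() {
  (echo > "/dev/tcp/$1/$2") 2>/dev/null
}

# Example: check the API server port from another machine:
# port_open <server-ip> 6443 && echo "6443 reachable" || echo "6443 blocked"
```

Note this only succeeds when a process is actually listening, so test 6443 after `kubeadm init` has started the API server.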
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian buster stable"
$ sudo apt-get update
$ sudo apt-get -y install docker-ce
NOTE: On a production system it is recommended to install a pinned version of Docker:
$ apt-cache madison docker-ce
$ sudo apt-get install docker-ce=<VERSION>
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-555fc8cc5c-wnnvq 1/1 Running 0 26m
calico-node-sngt8 1/1 Running 0 26m
coredns-66bff467f8-2qqlv 1/1 Running 0 55m
coredns-66bff467f8-vptpr 1/1 Running 0 55m
etcd-kubeadm-ufw-debian10 1/1 Running 0 55m
kube-apiserver-kubeadm-ufw-debian10 1/1 Running 0 55m
kube-controller-manager-kubeadm-ufw-debian10 1/1 Running 0 55m
kube-proxy-nx8cz 1/1 Running 0 55m
kube-scheduler-kubeadm-ufw-debian10 1/1 Running 0 55m
Considerations:
If this does not solve the problem, the next troubleshooting steps are:
kubectl describe pod <pod_name> -n kube-system
kubectl get pod <pod_name> -n kube-system
kubectl logs <pod_name> -n kube-system
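To quickly spot which pods to describe, the `kubectl get pods` output can be filtered with awk. This sketch runs on a captured sample (the CrashLoopBackOff line is invented for illustration) so it can be tried without a cluster:

```shell
# Print names of pods whose STATUS column is not "Running".
# $sample stands in for `kubectl get pods -n kube-system --no-headers`.
sample='calico-node-sngt8 0/1 CrashLoopBackOff 3 2m
coredns-66bff467f8-2qqlv 1/1 Running 0 55m'
echo "$sample" | awk '$3 != "Running" {print $1}'
# -> calico-node-sngt8
```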
Let me know in the comments if you run into any problems following these troubleshooting steps.