Nginx Ingress Controller - Failed Calling Webhook

I set up a k8s cluster using kubeadm (v1.18) on an Ubuntu virtual machine. Now I need to add an Ingress Controller. I decided on nginx (but I'm open to other solutions). I installed it according to the docs, section "bare-metal":

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-0.31.1/deploy/static/provider/baremetal/deploy.yaml

The installation seems fine to me:

kubectl get all -n ingress-nginx

NAME                                            READY   STATUS      RESTARTS   AGE
pod/ingress-nginx-admission-create-b8smg        0/1     Completed   0          8m21s
pod/ingress-nginx-admission-patch-6nbjb         0/1     Completed   1          8m21s
pod/ingress-nginx-controller-78f6c57f64-m89n8   1/1     Running     0          8m31s

NAME                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.107.152.204   <none>        80:32367/TCP,443:31480/TCP   8m31s
service/ingress-nginx-controller-admission   ClusterIP   10.110.191.169   <none>        443/TCP                      8m31s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           8m31s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-78f6c57f64   1         1         1       8m31s

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           2s         8m31s
job.batch/ingress-nginx-admission-patch    1/1           3s         8m31s

However, when trying to apply a custom Ingress, I get the following error:

Error from server (InternalError): error when creating "yaml/xxx/xxx-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: Temporary Redirect

Any idea what could be wrong?

I suspected DNS, but other NodePort services are working as expected and DNS works within the cluster.
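
For what it's worth, this is roughly how I checked that the admission service and in-cluster DNS look sane (the service name matches the kubectl output above; the busybox pod is just an ad-hoc test helper):

# Check that the admission service exists and has an endpoint (the controller pod)
kubectl -n ingress-nginx get svc,endpoints ingress-nginx-controller-admission

# Ad-hoc DNS test from inside the cluster
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup ingress-nginx-controller-admission.ingress-nginx.svc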

The only thing I can see is that I don't have a default-http-backend, which is mentioned in the docs here. However, this seems normal in my case, according to this thread.

Last but not least, I also tried the installation with manifests (after removing the ingress-nginx namespace from the previous installation) and the installation via Helm chart, with the same result.
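
In case it helps someone reproduce this, the Helm route looked roughly like the following (chart location per the official ingress-nginx repo; the release name is arbitrary):

# Add the official ingress-nginx chart repo and install a release
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace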

I'm pretty much a beginner on k8s, and this is my playground cluster. So I'm open to alternative solutions as well, as long as I don't need to set up the whole cluster from scratch.

Update: With "applying a custom Ingress", I mean: kubectl apply -f <myIngress.yaml>

Content of myIngress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /someroute/fittingmyneeds
        pathType: Prefix
        backend:
          serviceName: some-service
          servicePort: 5000
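
(For reference: on clusters where networking.k8s.io/v1 is available, the equivalent manifest uses the newer backend syntax; a sketch with the same placeholder service:)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /someroute/fittingmyneeds
        pathType: Prefix
        backend:
          service:
            name: some-service   # placeholder, as above
            port:
              number: 5000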
asked May 05 '20 by PhotonTamer


3 Answers

Another option you have is to remove the Validating Webhook entirely:

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

I found I had to do that on another issue, but the workaround/solution works here as well.

This isn't the best answer; the best answer is to figure out why this doesn't work. But at some point, you live with workarounds.
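
If you'd rather keep the webhook installed but stop it from blocking applies while you debug, a less destructive sketch (assuming the configuration is named ingress-nginx-admission, as above) is to set its failurePolicy to Ignore:

kubectl patch validatingwebhookconfiguration ingress-nginx-admission \
  --type=json \
  -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'

Either way, re-applying the deploy manifest should recreate the webhook configuration later if you delete or change it.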

I'm installing on Docker for Mac, so I used the cloud rather than the baremetal version:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.34.1/deploy/static/provider/cloud/deploy.yaml

answered Nov 15 '22 by Patrick Gardella


In my case I'd mixed the installations up. I resolved the issue by executing the following steps:

$ kubectl get validatingwebhookconfigurations 

I iterated through the list of configurations returned by the above command and deleted each one using:

$ kubectl delete validatingwebhookconfigurations [configuration-name]
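
If several stale configurations pile up, a loop sketch like this saves some typing (careful: it deletes every ValidatingWebhookConfiguration whose name contains ingress-nginx):

# Delete all ingress-nginx validating webhook configurations
for cfg in $(kubectl get validatingwebhookconfigurations -o name | grep ingress-nginx); do
  kubectl delete "$cfg"
done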
answered Nov 16 '22 by J K


In my case I didn't need to delete the ValidatingWebhookConfiguration. The issue was that I was using a private cluster on GCP, version 1.17.14-gke.1600. If I understood it correctly, on a default Kubernetes installation the validatingwebhook API (which of course is running on the master node) is exposed at port 443. But GCP changed the port to 8443 for security reasons: in order to bind port 443, the service would need root access to the node, and since they didn't want that, they changed it to 8443.

Now, since a private cluster only allows ports 80/443 externally for Ingress on the nodes (that is, all the nodes will only accept requests to these ports), when Kubernetes tries to validate your Ingress against validatingwebhook-address:8443 it fails - it would not fail if it ran on 443. This thread contains more detailed information.
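
You can see the port mapping on the admission Service itself; in the stock manifests it listens on 443 but targets the controller pod's webhook port. A sketch to check it (the targetPort may print as the named port webhook, which maps to 8443 on the pod):

kubectl -n ingress-nginx get svc ingress-nginx-controller-admission \
  -o jsonpath='{.spec.ports[0].port} -> {.spec.ports[0].targetPort}{"\n"}'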

So the current workaround, as recommended by Google itself (but very poorly documented), is adding a firewall rule on GCP that allows inbound (Ingress) TCP requests to your master node at port 8443, so that the other nodes within the cluster can reach the master for the validatingwebhook API running on that port.

As to how to create the rule, this is how I did it (a gcloud equivalent is sketched after the list):

  1. Went to Firewall Rules and added a new one.
  2. In the Network field, I selected the VPC my cluster is in.
  3. Direction of traffic I set as Ingress.
  4. Action on match to Allow.
  5. Targets to Specified target tags.
  6. The target tags can be found in the master node's details, in a property called Network tags. To find it, I opened a new window, went to my cluster's node pools, and found the master node pool. Then I opened one of the nodes to look at the Virtual Machine details, and there I found Network tags. I copied its value and went back to the Firewall Rule form.
  7. Pasted the copied network tag into the tag field.
  8. Under Protocols and ports, checked Specified protocols and ports.
  9. Then checked tcp and entered 8443.
  10. Saved the rule and applied the manifest again.
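
For reference, the same rule can be created from the CLI. A sketch following the documented gcloud shape - the rule name, network name, master CIDR (source range), and node network tag are placeholders you must replace with your cluster's values (the tag is the one from step 6):

gcloud compute firewall-rules create allow-master-to-webhook \
  --network=YOUR_VPC_NETWORK \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443 \
  --source-ranges=YOUR_MASTER_CIDR \
  --target-tags=YOUR_NODE_NETWORK_TAG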

NOTE: Most threads out there will say it's port 9443. That may work, but I first attempted 8443 since it was reported to work on this thread. It worked for me, so I didn't even try 9443.

answered Nov 15 '22 by Mauricio