Currently, I'm trying to create a Kubernetes cluster on Google Cloud with two load balancers: one for the backend (Spring Boot) and another for the frontend (Angular), where each service (load balancer) communicates with 2 replicas (pods). To achieve that, I created the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
spec:
  rules:
  - http:
      paths:
      - path: /rest/v1/*
        backend:
          serviceName: sample-backend
          servicePort: 8082
      - path: /*
        backend:
          serviceName: sample-frontend
          servicePort: 80
The ingress above lets the frontend app communicate with the REST API exposed by the backend app. However, I have to create sticky sessions so that every user communicates with the same pod, because of the authentication mechanism provided by the backend. To clarify: if a user authenticates on pod #1, the cookie will not be recognized by pod #2.
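To make the problem concrete, this is roughly how it shows up from the command line (EXTERNAL_IP is a placeholder for the load balancer's IP, and /rest/v1/login and /rest/v1/profile stand in for my actual endpoints):
# Authenticate and save the session cookie issued by whichever pod answered
curl -c cookies.txt -X POST http://EXTERNAL_IP/rest/v1/login -d 'user=...&pass=...'
# Replay the cookie; without affinity the request may land on the other
# replica, which doesn't know this session and rejects it with a 403
curl -b cookies.txt http://EXTERNAL_IP/rest/v1/profile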
To work around this issue, I read that the NGINX ingress controller can deal with this situation, and I installed it using Helm, following the steps available here: https://kubernetes.github.io/ingress-nginx/deploy/
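For reference, the installation boils down to commands along these lines (a sketch; the exact chart and release names depend on the version of the linked guide):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx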
You can find below the diagram for the architecture I'm trying to build:
With the following services (I will just paste one of them; the other one is similar):
apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
  - protocol: TCP
    port: 8082
    targetPort: 8082
  type: LoadBalancer
And I declared the following ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: sample-cookie
spec:
  rules:
  - http:
      paths:
      - path: /rest/v1/*
        backend:
          serviceName: sample-backend
          servicePort: 8082
      - path: /*
        backend:
          serviceName: sample-frontend
          servicePort: 80
After that, I ran kubectl apply -f sample-nginx-ingress.yaml to apply the ingress. It is created and its status is OK. However, when I access the URL shown in the "Endpoints" column, the browser can't connect to it.
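For what it's worth, the address and backends assigned to the ingress can be inspected like this:
kubectl get ingress sample-nginx-ingress
kubectl describe ingress sample-nginx-ingress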
Am I doing anything wrong?
** Updated service and ingress configurations **
After some help, I've managed to access the services through the NGINX ingress. Below are the configurations:
The paths shouldn't contain the "*", unlike the default Kubernetes ingress, where the "*" is mandatory to route the paths I want.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "sample-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - http:
      paths:
      - path: /rest/v1/
        backend:
          serviceName: sample-backend
          servicePort: 8082
      - path: /
        backend:
          serviceName: sample-frontend
          servicePort: 80
Also, the services shouldn't be of type "LoadBalancer" but "ClusterIP" as below:
apiVersion: v1
kind: Service
metadata:
  name: sample-backend
spec:
  selector:
    app: sample
    tier: backend
  ports:
  - protocol: TCP
    port: 8082
    targetPort: 8082
  type: ClusterIP
However, I still can't achieve sticky sessions in my Kubernetes cluster: I'm still getting 403 responses, and even the cookie name is not replaced, so I guess the annotations are not working as expected.
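One way to check whether the affinity annotations take effect is to look at the response headers; if affinity works, the first response should include a Set-Cookie: sample-cookie=... header (INGRESS_IP is a placeholder for the ingress address):
# -I sends a HEAD request and prints only the response headers
curl -I http://INGRESS_IP/rest/v1/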
What is a sticky session? Session stickiness, a.k.a. session persistence, is a process in which a load balancer creates an affinity between a client and a specific network server for the duration of a session (i.e., the time a specific IP spends on a website).
Session affinity, also known as "sticky sessions", is the function of the load balancer that directs subsequent requests from each unique session to the same server in the load balancer pool.
Read from the cache, falling back to the DB for consistency, while writing all your cached data into the DB at regular intervals. Both the cache and DB entries should have an expiration time as well, and you can reset the timer every time an action happens. This is an approach if you want to avoid sticky sessions altogether.
I looked into this matter and I have found a solution to your issue.
To achieve sticky sessions for both paths, you will need two Ingress definitions.
I created an example configuration to show you the whole process:
Steps to reproduce:
I assume that the cluster is provisioned and working correctly.
Follow this Ingress link to check whether there are any prerequisites needed before installing the Ingress controller on your infrastructure.
Apply the command below to provide all the mandatory prerequisites:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Run the command below to apply the generic configuration that creates the ingress controller's service:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
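Before moving on, it is worth confirming that the controller pod is running and that its service received an external IP (the manifests above create the ingress-nginx namespace):
$ kubectl get pods -n ingress-nginx
$ kubectl get services -n ingress-nginx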
Below are 2 example deployments that will respond to the Ingress traffic through specific services:
hello.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      version: 1.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: hello
        version: 1.0.0
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:1.0"
        env:
        - name: "PORT"
          value: "50001"
Apply this first deployment configuration by invoking the command:
$ kubectl apply -f hello.yaml
goodbye.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goodbye
spec:
  selector:
    matchLabels:
      app: goodbye
      version: 2.0.0
  replicas: 5
  template:
    metadata:
      labels:
        app: goodbye
        version: 2.0.0
    spec:
      containers:
      - name: goodbye
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
Apply this second deployment configuration by invoking the command:
$ kubectl apply -f goodbye.yaml
Check if the deployments created their pods correctly:
$ kubectl get deployments
It should show something like this:
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
goodbye   5/5     5            5           2m19s
hello     5/5     5            5           4m57s
To connect to the pods created earlier, you will need to create services. Each service will be assigned to one deployment. Below are 2 services to accomplish that:
hello-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: NodePort
  selector:
    app: hello
    version: 1.0.0
  ports:
  - name: hello-port
    protocol: TCP
    port: 50001
    targetPort: 50001
Apply the first service configuration by invoking the command:
$ kubectl apply -f hello-service.yaml
goodbye-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: goodbye-service
spec:
  type: NodePort
  selector:
    app: goodbye
    version: 2.0.0
  ports:
  - name: goodbye-port
    protocol: TCP
    port: 50001
    targetPort: 50001
Apply the second service configuration by invoking the command:
$ kubectl apply -f goodbye-service.yaml
Keep in mind that both configurations use type: NodePort.
Check if services were created successfully:
$ kubectl get services
Output should look like this:
NAME              TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE
goodbye-service   NodePort   10.0.5.131   <none>        50001:32210/TCP   3s
hello-service     NodePort   10.0.8.13    <none>        50001:32118/TCP   8s
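If you want to smoke-test the services before wiring up the Ingress, one option is a throwaway pod with curl (the image below is just one publicly available choice):
$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl http://hello-service:50001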
To achieve sticky sessions, you will need to create 2 Ingress definitions.
The definitions are provided below:
hello-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "hello-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
goodbye-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: goodbye-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "goodbye-cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/affinity-mode: persistent
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /v2/
        backend:
          serviceName: goodbye-service
          servicePort: goodbye-port
Please change DOMAIN.NAME in both Ingresses to a domain appropriate to your case.
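If you do not have a real domain for testing, one workaround is to map a test name to the ingress IP on your workstation (INGRESS_IP is a placeholder for the address shown by kubectl get ingress):
$ echo "INGRESS_IP DOMAIN.NAME" | sudo tee -a /etc/hosts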
I would advise taking a look at this Ingress sticky session link.
Both Ingresses are configured for HTTP-only traffic.
Apply both of them by invoking the commands:
$ kubectl apply -f hello-ingress.yaml
$ kubectl apply -f goodbye-ingress.yaml
Check if both configurations were applied:
$ kubectl get ingress
Output should be something like this:
NAME              HOSTS         ADDRESS      PORTS   AGE
goodbye-ingress   DOMAIN.NAME   IP_ADDRESS   80      26m
hello-ingress     DOMAIN.NAME   IP_ADDRESS   80      26m
Open your browser and go to http://DOMAIN.NAME
Output should be like this:
Hello, world!
Version: 1.0.0
Hostname: hello-549db57dfd-4h8fb
Hostname: hello-549db57dfd-4h8fb is the name of the pod that served the request. Refresh the page a couple of times; it should stay the same.
To check if the other route is working, go to http://DOMAIN.NAME/v2/
Output should be like this:
Hello, world!
Version: 2.0.0
Hostname: goodbye-7b5798f754-pbkbg
Hostname: goodbye-7b5798f754-pbkbg is the name of the pod that served the request. Refresh the page a couple of times; it should stay the same.
To ensure that the cookies are not changing, open the developer tools (usually F12) and navigate to the cookie storage. You can reload the page to check that they stay the same.
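The same check can be done from the command line; as long as the cookie is replayed, the Hostname line in the response should not change:
$ curl -c cookies.txt http://DOMAIN.NAME/   # first request, stores hello-cookie
$ curl -b cookies.txt http://DOMAIN.NAME/   # replays the cookie; same Hostname expected
$ curl -b cookies.txt http://DOMAIN.NAME/   # repeat to confirm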