 

Running socket.io in Google Container Engine with multiple pods fails

I'm trying to run a socket.io app on Google Container Engine. I've set up an Ingress, which creates a Google load balancer pointing at the cluster. With a single pod in the cluster everything works well. As soon as I add more pods, I get tons of socket.io errors. It looks like connections end up going to different pods in the cluster, and I suspect that breaks the polling and transport upgrading socket.io does.

I set up the load balancer to use sticky sessions based on client IP.

Does this only mean that it will have affinity to a particular NODE in the kubernetes cluster and not a POD?

How can I set it up to ensure session affinity to a particular POD in the cluster?

NOTE: I manually set the sessionAffinity on the cloud load balancer.

Here is my Ingress YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80

Service

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
spec:
  sessionAffinity: ClientIP
  type: NodePort
  ports:
    - port: 80
      targetPort: http-port
  selector:
    app: myApp
asked Jun 29 '17 by crickeys

1 Answer

First off, you need to set session affinity at the Ingress resource level, not on your cloud load balancer. Affinity on the load balancer only pins a client to a particular node in the instance group, and because the HTTP load balancer is a proxy, the Service's ClientIP affinity ends up keying on the proxy's addresses rather than the real client IP. Cookie-based affinity at the ingress controller is what gives you pod-level stickiness.

Here is an example Ingress spec:

apiVersion: extensions/v1beta1  
kind: Ingress  
metadata:  
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - path: /
        backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
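
With those annotations the controller sets a cookie named route on the first response and uses its sha1 hash to pin the client to one upstream pod, so subsequent requests (including socket.io's polling transport) keep landing on the same pod regardless of the source IP.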

Second, you will probably need to tune your ingress controller to allow longer connection timeouts. Everything else needed for WebSocket proxying works out of the box with the nginx controller.
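
A minimal sketch of that tuning, assuming the nginx-ingress controller (the annotation names are real nginx-ingress annotations; the one-hour value is an illustrative assumption, not taken from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # keep long-lived websocket connections open; 3600s is an assumed value
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"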

If you are still having issues, please provide the output of kubectl get -o yaml pod/<ingress-controller-pod> and kubectl get -o yaml ing/<your-ingress-name>

Hope this helps, good luck!

answered Sep 30 '22 by yomateo