I have an application that runs on a microservice-based architecture (on Kubernetes). All communication to and from the outside happens through an API Gateway, which simply means that requests from my frontend don't go directly to the services but have to pass through the Gateway.
Now I need to implement a feature that requires realtime communication between the frontend and an internal service. But since the internal service is not exposed to the outside, I need a way to "route" the realtime data through the Gateway.
All my services run on Node.js, which is why I want to use Socket.IO to implement the realtime communication.
But how do I implement the purple double arrow from the sketch?
Usually the frontend client would connect directly to the server where Socket.IO is running. But in my case this server (the realtime feature server) is not accessible from the client (and never should be), which means the client has to connect to the Gateway. The Gateway therefore needs some mechanism to route all incoming messages to the realtime service and vice versa.
(1) Have a second HTTP server listening for events on the Gateway and emit those events to the realtime server. In the other direction, the realtime server emits events to the Gateway, which then emits them to the frontend (a sketch of this relay follows the list). I think this approach will definitely work, but it seems redundant to emit everything twice, and it would probably hurt performance.
(2) Use a Socket.IO Adapter to "pass events between nodes", which seems like the right way to go because it is meant to "pass messages between processes or computers". But I have trouble getting started because of the lack of documentation and examples. I am also not using Redis (is it needed to use the adapter?).
(3) Use the socket.io-emitter package, which doesn't seem like a good option since its last commit was three years ago.
(4) Something else?
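To make option (1) concrete, here is a minimal sketch of such a relay running on the Gateway. It assumes Socket.IO v3+ (where onAny is available), and realtime-service:8000 is a hypothetical in-cluster address for the internal realtime server:

const { Server } = require('socket.io');              // Socket.IO server (v3/v4)
const { io: ioClient } = require('socket.io-client'); // Socket.IO client (v3/v4)

// Browser-facing Socket.IO server on the Gateway.
const frontendFacing = new Server(3000, { path: '/socket.io' });

frontendFacing.on('connection', (clientSocket) => {
  // One upstream connection to the internal realtime service per browser client.
  // "realtime-service:8000" is a placeholder for the in-cluster service address.
  const upstream = ioClient('http://realtime-service:8000');

  // Relay every event from the browser to the internal service...
  clientSocket.onAny((event, ...args) => upstream.emit(event, ...args));
  // ...and every event from the internal service back to the browser.
  upstream.onAny((event, ...args) => clientSocket.emit(event, ...args));

  clientSocket.on('disconnect', () => upstream.close());
});

Note that this relays plain events only (acknowledgements are not forwarded) and opens one upstream connection per browser client. Regarding option (2): the commonly used cross-node adapter, socket.io-redis, does rely on a Redis instance to pass messages between Socket.IO server nodes; it is aimed at scaling several identical server instances rather than at bridging a gateway to a single internal service.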
Alright, basically I designed the application like this
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: centsideas-ingress
  annotations:
    kubernetes.io/tls-acme: 'true'
    kubernetes.io/ingress.class: 'nginx'
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - centsideas.com
        - api.centsideas.com
      secretName: centsideas-tls
  rules:
    - host: api.centsideas.com
      http:
        paths:
          - path: /socket.io
            backend:
              serviceName: socket-service
              servicePort: 8000
          - path: /
            backend:
              serviceName: centsideas-gateway
              servicePort: 3000
    - host: centsideas.com
      http:
        paths:
          - backend:
              serviceName: centsideas-client
              servicePort: 8080
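With this Ingress, the frontend opens its Socket.IO connection against api.centsideas.com, and the /socket.io path is forwarded to socket-service. A minimal client-side sketch (the host and path are taken from the rules above, everything else is plain socket.io-client usage):

const { io } = require('socket.io-client');

// The Ingress forwards /socket.io on api.centsideas.com to socket-service:8000.
const socket = io('https://api.centsideas.com', { path: '/socket.io' });
socket.on('connect', () => console.log('realtime connection established'));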
Service
apiVersion: v1
kind: Service
metadata:
  name: socket-service
  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
  namespace: namespace
spec:
  sessionAffinity: ClientIP
  ports:
    - name: ws-port
      protocol: TCP
      port: 8000
  type: ClusterIP
  selector:
    service: ws-api
Then you create your Deployment to deploy the ws-service. This way you can also enable Kubernetes HPA (Horizontal Pod Autoscaling) to scale the Socket.IO service up.
You must adjust the annotations and other options based on your Kubernetes version (I think the annotation service.beta.kubernetes.io/external-traffic: "OnlyLocal" has been deprecated).