
Kubernetes service with clustered PODs in active/standby

Apologies for not keeping this short; any attempt to do so would make me miss out on some important details of my problem.

I have a legacy Java application which works in an active/standby mode in a clustered environment to expose certain RESTful web services via a predefined port.

If there are two nodes in my app cluster, at any point in time only one is in Active mode and the other in Passive mode, and requests are always served by the node whose app instance is in Active mode. 'Active' and 'Passive' are just roles; the app itself runs on both nodes. The Active and Passive instances communicate with each other through this same predetermined port.

Suppose I have a two-node cluster with one instance of my application running on each node; one instance is initially Active and the other Passive. If the active node goes down for some reason, the app instance on the other node detects this using a heartbeat mechanism, takes over and becomes the new Active. When the old Active comes back up, it detects that the other instance now owns the Active role, so it goes into Passive mode.
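The failover behavior described above can be sketched roughly like this (a minimal, hypothetical simulation for illustration; the app's real heartbeat protocol is not shown in the question):

```python
import time

class Node:
    """One cluster node running the app in either the Active or Passive role."""
    def __init__(self, name):
        self.name = name
        self.role = "passive"
        self.last_heartbeat = time.monotonic()

def failover(nodes, heartbeat_timeout=3.0, now=None):
    """Promote a standby if the Active node's heartbeat has gone stale.

    Returns whichever node holds the Active role after the check."""
    now = time.monotonic() if now is None else now
    active = next((n for n in nodes if n.role == "active"), None)
    if active and now - active.last_heartbeat <= heartbeat_timeout:
        return active  # heartbeat is fresh: no failover needed
    # Heartbeat stale (or no Active at all): demote the old Active, promote a standby.
    standby = next(n for n in nodes if n is not active and n.role == "passive")
    if active:
        active.role = "passive"  # old Active rejoins the cluster as Passive
    standby.role = "active"
    return standby

nodes = [Node("node1"), Node("node2")]
nodes[0].role = "active"

# node1 stops heartbeating; node2 detects this and takes over the Active role
new_active = failover(nodes, now=nodes[0].last_heartbeat + 10)
print(new_active.name)  # node2
```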

The application manages to provide the RESTful web services on the same endpoint IP irrespective of which node is in Active mode by using a cluster IP that piggy-backs on the active instance: the cluster IP switches over to whichever node is running the app in Active mode.

I am trying to containerize this app and run it in a Kubernetes cluster for scale and ease of deployment. I have containerized it and am able to deploy it as a POD in a Kubernetes cluster.

To bring in the Active/Passive roles here, I am running two instances of this POD, each pinned to a separate K8S node using node affinity (each node is labeled either active or passive, and the POD definitions pin to these labels), and clustering them using my app's own clustering mechanism, so that only one is active and the other is passive.

I am exposing the REST service externally using K8S Service semantics, via a NodePort on the master node.

Here's my yaml file content:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    app: myapp-service
spec:
  type: NodePort
  ports:
    - port: 8443
      nodePort: 30403
  selector:
    app: myapp

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - active
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: active-pv-claim
      containers:
        - name: active
          image: myapp:latest
          imagePullPolicy: Never
          securityContext:
            privileged: true
          ports:
            - containerPort: 8443
          volumeMounts:
            - mountPath: "/myapptmp"
              name: task-pv-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: passive
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nodetype
                operator: In
                values:
                - passive
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: active-pv-claim
      containers:
        - name: passive
          image: myapp:latest
          imagePullPolicy: Never
          securityContext:
            privileged: true
          ports:
            - containerPort: 8443
          volumeMounts:
            - mountPath: "/myapptmp"
              name: task-pv-storage

Everything seems to be working fine, except that since both PODs expose the web service on the same port, the K8S Service routes incoming requests to one of these PODs at random. Since my REST endpoints are only served on the Active node, requests through the K8S Service succeed only when they happen to be routed to the POD in the Active role. Whenever the Service routes a request to the POD in the Passive role, the service is inaccessible.
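To make the symptom concrete (a rough simulation for illustration, not how kube-proxy is actually implemented): with two ready endpoints and random backend selection, roughly half of the requests land on the passive POD and fail.

```python
import random

def serve(pod):
    """Only the POD in the Active role answers; the Passive one refuses."""
    return "ok" if pod == "active" else "connection refused"

random.seed(0)
endpoints = ["active", "passive"]  # both PODs are 'ready', so both receive traffic

results = [serve(random.choice(endpoints)) for _ in range(1000)]
failures = results.count("connection refused")
print(f"{failures} of 1000 requests failed")  # roughly half
```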

How do I make the K8S Service always route requests to the POD whose app is in the Active role? Is this doable in Kubernetes, or am I aiming for too much?

Thank you for your time!

asked Nov 14 '17 by msbl3004

1 Answer

You can use a readiness probe in conjunction with a leader-election sidecar container. The election sidecar always elects exactly one master from the election pool, and if you make sure that only that pod is marked as ready, only that pod will receive traffic.
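A minimal sketch of this pattern (the readiness-check command, port, and image tag here are assumptions, not from the answer): each POD runs a leader-elector sidecar, and the main container's readiness probe only passes on the POD that currently holds the lock, so the Service's endpoint list contains only the Active POD.

```yaml
# Hypothetical sketch: leader-elector sidecar + readiness probe gating traffic.
spec:
  containers:
  - name: myapp
    image: myapp:latest
    ports:
    - containerPort: 8443
    readinessProbe:
      exec:
        # Assumed check: ask the elector sidecar on localhost who the leader is,
        # and succeed only when it is this pod.
        command: ["/bin/sh", "-c", "wget -qO- http://localhost:4040 | grep -q $(hostname)"]
      periodSeconds: 5
  - name: elector
    # k8s.gcr.io/leader-elector is the well-known example image for this pattern
    image: k8s.gcr.io/leader-elector:0.5
    args: ["--election=myapp-active", "--http=0.0.0.0:4040"]
    ports:
    - containerPort: 4040
```

With this in place, a failover in the app's own clustering mechanism is mirrored by the election lock moving to the other POD, its probe starting to pass, and the Service endpoints switching over, so no request ever reaches the Passive instance.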

answered Oct 11 '22 by Radek 'Goblin' Pieczonka