
Are multiple gateways possible, and how can I create a frontend-only gateway in JHipster?

I am developing 3 microservices:

  1. an admin-facing web app gateway for user management (admin.com) using MySQL
  2. a public-facing web app gateway containing only a VueJS frontend (public.com)
  3. a REST API microservice containing the core application, using Redis and Cassandra

I can easily generate (1) and (3), but how do I generate (2)?

I tried to use the below command to generate (2):

jhipster --skip-server --blueprints vuejs

but the JHipster docs say the skip-server option does not make sense for a microservice, and JHipster will not configure the above as a gateway.

https://www.jhipster.tech/separating-front-end-and-api/

How do I solve the above problem, and are multiple gateways possible in the same microservices-based app?

The app will be deployed using Kubernetes.

Side question:

When multiple instances of (2) or (3) are created to handle millions of requests per second, will the distributed Redis and Cassandra clusters be shared by all instances of (3)? As far as I know, each instance of a microservice has its own database instance, such as MySQL. I am new to microservices and confused about this aspect.

asked Oct 15 '22 by ace

1 Answer

I can easily generate (1) and (3), but how do I generate (2)?

Here, I would suggest, if possible, segregating the UI infrastructure completely from the services infrastructure; this makes it easy to evolve the UI infrastructure and setup independently of the backend. Hence, you can create a webpack bundle (a static deployable) out of the VueJS app. This deployable can be deployed or hosted in a number of different ways.

For local development, it can be a Node server running the VueJS app, consuming the microservices deployed on K8s.
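For instance, a minimal dev-server proxy could look like the sketch below. This assumes the public app (2) is a plain Vue project built with Vite (JHipster's Vue blueprint ships its own webpack/dev-server setup, so adapt accordingly); the port and target URL are placeholders for wherever your K8s services are exposed locally, e.g. via kubectl port-forward.

```typescript
// vite.config.ts -- hypothetical local-dev setup for the public VueJS app (2)
import { defineConfig } from 'vite';
import vue from '@vitejs/plugin-vue';

export default defineConfig({
  plugins: [vue()],
  server: {
    port: 8080,
    proxy: {
      // Forward API calls to the REST microservice (3) exposed from the K8s cluster,
      // e.g. via `kubectl port-forward`. The hostname/port are placeholders.
      '/api': {
        target: 'http://localhost:8081',
        changeOrigin: true,
      },
    },
  },
});
```

With this in place, the app served at localhost:8080 calls /api/... and the dev server forwards those requests to the cluster.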

For prod or test environments, you can leverage cloud offerings; just as an example:

AWS Route 53 -> AWS CloudFront (CDN) -> AWS S3 (the webpack/JS bundle gets deployed to it; this can scale to billions of requests, since (2) is just static JS code and the actual data is fetched by it via XHR calls to the microservice backend).
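As a rough illustration of that chain, here is a hedged AWS CDK (TypeScript) sketch; the bucket, domain, and hosted zone are placeholders, and a real setup would also add an ACM certificate and origin access control.

```typescript
// static-site-stack.ts -- sketch of Route 53 -> CloudFront -> S3 (all names are placeholders)
import { Stack, StackProps, RemovalPolicy } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

export class PublicWebAppStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Bucket holding the built VueJS assets (the webpack/Vite output)
    const siteBucket = new s3.Bucket(this, 'PublicWebAppBucket', {
      removalPolicy: RemovalPolicy.DESTROY,
    });

    // CloudFront distribution serving the static assets from S3
    const distribution = new cloudfront.Distribution(this, 'PublicWebAppCdn', {
      defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
      defaultRootObject: 'index.html',
    });

    // Route 53 alias record pointing public.com at the CDN
    // (fromLookup needs account/region set on the stack env in a real app)
    const zone = route53.HostedZone.fromLookup(this, 'Zone', { domainName: 'public.com' });
    new route53.ARecord(this, 'PublicWebAppAlias', {
      zone,
      target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
    });
  }
}
```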

The K8s autoscaler can take care of scaling each microservice based on load, spawning and removing pods.

How do I solve the above problem, and are multiple gateways possible in the same microservices-based app?

I would suggest using a third-party gateway solution if you are trying to build a scalable architecture.

Say a Kong/Mule gateway with multiple routes configured on it, which can then route to the respective endpoints. This way the same gateway solution can serve multiple needs.
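For example, here is a hedged sketch of wiring two host-based routes onto a single Kong gateway through its Admin API (assumed to be reachable on localhost:8001); the service names, upstream URLs, and hostnames are placeholders.

```typescript
// register-kong-routes.ts -- sketch: one Kong gateway, multiple host-based routes (URLs are placeholders)

const KONG_ADMIN = 'http://localhost:8001'; // Kong Admin API, assumed reachable locally

async function registerService(name: string, upstreamUrl: string, hosts: string[]): Promise<void> {
  // Create (or replace) the upstream service definition
  await fetch(`${KONG_ADMIN}/services/${name}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: upstreamUrl }),
  });

  // Attach a route so requests for these hostnames are proxied to that service
  await fetch(`${KONG_ADMIN}/services/${name}/routes`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: `${name}-route`, hosts }),
  });
}

async function main(): Promise<void> {
  // admin.com -> admin gateway app (1), api.public.com -> core REST microservice (3)
  await registerService('admin-app', 'http://admin-app.default.svc.cluster.local:8080', ['admin.com']);
  await registerService('core-api', 'http://core-api.default.svc.cluster.local:8080', ['api.public.com']);
}

main().catch(console.error);
```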

Cloud-based solutions like AWS API Gateway and Azure API Management can be helpful as well.

Side question: As far as I know, each instance of a microservice has its own database instance, such as MySQL. I am new to microservices and confused about this aspect.

There can be multiple instances of each microservice, say multiple Kubernetes pods per service in your case, and they should all point to the same DB endpoint.

The DB endpoint can then resolve to a single instance or to multiple instances using a clustered topology. Again, the cluster setup depends on the high-availability requirements; it could be as simple as ACTIVE-REPLICA, where ACTIVE is the primary and it can fail over to the REPLICA.
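To make the side question concrete, here is a hedged TypeScript sketch of what the data access of (3) could look like: every pod reads the same cluster endpoints from its environment (the env var names and hostnames are placeholders), so scaling the service out does not multiply databases. It assumes the cassandra-driver and ioredis npm packages.

```typescript
// data-clients.ts -- sketch: every replica of service (3) shares the same Cassandra/Redis clusters
import { Client as CassandraClient } from 'cassandra-driver';
import { Cluster } from 'ioredis';

// The same endpoints are injected into every pod (e.g. via a ConfigMap); names are placeholders.
const cassandraContactPoints = (process.env.CASSANDRA_CONTACT_POINTS ?? 'cassandra-0.cassandra,cassandra-1.cassandra').split(',');
const redisNodes = (process.env.REDIS_CLUSTER_NODES ?? 'redis-0.redis:6379,redis-1.redis:6379').split(',');

// One Cassandra cluster, shared by all instances of the microservice
export const cassandra = new CassandraClient({
  contactPoints: cassandraContactPoints,
  localDataCenter: process.env.CASSANDRA_DC ?? 'datacenter1',
  keyspace: process.env.CASSANDRA_KEYSPACE ?? 'core',
});

// One Redis cluster, likewise shared; the cluster client handles sharding and failover
export const redis = new Cluster(
  redisNodes.map((node) => {
    const [host, port] = node.split(':');
    return { host, port: Number(port) };
  })
);
```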

For point (1), just a suggestion: please check OIDC implementations such as Okta / Keycloak, which can be deployed as a cluster and come with a UI.

Or look at the open-source reference implementation of OIDC, MITREid Connect, which gives you a customizable UI for admin tasks and can be used to implement RBAC across the UI/backend services.

The way I see the architecture implementation from your description, it can be:

[architecture diagram]

1. The DNS router routes to the endpoint/URL based on the hostname.

2. If someone accesses the UI app (public.com), the static website (VueJS-based web app) gets served from the CDN. The actual code can be on a hosted server or AWS S3, which is cheap, highly scalable, and widely used to serve websites nowadays.

3. If the app requires authentication, it can check for a session token, say a JWT. If it is not present, obtain it from the user management service, as shown in the diagram; this can also be an OIDC implementation. The user needs to provide their credentials, of course.

4. If a user performs an action which needs backend data, or submits a form, the VueJS app sends an XHR request to the required microservice, along with the session token.

5. The calls to the microservice are routed via the DNS service to your API gateway, or it can be a direct call to the API gateway endpoint.

6. The API gateway should have logic to resolve the JWT, check its validity and authenticity, and extract the required SCOPES, which can be passed as custom headers to the microservice (see the sketch after this list). The API gateway can consult the user management service to fetch user data from the user data store. These custom headers can be used for RBAC within the microservice if required. The idea here is to offload auth to the API gateway, so the microservices can evolve easily without worrying much about cross-cutting concerns, and only authenticated calls get inside your private network.

7. The API gateway maps to the different load balancers, which in turn can point to the K8s Ingress service. This passes the call on to the required service, which ultimately reaches your code in one of the pods running as an instance of your microservice.

8. The microservice can then read/write to the DB. If load increases at peak time, the autoscaler can scale up the microservice pods.

9. Likewise, if an admin wants to access the admin app, they are presented with the admin UI, which connects to the user management service and the respective user data store.
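As referenced in step 6, here is a hedged sketch of that gateway logic, written as an Express/TypeScript middleware using jsonwebtoken and http-proxy-middleware (my own choice of libraries; the header names, shared secret, and proxy target are placeholders, and a real gateway such as Kong would do this via plugins instead).

```typescript
// gateway-auth.ts -- sketch of step 6: verify the JWT at the edge, pass scopes downstream as headers
import express, { Request, Response, NextFunction } from 'express';
import jwt, { JwtPayload } from 'jsonwebtoken';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();
const JWT_SECRET = process.env.JWT_SECRET ?? 'change-me'; // placeholder; use JWKS/public keys in practice

// Reject unauthenticated calls before they reach the private network
function authenticate(req: Request, res: Response, next: NextFunction): void {
  const token = (req.headers.authorization ?? '').replace(/^Bearer /, '');
  try {
    const claims = jwt.verify(token, JWT_SECRET) as JwtPayload;
    // Pass identity and scopes to the microservice as custom headers (names are placeholders)
    req.headers['x-user-id'] = String(claims.sub ?? '');
    req.headers['x-user-scopes'] = String(claims.scope ?? '');
    next();
  } catch {
    res.status(401).json({ error: 'invalid or missing token' });
  }
}

// Authenticated traffic is proxied on to the core microservice behind the gateway
app.use('/api', authenticate, createProxyMiddleware({ target: 'http://core-api:8080', changeOrigin: true }));

app.listen(8000, () => console.log('gateway listening on 8000'));
```

The downstream microservice then only needs to trust the x-user-* headers for RBAC, instead of re-validating tokens itself.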

The gateway and load balancers can be orchestrated in different ways:

  • A typical example can be as below:

Say the services deployed on the cloud follow the pattern below, where a custom domain is mapped to the API gateway; user requests come in there and are then routed to the respective backend integrations, which will typically be the load balancer of the cluster / target group of your services.

[cloud API gateway pattern diagram: custom domain -> API gateway -> backend load balancer]
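As one possible rendering of that pattern on AWS, here is a hedged CDK (TypeScript) sketch mapping a custom domain to an HTTP API that proxies to the cluster's load balancer; it assumes the graduated aws-apigatewayv2 modules in aws-cdk-lib, and every name, ARN, and URL is a placeholder.

```typescript
// api-gateway-stack.ts -- sketch: custom domain -> API gateway -> cluster load balancer (placeholders throughout)
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as apigwv2 from 'aws-cdk-lib/aws-apigatewayv2';
import { HttpUrlIntegration } from 'aws-cdk-lib/aws-apigatewayv2-integrations';
import * as acm from 'aws-cdk-lib/aws-certificatemanager';

export class ApiGatewayStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Custom domain for the gateway; the certificate ARN is a placeholder
    const domainName = new apigwv2.DomainName(this, 'ApiDomain', {
      domainName: 'api.public.com',
      certificate: acm.Certificate.fromCertificateArn(
        this,
        'Cert',
        'arn:aws:acm:region:account:certificate/placeholder'
      ),
    });

    // HTTP API fronting the cluster's load balancer
    const httpApi = new apigwv2.HttpApi(this, 'CoreApi', {
      defaultDomainMapping: { domainName },
    });

    httpApi.addRoutes({
      path: '/api/{proxy+}',
      methods: [apigwv2.HttpMethod.ANY],
      // Proxy to the load balancer / ingress sitting in front of the K8s services
      integration: new HttpUrlIntegration('ClusterLbIntegration', 'http://my-cluster-lb.example.internal/{proxy}'),
    });
  }
}
```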

Another pattern, which is cloud agnostic, can be to deploy an API gateway solution such as Kong as a separate, public container, while the microservices reside in other, private containers.

[diagram: Kong gateway as a public container in front of private microservice containers]

Here, the idea is to put any business-critical logic in a private network and make it accessible only to allowed services. Those services, in turn, can be publicly accessible. Hence we reduce the surface area for security vulnerabilities and thus the attack vector.

answered Oct 21 '22 by Mahesh_Loya