I am trying to create a microservice architecture using Lumen / Laravel Passport.
I have multiple dockerized services, each running as a separate Lumen app container on different VMs. Each of these services has its own separate Redis/MySQL databases, etc.
In a monolithic application, for example, there was a User table in the database, with relations between the tables, and I used JOINs and other queries to retrieve data for the current user ID.
But now I have, for example, a general page in the mobile/web app, and I have to get information from multiple services to render that single page. To receive this data I am sending multiple requests to the different services.
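Simplified, it currently looks something like this (the service URLs and response fields are made up for illustration, and I'm using Guzzle for the HTTP calls):

```php
<?php
// A controller action in the "gateway" Lumen app.
// Service hosts and endpoints are made up for illustration.

use GuzzleHttp\Client;
use GuzzleHttp\Promise\Utils;

$client = new Client(['timeout' => 3.0]);

// One page needs data from three different services, so I fan out
// three HTTP requests and wait for all of them.
$promises = [
    'profile' => $client->getAsync('http://users-service/api/users/' . $userId),
    'orders'  => $client->getAsync('http://orders-service/api/orders?user_id=' . $userId),
    'billing' => $client->getAsync('http://billing-service/api/invoices?user_id=' . $userId),
];

$responses = Utils::unwrap($promises); // throws if any request fails

$page = [
    'profile' => json_decode($responses['profile']->getBody(), true),
    'orders'  => json_decode($responses['orders']->getBody(), true),
    'billing' => json_decode($responses['billing']->getBody(), true),
];
```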
Question:
What is the best/correct practice for storing user information in a microservices architecture, and what is the correct way to retrieve the data related to this user from the other microservices with minimal memory/time overhead? And where should user information like ID, phone numbers, etc. be stored to avoid data duplication?
Sorry for a possible duplicate; I'm still trying to understand this.
The two commonly used protocols are HTTP request/response with resource APIs (mostly for querying), and lightweight asynchronous messaging for communicating updates across multiple microservices.
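For instance, a Lumen service can expose a resource API for synchronous queries while pushing update notifications onto a queue for other services to consume asynchronously. A sketch (the model and job class are hypothetical, and facades/Eloquent are assumed to be enabled in Lumen):

```php
<?php
// routes/web.php in a Lumen service.

// 1. HTTP request/response with a resource API, used for querying.
$router->get('/users/{id}', function ($id) {
    return response()->json(\App\Models\User::findOrFail($id));
});

// 2. Lightweight asynchronous messaging, used for communicating updates:
//    after changing data, push a message instead of calling other services directly.
$router->put('/users/{id}', function ($id) {
    $user = \App\Models\User::findOrFail($id);
    $user->update(request()->all());

    // UserUpdated is a hypothetical queued job; other services consume it later.
    dispatch(new \App\Jobs\UserUpdated($user->id));

    return response()->json($user);
});
```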
Containers are the easiest and most effective way to manage a microservice-based application, and they let you develop and deploy each service individually. Docker allows you to encapsulate a microservice in a container image along with its dependencies, so the service can run anywhere without additional setup.
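A minimal image for one such Lumen service might look like this (a sketch; the base image, paths, and port are assumptions):

```dockerfile
# Sketch of a container image for a single Lumen microservice.
FROM php:8.2-cli

# PHP extensions the service depends on (MySQL + Redis in this setup).
RUN docker-php-ext-install pdo_mysql \
    && pecl install redis \
    && docker-php-ext-enable redis

WORKDIR /app
COPY . /app

# Install the app's own dependencies inside the image.
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
RUN composer install --no-dev --optimize-autoloader

EXPOSE 8000
CMD ["php", "-S", "0.0.0.0:8000", "-t", "public"]
```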
Let's say you have four services: MS1, MS2, MS3, MS4. The web app / mobile app hits MS1 for information. Now MS1 needs to return a response containing data that is managed by MS2, MS3 and MS4.
Poor Solution - MS1 calls MS2, MS3 and MS4 synchronously to retrieve their information, aggregates it, and returns the final aggregated data. This puts the latency of the slowest downstream service on every request and fails whenever any one of them is down.
Better Solution - replicate the data into MS1 using events:

1. Use log-based change data capture (CDC) to generate events from the databases of MS2, MS3 and MS4 as and when the DBs are updated by their respective services
2. Post the events to one or more topics of a streaming platform (e.g. Kafka)
3. Using stream processing, process the events and build the aggregated data for each user in the cache and DB of MS1 (see the consumer sketch after this list)
4. Serve requests to MS1 from MS1's own cache and/or DB
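The aggregating consumer inside MS1 could look roughly like this. This is only a sketch: it assumes the php-rdkafka extension, Debezium-style CDC events already flowing into the topics, and the broker address, topic names, and payload shape are all made up.

```php
<?php
// Sketch of MS1's aggregator: consume CDC events and keep a per-user
// pre-aggregated record in Redis.

$conf = new RdKafka\Conf();
$conf->set('group.id', 'ms1-aggregator');
$conf->set('metadata.broker.list', 'kafka:9092');
$conf->set('auto.offset.reset', 'earliest');

$consumer = new RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['ms2.users', 'ms3.orders', 'ms4.billing']);

$redis = new Redis();
$redis->connect('redis', 6379);

while (true) {
    $message = $consumer->consume(10000); // block for up to 10s

    if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
        $event  = json_decode($message->payload, true);
        $userId = $event['user_id']; // assumed payload field

        // Merge this event into the user's pre-aggregated document.
        $key = "user:aggregate:{$userId}";
        $doc = json_decode($redis->get($key) ?: '{}', true);
        $doc[$message->topic_name] = $event;
        $redis->set($key, json_encode($doc));
    } elseif ($message->err !== RD_KAFKA_RESP_ERR__TIMED_OUT
           && $message->err !== RD_KAFKA_RESP_ERR__PARTITION_EOF) {
        throw new \Exception($message->errstr());
    }
}
```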
Note that with this approach, the cache or DB holds pre-aggregated data that is kept up to date by the event and stream processing. The updates may lag a little, resulting in occasionally serving stale data, but the delay shouldn't be more than a few seconds in normal circumstances.
If all the user data fits, you can keep the entire data set in cache. Otherwise, keep a subset of the data in cache with a TTL, evicting the least recently used entries to make space for new ones. The service then falls back to the DB whenever the data is not already available in cache.
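In Lumen/Laravel this read path can be expressed with Cache::remember, which serves the cached aggregate and falls back to the DB on a miss. A sketch (the TTL, key format, and table name are illustrative assumptions, and facades are assumed enabled):

```php
<?php
// Sketch of MS1's read path: serve from cache, fall back to the DB on a miss.

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

$router->get('/users/{id}/overview', function ($id) {
    $overview = Cache::remember("user:aggregate:{$id}", 600, function () use ($id) {
        // Cache miss: load the pre-aggregated row that the stream
        // processor keeps up to date in MS1's own database.
        return DB::table('user_aggregates')->where('user_id', $id)->first();
    });

    return response()->json($overview);
});
```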
Advantages:

- Requests to MS1 are served from its own cache/DB, with no synchronous calls to MS2, MS3 and MS4 on the hot path
- MS1 keeps serving (slightly stale) data even when MS2, MS3 or MS4 is down
- Read traffic on MS1 puts no extra load on the other services' databases