 

Microservices - Is event store technology (in event sourcing solutions) shared between all microservices?

As far as my limited experience allows me to understand, one of the core concepts of a microservice is that it relies on its own database, independent of the databases of other microservices.

Diving into how to handle distributed transactions in a microservices system, I found that the best strategy seems to be the Event Sourcing pattern, whose core is the Event Store.

Is the event store shared between different microservices? Or are there multiple independent event store databases, one for each microservice, plus a single common event broker?

If the first option is the solution, then using CQRS I can assume that every microservice's database is intended as the query side, while the shared event store is the command side. Is that a wrong assumption?

And since we are on the topic: how many retries do I have to do in case of a concurrent write to a stream using optimistic locking?

A very big thanks in advance for every piece of advice you can give me!

Asked by Christian Paesante on Sep 27 '18.

3 Answers

Is the event store shared between different microservices? Or are there multiple independent event store databases, one for each microservice, plus a single common event broker?

Every microservice should write to its own Event store, at least from its own point of view. This could mean separate instances or separate partitions inside the same instance. Either way, it allows the microservices to be scaled independently.

If the first option is the solution, then using CQRS I can assume that every microservice's database is intended as the query side, while the shared event store is the command side. Is that a wrong assumption?

Kinda. As I wrote above, each microservice should have its own Event store (or a partition inside a shared instance). A microservice should not append events to another microservice's Event store.

Regarding reading events, I think that reading should in general be permitted. Polling the Event store is the simplest (and, in my opinion, the best) solution to propagate changes to other microservices. It has the advantage that the remote microservice polls at whatever rate it can handle and picks only the events it wants. This can be scaled very nicely by creating as many Event store replicas as needed.
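A minimal sketch of such a polling consumer in TypeScript; the `EventStoreClient` interface, `readEventsAfter`, and the checkpoint callbacks are assumptions for illustration, not any specific product's API:

```typescript
// Hypothetical types for illustration; not tied to a specific event store product.
interface StoredEvent {
  position: number;        // global, monotonically increasing position in the store
  type: string;
  payload: unknown;
}

interface EventStoreClient {
  // Returns events strictly after `position`, up to `limit` events.
  readEventsAfter(position: number, limit: number): Promise<StoredEvent[]>;
}

// The remote microservice keeps its own checkpoint and polls at its own pace.
async function pollLoop(
  store: EventStoreClient,
  handle: (e: StoredEvent) => Promise<void>,
  loadCheckpoint: () => Promise<number>,
  saveCheckpoint: (position: number) => Promise<void>,
  intervalMs = 1000,
): Promise<void> {
  let checkpoint = await loadCheckpoint();
  for (;;) {
    const batch = await store.readEventsAfter(checkpoint, 100);
    for (const event of batch) {
      await handle(event);              // e.g. update a local projection
      checkpoint = event.position;
      await saveCheckpoint(checkpoint); // survive restarts without re-reading everything
    }
    if (batch.length === 0) {
      await new Promise((r) => setTimeout(r, intervalMs)); // idle back-off
    }
  }
}
```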

There are some cases in which you would not want to publish every domain event from the Event store. Some say that there could exist internal domain events on which the other microservices should not depend. In this case you could mark the events as available (or not) for external consumption.

The cleanest solution to propagate changes from a microservice is to have live queries to which other microservices can subscribe. It has the advantage that the projection logic does not leak into other microservices, but also the disadvantage that the emitting microservice must define and implement those queries; you can do this when you notice that other microservices duplicate the projection logic. An example of such a query is the total order price in an e-commerce application. You could have a query like WhatIsTheTotalPriceOfTheOrder that is published every time an item is added to, removed from, or updated in an Order.
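A sketch of what publishing such a live query could look like; `WhatIsTheTotalPriceOfTheOrder`, the `MessageBus` interface, and the topic name are hypothetical names for the example:

```typescript
// Hypothetical "live query" result published by the Order microservice whenever
// an item is added to / removed from / updated in an Order.
interface WhatIsTheTotalPriceOfTheOrder {
  orderId: string;
  totalPrice: number; // already computed by the owning service
  currency: string;
}

interface MessageBus {
  publish(topic: string, message: unknown): Promise<void>;
}

// The Order service keeps the projection logic; subscribers only see the answer.
async function publishOrderTotal(
  bus: MessageBus,
  orderId: string,
  items: Array<{ unitPrice: number; quantity: number }>,
): Promise<void> {
  const totalPrice = items.reduce((sum, i) => sum + i.unitPrice * i.quantity, 0);
  const query: WhatIsTheTotalPriceOfTheOrder = { orderId, totalPrice, currency: "EUR" };
  await bus.publish("order.total-price", query);
}
```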

And since we are on the topic: how many retries do I have to do in case of a concurrent write to a stream using optimistic locking?

As many as you need, i.e. until the write succeeds. You could have a limit of 99999, just to detect when something is horribly wrong with the retry mechanism. In any case, the concurrent write should be retried only when another write happens at the same time on the same stream (i.e. for one Aggregate instance), not for the entire Event store.
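A sketch of such a retry loop, assuming a generic read/append API with an expected-version check; none of these names belong to a real event store client:

```typescript
// Assumed shape of an append API with optimistic locking: the write fails if the
// stream's current version is not the one we expected when we loaded it.
class ConcurrencyError extends Error {}

interface EventStream {
  read(streamId: string): Promise<{ version: number; events: unknown[] }>;
  append(streamId: string, expectedVersion: number, events: unknown[]): Promise<void>;
}

// Retry only the write on the single stream (one Aggregate instance) that conflicted.
async function appendWithRetry(
  store: EventStream,
  streamId: string,
  decide: (history: unknown[]) => unknown[], // re-run the command against fresh state
  maxRetries = 99999,                        // absurdly high, just to surface a broken retry loop
): Promise<void> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const { version, events } = await store.read(streamId);
    const newEvents = decide(events);
    try {
      await store.append(streamId, version, newEvents);
      return; // success
    } catch (err) {
      if (!(err instanceof ConcurrencyError)) throw err;
      // Someone else wrote to the same stream first: reload and try again.
    }
  }
  throw new Error(`Retry mechanism exhausted for stream ${streamId}`);
}
```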

Answered by Constantin Galbenu.


As a rule: in service architectures, which include microservices, each service tracks its state in a private database.

"Private" here primarily means that no other service is permitted to write or read from it. This could mean that each service has a dedicated database server of its own, or services might share a single appliance but only have access permissions for their own piece.

Expressed another way: services communicate with each other by sharing information via their public APIs, not by writing messages into each other's databases.

For services using event sourcing, each service would have read and write access only to its own streams. If those streams happen to be stored on the same host, fine; but the correctness of the system should not depend on different services storing their events on the same appliance.

Answered by VoiceOfUnreason.


TL;DR: all of these patterns apply within a single bounded context (service, if you like); don't distribute domain events outside your bounded context; instead, publish integration events onto an ESB (enterprise service bus) or something similar as the public interface.

Ok so we have three patterns here to briefly cover individually and then together.

  1. Microservices
  2. CQRS
  3. Event Sourcing

Microservices

https://docs.microsoft.com/en-us/azure/architecture/microservices/ Core objective: Isolate and decouple changes in a system to individual services, enabling independent deployment and testing without collateral impact. This is achieved by encapsulating change behind a public API and limiting runtime dependencies between services.

CQRS

https://docs.microsoft.com/en-us/azure/architecture/patterns/cqrs Core objective: Isolate and decouple write concerns from read concerns in a single service. This can be achieved in a few ways, but the core idea is that the read model is a projection of the write model optimised for querying.
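As an illustration of that idea, a small hypothetical projector that folds write-side order events into a denormalised read model; all event and type names are made up for the example:

```typescript
// Hypothetical domain events from the write side.
type OrderEvent =
  | { type: "OrderPlaced"; orderId: string; customerId: string }
  | { type: "ItemAdded"; orderId: string; price: number };

// Denormalised read model, optimised for the queries the UI actually runs.
interface OrderSummary {
  orderId: string;
  customerId: string;
  totalPrice: number;
}

// The projector turns each write-side event into an update of the query-side view.
function project(summaries: Map<string, OrderSummary>, event: OrderEvent): void {
  switch (event.type) {
    case "OrderPlaced":
      summaries.set(event.orderId, {
        orderId: event.orderId,
        customerId: event.customerId,
        totalPrice: 0,
      });
      break;
    case "ItemAdded": {
      const summary = summaries.get(event.orderId);
      if (summary) summary.totalPrice += event.price;
      break;
    }
  }
}
```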

Event Sourcing

https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing Core objective: Use the business domain rules as your data model. This is achieved by modelling state as an append-only stream of immutable domain events and rebuilding the current aggregate state by replaying the stream from the start.
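A small hypothetical example of that replay: the current state of a bank-account aggregate is rebuilt by folding its event stream from the start (the event names are invented for illustration):

```typescript
// Hypothetical append-only stream of immutable domain events for one aggregate.
type AccountEvent =
  | { type: "AccountOpened"; owner: string }
  | { type: "MoneyDeposited"; amount: number }
  | { type: "MoneyWithdrawn"; amount: number };

interface AccountState {
  owner: string;
  balance: number;
}

// Apply one event to the current state, producing the next state.
function evolve(state: AccountState, event: AccountEvent): AccountState {
  switch (event.type) {
    case "AccountOpened":
      return { owner: event.owner, balance: 0 };
    case "MoneyDeposited":
      return { ...state, balance: state.balance + event.amount };
    case "MoneyWithdrawn":
      return { ...state, balance: state.balance - event.amount };
  }
}

// Current state is never stored directly: it is rebuilt by replaying the stream.
function replay(events: AccountEvent[]): AccountState {
  return events.reduce(evolve, { owner: "", balance: 0 });
}
```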

All Together

There is a lot of great content here: https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200(v=pandp.10). Each of these patterns has its own complexity, trade-offs and challenges, and while they are a fun exercise, you should consider whether the costs outweigh the benefits. All of them apply within a single service or bounded context. As soon as you start sharing a data store between services, you open yourself up to problems, because the shared data store can no longer be changed in isolation: it is now a public interface.

Rather, try publishing integration events to a shared bus as the public interface, for other services and bounded contexts to consume and use to build their own projections of another context's data.

It's a good idea to publish integration events as idempotent snapshots of the current aggregate state (upsert X, delete X), especially if your bus is not persistent. This allows you to republish integration events from a domain if needed without producing an inconsistent state between consumers.
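A sketch of what such idempotent snapshot events and a consumer-side handler could look like; the event shapes are assumptions for the example:

```typescript
// Hypothetical integration events: snapshots of current state, not deltas, so that
// republishing them leaves every consumer in the same final state.
type ProductIntegrationEvent =
  | { type: "ProductUpserted"; productId: string; name: string; price: number }
  | { type: "ProductDeleted"; productId: string };

// A consumer's local projection; applying the same event twice has no extra effect.
function apply(
  view: Map<string, { name: string; price: number }>,
  event: ProductIntegrationEvent,
): void {
  switch (event.type) {
    case "ProductUpserted":
      view.set(event.productId, { name: event.name, price: event.price }); // idempotent upsert
      break;
    case "ProductDeleted":
      view.delete(event.productId); // deleting twice is also safe
      break;
  }
}
```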

Answered by Matthew.Lothian.