I've been working on a microservices-based application for the past few months and I'm still getting used to its distributed nature. There is one pattern I've run into multiple times, and I'm not sure what the preferred way to handle it is.
Let's say we have services A, B, and C. Service A exposes an API where one of the methods depends on calling an API exposed by B to create a resource RB, and on an API exposed by C to create a resource RC. In a perfect world A, B, and C are all up, but the case I've run into a few times is that either B or C is down while A's API logic is executing. Worse still: what should happen when RB has been created, C is down so RC cannot be created, and we try to roll back the creation of RB by calling, say, /delete/ on service B, but B has gone down in the meantime? Now RB exists even though it shouldn't, because RC failed and the execution of A's API logic should fail as a whole.
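To make the failure window concrete, here is a rough sketch of the naive orchestration I'm describing. The URLs, endpoints, and response shape are placeholders, not real services:

```python
# Naive orchestration inside service A, with hypothetical endpoints on B and C.
# The problem is the failure window: if creating RC fails and the compensating
# delete on B also fails (because B is now down too), RB is left orphaned.
import requests

B_URL = "http://service-b"   # hypothetical base URLs
C_URL = "http://service-c"

def create_resources(payload: dict) -> dict:
    # Step 1: create RB on service B (assumed to return {"id": ...}).
    rb = requests.post(f"{B_URL}/resources", json=payload, timeout=5)
    rb.raise_for_status()
    rb_id = rb.json()["id"]

    try:
        # Step 2: create RC on service C.
        rc = requests.post(f"{C_URL}/resources", json=payload, timeout=5)
        rc.raise_for_status()
        return {"rb": rb_id, "rc": rc.json()["id"]}
    except requests.RequestException:
        # Step 3: best-effort compensation -- delete RB again.
        # If B is down at this exact moment, this call raises as well and
        # RB stays behind: the situation described above.
        requests.delete(f"{B_URL}/resources/{rb_id}", timeout=5).raise_for_status()
        raise
```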
The same A, B, and C could also be three nodes in a cluster trying to propagate data across the cluster whenever data is posted to one of the nodes.
Sorry for the long text, thanks.
This issue is decades old and there have been many different ways to solve it. The problem is that the type of distributed transaction management needed to actually implement what you are describing is difficult to get right and tends to lead to overly complicated solutions. This complexity is really the reason why things like EJB have gone the way of the dodo.
Over time things have evolved to the point that, in most cases, it is easier to make the system eventually consistent and offload retries to some form of message queue (as other posters have described). Of course there are situations where you simply cannot be eventually consistent, but those are usually easy to identify and are in the minority.
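As a rough illustration of that idea (the URL, endpoint, and the in-memory queue are just stand-ins for whatever durable broker or outbox table you would actually use), instead of deleting RB synchronously and hoping B is up, service A records a compensation task and a background worker retries it until B acknowledges it:

```python
# Sketch of the "eventually consistent + retry" approach. An in-memory queue
# stands in for a durable message broker or outbox table; the worker keeps
# retrying the compensating delete until service B is reachable again.
import queue
import threading
import time
import requests

B_URL = "http://service-b"                       # hypothetical base URL
compensations: "queue.Queue[str]" = queue.Queue()

def request_rb_cleanup(rb_id: str) -> None:
    # Called by A's API handler when creating RC fails. Enqueueing always
    # succeeds locally, so A can return an error to its caller immediately.
    compensations.put(rb_id)

def compensation_worker() -> None:
    # Background worker: retry each delete until it succeeds.
    while True:
        rb_id = compensations.get()
        while True:
            try:
                resp = requests.delete(f"{B_URL}/resources/{rb_id}", timeout=5)
                if resp.status_code in (200, 204, 404):   # 404 = already gone
                    break
            except requests.RequestException:
                pass                # B still down; back off and try again
            time.sleep(5)
        compensations.task_done()

# Run the worker alongside the API process (or as a separate consumer).
threading.Thread(target=compensation_worker, daemon=True).start()
```

The trade-off is that RB may exist for a while after the overall operation has failed; the system converges to the correct state instead of guaranteeing it atomically.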