I'm still struggling with what must be basic (and resolved) issues related to CQRS style architecture:
How do we implement business rules that rely on a set of Aggregate Roots?
Take, as an example, a booking application. It may enable you to book tickets for a concert, seats for a movie or a table at a restaurant. In all cases, there's only going to be a limited number of 'items' for sale.
Let's imagine that the event or place is very popular. When sales open for a new event or time slot, reservations start to arrive very quickly - perhaps many per second.
On the query side we can scale massively, and Reservation Commands are put on a queue to be handled asynchronously by an autonomous component. At first, as we pull Reservation Commands off the queue, we will accept them, but at some point we will have to start rejecting the rest.
How do we know when we reach the limit?
For each Reservation Command we would have to query some sort of store to figure out if we can accommodate the request. This means that we will need to know how many reservations we have already received at that time.
However, if the Domain Store is a non-relational data store such as e.g. Windows Azure Table Storage, we can't very well do a SELECT COUNT(*) FROM ...
One option would be to keep a separate Aggregate Root that simply keeps track of the current count.
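A minimal sketch of that idea. The class and method names here are illustrative, not from any particular framework:

```python
# Hypothetical counting Aggregate Root: it knows nothing about individual
# reservations, only how many seats have been taken for one event.

class EventCapacity:
    """Tracks how many seats have been reserved for a single event."""

    def __init__(self, event_id: str, total_seats: int) -> None:
        self.event_id = event_id
        self.total_seats = total_seats
        self.reserved = 0

    def try_reserve(self, quantity: int) -> bool:
        """Accept the reservation if enough seats remain; otherwise reject."""
        if self.reserved + quantity > self.total_seats:
            return False
        self.reserved += quantity
        return True


capacity = EventCapacity("concert-42", total_seats=2)
print(capacity.try_reserve(1))  # True
print(capacity.try_reserve(2))  # False: only one seat left
```

Note that this only works if all reservations for an event flow through the same instance; as the next paragraph points out, keeping such a derived count consistent with the real reservations is exactly the hard part.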
The second Aggregate Root would be a denormalized aggregation of the first one, but if the underlying data store doesn't support transactions, it's very likely that the two will get out of sync in high-volume scenarios (which is exactly what we are trying to address in the first place).
One possible solution is to serialize handling of the Reservation Commands so that only one at a time is handled, but this goes against our goals of scalability (and redundancy).
Such scenarios remind me of standard "out of stock" scenarios, but the difference is that we can't very well put the reservation on back order. Once an event is sold out, it's sold out, so I can't see what a compensating action would be.
How do we handle such scenarios?
After thinking about this for some time it finally dawned on me that the underlying problem is less related to CQRS than it is to the non-transactional nature of disparate REST services.
Really it boils down to this problem: if you need to update several resources, how do you ensure consistency if the second write operation fails?
Let's imagine that we want to write updates to Resource A and Resource B in sequence.
The first write operation can't easily be rolled back in the face of an exception, so what can we do? Catching and suppressing the exception to perform a compensating action against Resource A is not a viable option. First of all it's complex to implement, but secondly it's not safe: what happens if the first exception happened because of a failed network connection? In that scenario, we can't write a compensating action against Resource A either.
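To make the failure mode concrete, here is a small illustrative sketch; the two dicts stand in for two independent REST resources, and the function names are made up for the example:

```python
# Two sequential writes with no shared transaction: if the second one
# fails, the first has already committed and cannot be rolled back.

class NetworkError(Exception):
    pass

resource_a: dict = {}
resource_b: dict = {}

def put_resource_a(key: str, value: str) -> None:
    resource_a[key] = value                      # this write succeeds

def put_resource_b(key: str, value: str) -> None:
    raise NetworkError("connection dropped")     # simulate a failed write

def update_both(key: str, value: str) -> None:
    put_resource_a(key, value)   # commits immediately
    put_resource_b(key, value)   # fails -> A and B are now inconsistent

try:
    update_both("booking-1", "reserved")
except NetworkError:
    pass

print(resource_a)  # {'booking-1': 'reserved'} -- A was updated
print(resource_b)  # {}                        -- B was not
```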
The key lies in explicit idempotency. While Windows Azure Queues don't guarantee exactly-once semantics, they do guarantee at-least-once semantics. This means that in the face of intermittent exceptions, the message will later be replayed.
In the previous scenario, this is what happens:

1. The write against Resource A succeeds.
2. The write against Resource B fails with an exception.
3. Because the message was never deleted from the queue, it becomes visible again and is replayed.
4. The write against Resource A is repeated; since it's idempotent, the state of Resource A doesn't change.
5. The write against Resource B now succeeds.
When all write operations are idempotent, eventual consistency can be achieved with message replays.
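A minimal sketch of the replay mechanism, assuming a simplified in-memory queue and stores in place of Windows Azure Queues and Table Storage:

```python
# At-least-once delivery with idempotent writes. The queue, resources and
# message format are simplified stand-ins, not the Azure API.

from collections import deque

resource_a: dict = {}
resource_b: dict = {}

def idempotent_write(store: dict, key: str, value: str) -> None:
    # Re-writing the same value is a no-op, so replays are safe.
    if store.get(key) != value:
        store[key] = value

failures_left = {"b": 1}  # simulate one transient failure against Resource B

def handle(message: dict) -> None:
    idempotent_write(resource_a, message["id"], message["value"])
    if failures_left["b"] > 0:
        failures_left["b"] -= 1
        raise ConnectionError("transient failure writing Resource B")
    idempotent_write(resource_b, message["id"], message["value"])

queue = deque([{"id": "booking-1", "value": "reserved"}])
while queue:
    message = queue[0]      # peek: the message stays on the queue
    try:
        handle(message)
        queue.popleft()     # delete only after every write succeeded
    except ConnectionError:
        pass                # leave the message -> it will be replayed

# After the replay, Resource A and Resource B agree.
```

On the first delivery the write to A succeeds and the write to B fails; on the replay, A's write is a no-op and B's write completes, which is the eventual consistency described above.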
Interesting question and with this one you are nailing one of the pain points in CQRS.
The way Amazon handles this is by having the business scenario cope with an error state when the items requested are sold out. The error state simply notifies the customer by email that the items requested are not currently in stock, along with an estimated shipping date.
However - this does not fully answer your question.
Thinking of a ticket-selling scenario, I would make sure to tell the customer that what they submitted was a reservation request, that the request would be processed as soon as possible, and that they'll receive the final answer in an email later. By allowing this, some customers might get an email with a rejection of their request.
Now, could we make this rejection less painful? Certainly: by inserting a key in our distributed cache with the percentage or number of items in stock, and decrementing this counter whenever an item is sold. This way we could warn the user before the reservation request is submitted (say, if only 10% of the initial number of items is left) that they might not be able to get the item in question. If the counter is at zero, we would simply refuse to accept any more reservation requests.
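A sketch of that counter idea. In a real system the decrement would be an atomic operation in the distributed cache (Redis's DECR, for instance); here a lock-guarded in-process counter stands in for it, and all names are illustrative:

```python
# Approximate stock counter used to warn or refuse *before* a reservation
# request is accepted. The cache is simulated with an in-process counter.

import threading

class StockCounter:
    def __init__(self, initial: int, warn_fraction: float = 0.10) -> None:
        self.initial = initial
        self.remaining = initial
        self.warn_fraction = warn_fraction
        self._lock = threading.Lock()

    def status(self) -> str:
        """What to show the user before they submit a reservation request."""
        with self._lock:
            if self.remaining <= 0:
                return "sold-out"    # refuse further requests
            if self.remaining <= self.initial * self.warn_fraction:
                return "low-stock"   # warn: the request may be rejected
            return "available"

    def try_decrement(self) -> bool:
        """Called whenever an item is sold."""
        with self._lock:
            if self.remaining <= 0:
                return False
            self.remaining -= 1
            return True


counter = StockCounter(initial=20)
print(counter.status())      # available
for _ in range(18):
    counter.try_decrement()
print(counter.status())      # low-stock: 2 of 20 left, i.e. 10%
for _ in range(2):
    counter.try_decrement()
print(counter.status())      # sold-out
```

The counter is only a hint (it can drift from the true count, as discussed in the question), but that's acceptable here because the authoritative accept/reject decision is still made asynchronously on the command side.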
My point being:
1) Let the user know that it's a request they are making, and that this request might get refused.
2) Inform the user when the chances of getting the item in question are low.
Not exactly a precise answer to your question, but this is how I would handle a scenario like this when dealing with CQRS.