I'm trying to implement my own CQRS infrastructure with Event Sourcing to learn it better. As a sample project I'm implementing a blog engine, I know it might not be a perfect fit but I just want to work on something real.
The problem I've come to now is validation. Every post has a shortUrl, and the shortUrl should be unique. But where should I put this validation in the domain? I know that I can validate before I even send the command, by reading from my read store to check whether the shortUrl is taken when creating a CreatePost or UpdatePost command.
I can think of two "solutions".
A Blog aggregate that keeps track of all blog-related settings and also holds references to all the posts. The problem with this, in my eyes, is that I would have to handle communication between aggregates, and every time I need to validate the uniqueness of a shortUrl I would need to read all the events from the event store to rebuild all the posts, which seems too complicated. Are there any other alternatives? Note that I know my domain might not be the best fit for CQRS and DDD, but I'm doing this to learn in a small domain.
CQRS is implemented by a separation of responsibilities between commands and queries, and event sourcing is implemented by using the sequence of events to track changes in data.
CQRS + Event Sourcing: applying Event Sourcing on top of CQRS means persisting each event on the write side of the application; the read side is then derived from the sequence of events. In my opinion, Event Sourcing is not required when we implement CQRS.
Event sourcing has several benefits: It solves one of the key problems in implementing an event-driven architecture and makes it possible to reliably publish events whenever state changes. Because it persists events rather than domain objects, it mostly avoids the object‑relational impedance mismatch problem.
CQRS is a popular architectural pattern because it addresses a problem common to most enterprise applications. Separating write behavior from read behavior, which is the essence of the CQRS pattern, provides stability and scalability to enterprise applications while also improving overall performance.
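To make the separation concrete, here is a minimal sketch (all names hypothetical, not from the question) where the write side only appends events and the read model is derived from that event stream:

```python
from dataclasses import dataclass

@dataclass
class CreatePost:
    post_id: str
    title: str

class WriteSide:
    """Command side: handles commands by appending events; the event log
    is the source of truth."""
    def __init__(self):
        self.events = []

    def handle(self, cmd: CreatePost):
        self.events.append(("PostCreated", cmd.post_id, cmd.title))

def project(events):
    """Query side: derive the read model (a dict of posts) from the events."""
    read_model = {}
    for kind, post_id, title in events:
        if kind == "PostCreated":
            read_model[post_id] = {"title": title}
    return read_model

write = WriteSide()
write.handle(CreatePost("p1", "Hello CQRS"))
posts = project(write.events)
print(posts["p1"]["title"])  # Hello CQRS
```

Note that the read model here is rebuilt on demand purely from events, which is the event-sourcing half; a CQRS-only system could instead update a separate read database directly.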
I would go for an application service that is solely responsible for generating unique ShortURLs. You can use a transactional DB to implement this behaviour. Typically this service would be used by the command-handling part of the BlogPost aggregate. If there is a duplicate ShortURL, you can fire a DuplicateUrlErrorEvent. You can pre-catch this in the UI (but never 100%) by creating a thin query model over the same data source, so you can check whether a shortened URL is unique before submitting the post (as described by @RyanR's answer).
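A sketch of what such a service might look like, using SQLite as the transactional store and a PRIMARY KEY constraint to reserve URLs atomically (the class and method names are my own, not from the answer):

```python
import sqlite3

class ShortUrlService:
    """Hypothetical application service: a UNIQUE/PRIMARY KEY constraint
    makes the reservation atomic; a duplicate insert fails, at which point
    the command handler would fire a DuplicateUrlErrorEvent."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS short_urls (url TEXT PRIMARY KEY)")

    def try_reserve(self, url: str) -> bool:
        try:
            with self.conn:  # commits on success, rolls back on error
                self.conn.execute("INSERT INTO short_urls (url) VALUES (?)", (url,))
            return True
        except sqlite3.IntegrityError:
            return False  # duplicate: trigger DuplicateUrlErrorEvent

svc = ShortUrlService(sqlite3.connect(":memory:"))
print(svc.try_reserve("my-post"))  # True
print(svc.try_reserve("my-post"))  # False
```

The key point is that the database, not the aggregate, enforces uniqueness, so no cross-aggregate communication is needed.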
I've read through the various answers on this and the related question.
The decision comes down to correctness. If you can be forgiving and accept imperfect behavior for some operations, your problem is much simpler to solve, especially under weak consistency guarantees.
However, if you want consistency, you should use a persistence service with strong consistency guarantees.
For example, the command that creates the short URL will validate that the read store does not already contain that short URL, and we only commit our event if we can first commit the changes to our read store.
If we can commit our changes to the read store, we have not violated any uniqueness constraint (assuming that your read store enforces such a constraint) and we can then proceed.
However, since we have two transactions, not necessarily on the same database, we might fail after the first commit. This is OK, because the operation as a whole will also fail. The read store will reflect an inconsistent state for some time, but as soon as we repair the aggregate the read store will be back in a consistent state.
As a maintenance procedure, we could periodically repair aggregates that may have been subject to errors. We can do this by introducing an error flag that is only cleared once both transactions commit successfully.
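The two-transaction flow described above can be sketched roughly like this (a toy in-memory model, with invented names, standing in for the two databases; the `fail_event_commit` flag simulates a crash between the commits):

```python
class TwoPhaseWriter:
    def __init__(self):
        self.read_store = set()   # strongly consistent store enforcing uniqueness
        self.event_store = []     # the event log
        self.error_flags = set()  # aggregates needing repair

    def create_short_url(self, agg_id, url, fail_event_commit=False):
        if url in self.read_store:
            return "duplicate"
        self.read_store.add(url)          # first transaction: read store
        self.error_flags.add(agg_id)      # flag set until both commits succeed
        if fail_event_commit:
            return "pending-repair"       # crashed between the two commits
        self.event_store.append((agg_id, "ShortUrlAssigned", url))
        self.error_flags.discard(agg_id)  # both transactions committed
        return "ok"

    def repair(self):
        # Maintenance: drop read-store entries that have no committed event.
        committed = {url for _, _, url in self.event_store}
        self.read_store &= committed
        self.error_flags.clear()

w = TwoPhaseWriter()
print(w.create_short_url("a1", "hello"))                          # ok
print(w.create_short_url("a2", "hello"))                          # duplicate
print(w.create_short_url("a3", "world", fail_event_commit=True))  # pending-repair
w.repair()
print(w.create_short_url("a4", "world"))                          # ok
```

After the simulated failure, "world" is stuck in the read store until `repair` reconciles it against the event log, which mirrors the temporary inconsistency described above.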
There was an example in which a bank would allow a user to overdraw their account because they have surcharges to compensate for it. This raises questions, because it seems sloppy to solve a problem like that, lazy even. Some call it smart. I don't know what to think. The bank probably has enough money to cover it, so they might as well ignore it, but that's not how the world currently works. Anyway, I digress.
From a correctness stance, our read store has a strong consistency guarantee, and we would write our projection in such a way that we cannot commit a transaction to the read store if it would put the balance in the negative. As such, the worst thing that can happen is that a charge is deducted in the read store but the operation was never fully committed to the event store. The user would see money missing from their account until the maintenance procedure noticed the error flag and healed the account. This, I think, is a workable compromise.
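A minimal sketch of such a projection (class and method names invented for illustration): the read-store write itself rejects any charge that would take the balance negative, so the invariant is enforced by the strongly consistent store rather than by the aggregate.

```python
class AccountProjection:
    """Hypothetical read-store projection that refuses to commit a
    transaction putting the balance in the negative."""
    def __init__(self, balance=0):
        self.balance = balance

    def apply_charge(self, amount):
        if self.balance - amount < 0:
            raise ValueError("charge would overdraw the account")
        self.balance -= amount  # commit only when the invariant holds

acct = AccountProjection(balance=100)
acct.apply_charge(60)
print(acct.balance)  # 40
try:
    acct.apply_charge(50)
except ValueError:
    print("rejected")  # rejected
```

If the matching event-store commit then fails, the deducted charge is exactly the temporary inconsistency the error flag and maintenance procedure are there to heal.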