I looked at a lot of event sourcing tutorials, and all of them use simple demos to focus on the tutorial's topic (event sourcing).
That's fine until you hit something in a real-world application that isn't covered by one of these tutorials :)
I hit something like this. I have two databases: one event store and one projection store (read models). All aggregates have a GUID Id, which was 100% fine until now.
Now I have created a new JobAggregate and a Job projection. And my company requires every job to have a unique, incremental int64 Job Id.
Now I'm looking stupid :) An additional issue is that jobs are created multiple times per second! That means the method that hands out the next number has to be really safe.
In the past (without ES) I had a table, defined the PK as an auto-increment int64, saved the Job, and the DB did the job of giving me the next number. Done.
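Roughly, the old flow looked like this (a minimal JDBC sketch; the jobs table and its columns are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OldJobRepository {

    // Insert a job and let the database hand out the next int64 id.
    // Assumes a made-up table: jobs(id BIGINT AUTO_INCREMENT PRIMARY KEY, name VARCHAR).
    public long saveJob(Connection conn, String name) throws SQLException {
        try (PreparedStatement stmt = conn.prepareStatement(
                "INSERT INTO jobs (name) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            stmt.setString(1, name);
            stmt.executeUpdate();
            try (ResultSet keys = stmt.getGeneratedKeys()) {
                keys.next();
                return keys.getLong(1); // the auto-incremented job id
            }
        }
    }
}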
But how can I do this within my aggregate or command handler? Normally the Job projection is created by the event handler, but that's too late in the process, because the aggregate should already have the int64. (So that replaying the aggregate onto an empty DB produces the same Aggregate Id -> Job Id relation.)
How should I solve this issue?
Kind regards
In the past (without ES) I had a table, defined the PK as an auto-increment int64, saved the Job, and the DB did the job of giving me the next number. Done.
There's one important thing to notice in this sequence, which is that the generation of the unique identifier and the persistence of the data into the book of record both share a single transaction.
When you separate those ideas, you are fundamentally looking at two transactions -- one that consumes the id, so that no other aggregate tries to share it, and another to write that id into the store.
The best answer is to arrange for both writes to be part of the same transaction -- for example, if you were using a relational database as your event store, then you could create an entry in your "aggregate_id to long" table in the same transaction in which the events are saved.
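A minimal sketch of that idea, assuming a relational event store accessed over JDBC; the events and aggregate_id_to_long tables and all names here are invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
import java.util.UUID;

public class RelationalEventStore {

    // Saves a new job aggregate's events and reserves its long id in ONE
    // transaction, so an id can never be consumed without the events being
    // persisted (and vice versa). Assumes hypothetical tables:
    //   events(aggregate_id, payload)
    //   aggregate_id_to_long(job_id BIGINT AUTO_INCREMENT PRIMARY KEY, aggregate_id UUID UNIQUE)
    public long saveNewJob(Connection conn, UUID aggregateId, List<String> events)
            throws SQLException {
        conn.setAutoCommit(false);
        try {
            try (PreparedStatement insertEvent = conn.prepareStatement(
                    "INSERT INTO events (aggregate_id, payload) VALUES (?, ?)")) {
                for (String payload : events) {
                    insertEvent.setString(1, aggregateId.toString());
                    insertEvent.setString(2, payload);
                    insertEvent.addBatch();
                }
                insertEvent.executeBatch();
            }
            long jobId;
            try (PreparedStatement reserveId = conn.prepareStatement(
                    "INSERT INTO aggregate_id_to_long (aggregate_id) VALUES (?)",
                    Statement.RETURN_GENERATED_KEYS)) {
                reserveId.setString(1, aggregateId.toString());
                reserveId.executeUpdate();
                try (ResultSet keys = reserveId.getGeneratedKeys()) {
                    keys.next();
                    jobId = keys.getLong(1);
                }
            }
            conn.commit(); // both writes succeed together, or neither does
            return jobId;
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}

Since the id is reserved in the same database and transaction as the events, replaying onto an empty projection store can rebuild the same Aggregate Id -> Job Id relation.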
Another possibility is to treat the "create" of the aggregate as a Prepared event followed by a Created event, with an event handler that responds to the Prepared event by reserving the long identifier post facto and then sends a new command to the aggregate to assign the long identifier to it. That way, all consumers of Created see the aggregate with the long already assigned.
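A rough sketch of that flow; every type here (Prepared, AssignJobNumber, the sequence source, the command bus) is hypothetical wiring, not any specific framework's API:

import java.util.UUID;

// Sketch of the prepare/assign/create choreography. The handler reacts to
// Prepared by reserving the next long and sending a command back to the
// aggregate, which only then emits Created with the long on it.
public class JobNumberAssignmentHandler {

    record Prepared(UUID aggregateId) {}
    record AssignJobNumber(UUID aggregateId, long jobId) {}

    interface SequenceSource { long next(); }        // e.g. backed by a DB sequence
    interface CommandBus { void send(Object command); }

    private final SequenceSource sequence;
    private final CommandBus commandBus;

    public JobNumberAssignmentHandler(SequenceSource sequence, CommandBus commandBus) {
        this.sequence = sequence;
        this.commandBus = commandBus;
    }

    // Event handler: runs after the Prepared event is persisted.
    public void on(Prepared event) {
        long jobId = sequence.next(); // reserve the long post facto
        commandBus.send(new AssignJobNumber(event.aggregateId(), jobId));
    }
}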
It's worth noting that you are effectively assigning a random long to each aggregate you create, so you had better dig in to what benefit the company thinks it is getting from this -- if they expect the identifiers to provide ordering guarantees, or completeness guarantees, you should understand that going in.
There's nothing particularly wrong with reserving the long first; depending on how frequently the save of the aggregate fails, you may end up with gaps in the sequence. For the most part, you should expect to be able to keep the failure rate small (i.e., you check that the command is expected to succeed before you actually run it).
In a real sense, the generation of unique identifiers falls under the umbrella of set validation; we usually "cheat" with UUIDs by abandoning any pretense of ordering and pretending that the risk of collision is zero. Relational databases are great for set validation; event stores maybe not so much. If you need unique sequential identifiers controlled by the model, then your "set of assigned identifiers" needs to be within an aggregate.
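If you do need that, a sketch of such a sequence aggregate might look like the following; all names are hypothetical, and the uniqueness guarantee comes from the event store's optimistic concurrency check when this single stream's pending events are appended:

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Sketch of keeping the "set of assigned identifiers" inside one aggregate.
// Uniqueness and ordering come from the event store rejecting a concurrent
// append to this stream (expected-version check), not from this class alone.
public class JobNumberSequence {

    record NumberReserved(long jobId, UUID ownerAggregateId) {}

    private long lastIssued = 0;
    private final List<NumberReserved> pendingEvents = new ArrayList<>();

    // Command: reserve the next sequential number for a job aggregate.
    public long reserveNext(UUID ownerAggregateId) {
        NumberReserved event = new NumberReserved(lastIssued + 1, ownerAggregateId);
        when(event);              // update in-memory state
        pendingEvents.add(event); // staged for an expected-version append
        return event.jobId();
    }

    // Applied both for new events and when replaying the stream on load.
    public void when(NumberReserved event) {
        lastIssued = event.jobId();
    }

    public List<NumberReserved> pendingEvents() {
        return pendingEvents;
    }
}

The trade-off is that this single stream becomes a serialization point: at multiple job creations per second, every reservation contends on it.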
The key phrase to follow is "cost to the business" -- make sure you understand why the long identifiers are valuable.