We are using event sourcing and construct aggregates from a stream of events. I have two aggregates, A1 and A2. A1 is used as a template to create A2, and A1 can be quite large. The fundamental idea of Event Sourcing is that every change to the state of an application is captured in an event object. So to save A2 we have to store a lot of information in its first event.
Is this situation common, or is creating from a template a bad idea? Is there a better way to solve it?
This approach can be more scalable due to the immutability of events. The system only needs to be able to read data from the event store, or append data to the event store.
Indeed, Event Sourcing shines the most when we can work with the business to find the business events. Event Storming and Event Modeling proved that events work great as a way to describe business processes. We can use them as “checkpoints” of our workflow. Events are also essential as a data model.
Event Sourcing is about using events as the state. Event Driven Architecture is about using events to communicate between service boundaries.
When building an event store, the typical approach is to serialize the event and then persist the type of the event, the body of the event (the serialized event itself), an identifier and the time it occurred.
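To make that concrete, here is a minimal sketch of what such a record and store interface might look like. StoredEvent, IEventStore and the exact fields are illustrative assumptions, not the schema of any particular event store.

using System;
using System.Collections.Generic;

public sealed record StoredEvent(
    Guid StreamId,        // identifier of the aggregate/stream the event belongs to
    string EventType,     // e.g. "InvoiceCreated"; used to pick the deserializer on replay
    string Body,          // the serialized event itself (e.g. JSON)
    DateTime OccurredAt); // when the event occurred

public interface IEventStore
{
    void Append(StoredEvent storedEvent);                  // append-only write
    IReadOnlyList<StoredEvent> ReadStream(Guid streamId);  // read back to rebuild an aggregate
}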
It would help if you posted more concrete examples of your aggregates and events. In general, you can create more granular events if that makes sense in your situation. Then, instead of a 1-1 relationship between command and event, you will have a 1-N relationship, which is completely in line with CQRS.
So to give you an example:
CreateInvoice : Command
- InvoiceId
- Customer (10 fields)
- Address (5 more fields)
- InvoiceLine[] (where each InvoiceLine also has 10 fields or so)
- Rest of 100 or so fields
InvoiceCreated : Event
- InvoiceId
- Customer (10 fields)
- Address (5 more fields)
- InvoiceLine[] (where each InvoiceLine also has 10 fields or so)
- Total
- Rest of 100 or so fields
And in Command Handler:
void Handle(CreateInvoice cmd)
{
    var invoice = new Invoice(cmd.InvoiceId, cmd.Customer, cmd.Address, cmd.Lines ....);
    uow.Register(invoice);
}
where only one InvoiceCreated event will be raised.
Instead you can have more granular events:
InvoiceCreated : Event
- InvoiceId
- Customer
- Address
InvoiceLineAdded
- InvoiceId
- Item
- Vat
- Subtotal
- Etc
Then in Command Handler:
void Handle(CreateInvoice cmd)
{
    var invoice = new Invoice(cmd.InvoiceId, cmd.Customer, cmd.Address);
    foreach (var line in cmd.Lines)
    {
        invoice.AddLine(line.Item, line.Quantity, line.Price, ...);
    }
    uow.Register(invoice);
}
Here the constructor will raise the InvoiceCreated event and the AddLine method will raise an InvoiceLineAdded event. You can then have events like InvoiceLineChanged/InvoiceLineRemoved, which you can use for updates.
That will allow you to have more granular events while still allowing more coarse-grained commands to be issued.
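To illustrate, here is a rough sketch of such an aggregate. The types, field lists and the Raise/UncommittedEvents members are assumptions for the example, not any particular framework's API.

using System;
using System.Collections.Generic;

public record InvoiceCreated(Guid InvoiceId, string Customer, string Address);
public record InvoiceLineAdded(Guid InvoiceId, string Item, int Quantity, decimal Price);

public class Invoice
{
    private readonly Guid _id;
    private readonly List<object> _uncommittedEvents = new();

    public Invoice(Guid invoiceId, string customer, string address)
    {
        _id = invoiceId;
        Raise(new InvoiceCreated(invoiceId, customer, address)); // ctor raises the creation event
    }

    public void AddLine(string item, int quantity, decimal price)
    {
        // line-level invariants (quantity > 0, etc.) would be checked here
        Raise(new InvoiceLineAdded(_id, item, quantity, price)); // one granular event per line
    }

    // events collected here are what the unit of work appends to the event store
    public IReadOnlyList<object> UncommittedEvents => _uncommittedEvents;

    private void Raise(object @event) => _uncommittedEvents.Add(@event);
}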
Big commands are fine when they represent atomic actions from the user's or system's point of view.
P.S. About using aggregates as templates: I wouldn't bother with that, and would instead create a data structure to serve as an accumulator that collects intermediate state. It can then simply be serialized/deserialized. If there is no behavior behind filling in a template, then you don't need an aggregate at all. It's just a blob of data that will later be used to create an aggregate and run business rules. You're probably using this "template" object to store user-entered state between multiple requests, like session state, right ;)?
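A minimal sketch of that accumulator idea, assuming a web scenario where the draft is kept per user between requests (InvoiceDraft and InvoiceLineDraft are hypothetical names, not part of any library):

using System;
using System.Collections.Generic;

// Plain serializable draft object; no behavior, no events, just accumulated input.
public class InvoiceDraft
{
    public Guid? TemplateId { get; set; }        // the "template" it was copied from, if any
    public string Customer { get; set; }
    public string Address { get; set; }
    public List<InvoiceLineDraft> Lines { get; set; } = new();
}

public class InvoiceLineDraft
{
    public string Item { get; set; }
    public int Quantity { get; set; }
    public decimal Price { get; set; }
}

// Serialize/deserialize the draft between requests; only when the user confirms
// do you build the real command and let the aggregate enforce business rules, e.g.:
// var cmd = new CreateInvoice(Guid.NewGuid(), draft.Customer, draft.Address, draft.Lines);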
Hope that helps.