 

How to update/migrate data when using CQRS and an EventStore?

So I'm currently diving into the CQRS architecture along with the EventStore "pattern".

It opens applications up to a new dimension of scalability and flexibility, as well as testability.

However, I'm still stuck on how to properly handle data migration.

Here is a concrete use case:

Let's say I want to manage a blog with articles and comments.

On the write side I'm using MySQL, and on the read side Elasticsearch. Every time I process a Command, I persist the data on the write side and dispatch an Event to persist the data on the read side.
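As a rough illustration of that flow, here is a minimal sketch in TypeScript. All names here (PublishArticleCommand, the store interfaces) are hypothetical and not tied to any particular framework:

```typescript
// Hypothetical command and event types for the blog example.
interface PublishArticleCommand {
  articleId: string;
  title: string;
  body: string;
}

interface ArticlePublishedEvent {
  type: "ArticlePublished";
  articleId: string;
  title: string;
}

// Write side: persist to MySQL, then dispatch an event for the read side.
async function handlePublishArticle(
  cmd: PublishArticleCommand,
  writeStore: { save(cmd: PublishArticleCommand): Promise<void> },
  eventBus: { publish(e: ArticlePublishedEvent): Promise<void> },
): Promise<void> {
  await writeStore.save(cmd); // e.g. a MySQL insert/update
  await eventBus.publish({
    type: "ArticlePublished",
    articleId: cmd.articleId,
    title: cmd.title,
  });
}

// Read side: a handler projects the event into Elasticsearch.
async function onArticlePublished(
  e: ArticlePublishedEvent,
  readStore: { index(doc: object): Promise<void> },
): Promise<void> {
  await readStore.index({ id: e.articleId, title: e.title });
}
```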

Now let's say I have some sort of ViewModel called ArticleSummary, which contains an id and a title.

I have a new feature request: include the article's tags in my ArticleSummary. I would add some sort of dictionary to my model to hold the tags.
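For concreteness, the change to the ViewModel itself is trivial; it's the existing read-side documents that are the problem. A sketch (field names are assumptions):

```typescript
// Before: the read-side document only carries id and title.
interface ArticleSummaryV1 {
  id: string;
  title: string;
}

// After: the new feature adds a tags collection.
interface ArticleSummaryV2 {
  id: string;
  title: string;
  tags: string[]; // new field; existing documents don't have it yet
}
```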

Given that the tags already exist in my write layer, I would need to update the read-side "table", or create a new one, to properly serve the newly included data.

I'm aware of the EventLog Replay strategy, which consists of replaying all the events to "update" all the ViewModels, but, seriously, is it viable when you have a billion rows?

Are there any proven strategies? Any feedback?

Trent asked Sep 10 '13

People also ask

What is the difference between CQRS and Event Sourcing?

CQRS is implemented by a separation of responsibilities between commands and queries, and event sourcing is implemented by using the sequence of events to track changes in data.

Can we use CQRS without Event Sourcing?

Of course, CQRS can also be used without event sourcing or DDD, just as these concepts work without CQRS. However, there is no denying that the three concepts complement each other very well.

What is the difference between event driven and Event Sourcing?

Event Sourcing is about using events as the state. Event Driven Architecture is about using events to communicate between service boundaries.
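To make "events as the state" concrete, here is a tiny sketch in which current state is never stored directly, only derived by folding the event history. The event names are made up for the blog example:

```typescript
type ArticleEvent =
  | { type: "ArticlePublished"; title: string }
  | { type: "ArticleRetitled"; title: string };

interface ArticleState {
  title: string;
}

// Apply one event to the current state.
function apply(state: ArticleState, e: ArticleEvent): ArticleState {
  if (e.type === "ArticlePublished") return { title: e.title };
  return { ...state, title: e.title }; // ArticleRetitled
}

// Event sourcing: state is reconstructed by replaying the event
// sequence in order, rather than read from a mutable record.
function rebuild(events: ArticleEvent[]): ArticleState {
  return events.reduce(apply, { title: "" });
}
```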


2 Answers

I'm aware of the EventLog Replay strategy, which consists of replaying all the events to "update" all the ViewModels, but, seriously, is it viable when you have a billion rows?

I would say "yes" :)

You are going to write a handler for the new summary feature that updates your query side anyway, so you already have the code. Writing special once-off migration code may not buy you all that much. I would go with migration code when you have to do an initial load of, say, a new system that requires a once-off data transformation, but in this case your infrastructure already exists.

You would need to send only the relevant events to the new handler, so you wouldn't be replaying everything either.
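A minimal sketch of that idea, assuming the event store can be read as a stream and filtered by event type (the store interface here is an assumption for illustration, not a real API):

```typescript
interface StoredEvent {
  type: string;
  payload: unknown;
}

// Replay only the events the new handler cares about,
// instead of pushing the whole log through every handler.
async function replayFor(
  eventTypes: Set<string>,
  store: { readAll(): AsyncIterable<StoredEvent> },
  handler: (e: StoredEvent) => Promise<void>,
): Promise<void> {
  for await (const event of store.readAll()) {
    if (eventTypes.has(event.type)) {
      await handler(event); // e.g. the new tags projection
    }
  }
}

// Usage: rebuild only the tag portion of ArticleSummary.
// await replayFor(new Set(["ArticleTagged", "ArticleUntagged"]), store, tagHandler);
```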

In any event, if you have a billion rows of data your servers would probably be able to handle the load :)

Eben Roux answered Nov 01 '22


I'm currently using NEventStore by JOliver.

When we started, we were replaying our entire store back through our denormalizers/event handlers when the application started up.

We were initially keeping all our data in memory but knew this approach wouldn't be viable in the long term.

The approach we use currently is that we can replay an individual denormalizer, which makes things a lot faster since you aren't unnecessarily replaying events through denormalizers that haven't changed.

The trick we found, though, was that we needed another representation of our commits so we could query for all the events handled by a given event type - a query that cannot be performed against the normal store.
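A sketch of what such a side representation could look like: whenever a commit is stored, also record each event's type in a queryable index. The shape of Commit and the index are assumptions for illustration, not NEventStore's actual API:

```typescript
interface Commit {
  commitId: string;
  events: { type: string; payload: unknown }[];
}

// Side index: event type -> commit ids, maintained alongside the store.
// It answers "which commits contain events of type X?", a query the
// normal append-only store can't serve efficiently.
class EventTypeIndex {
  private byType = new Map<string, string[]>();

  record(commit: Commit): void {
    for (const e of commit.events) {
      const ids = this.byType.get(e.type) ?? [];
      ids.push(commit.commitId);
      this.byType.set(e.type, ids);
    }
  }

  commitsFor(eventType: string): string[] {
    return this.byType.get(eventType) ?? [];
  }
}
```

In practice this index could live in a relational table or a search index; the point is simply that it is kept in sync with the commits so a single denormalizer can be replayed from only the commits it cares about.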

boz answered Nov 01 '22