
How to manage ViewModel changes in a CQRS + Event Sourcing Architecture

We are currently evaluating CQRS and Event Sourcing architectures. I am trying to understand the maintenance implications of using this kind of design. Two questions I am struggling to find answers to are these:

1) What happens if, after an application has been up and running for a while, there is a new requirement to add an additional field to a ViewModel on the ReadModel database? Say the Customer Zip Code is required on the CustomerList ViewModel, where it was not previously. The extra column can be added to the ViewModel database easily, but how does it get populated? As far as I can see, the only way is to clear the read database and replay all the events from scratch to rebuild the ReadModel database. But what if the application has been up and running for months, or years (as we hope it will)? That could be millions of events to replay, just to add data for a zip code column.

I have the same concern if, for whatever technical reason, the ReadModel database gets out of sync, or we want to add a new ReadModel database. It seems like the older the application is, and the more it is used, the harder and more expensive it is to get an up-to-date ReadModel back. Or am I missing a trick somewhere? Something like ReadModel snapshots?

2) What happens if, after all the millions of events have been replayed to rebuild the read database, some of the data doesn't line up with what was expected (i.e. it looks wrong)? Perhaps a bug somewhere in the event-storing or denormalizing routines caused this (and if there is one thing you can rely on in coding, it is bugs). How would you go about debugging this? It seems like an impossible task. Or maybe, again, I am missing a trick.

I would be interested to hear from anyone who has been running a system like this for a while, how the maintenance and upgrade paths have worked out for you.

Thanks for any time and input.

James asked Apr 07 '11

3 Answers

The beauty of using event sourcing with CQRS is the ability to destroy the read model and rebuild it from scratch, as has been mentioned. For some reason people have this idea that it's going to take a long time after you get above some arbitrary number of events. If you are using a relational database for your read models (and you most likely are), it's easy to open a transaction, run all of the events through the handlers, and then commit the transaction. It's only when the transaction commits that we actually touch the disk. Everything else is performed in memory, so it can be lightning fast. In fact, I wouldn't be surprised to see your system crank through a few million events in just a few minutes, if that.
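As an illustration, that rebuild-inside-one-transaction idea can be sketched like this. This is a minimal stand-in, not anyone's production code: the table, column, and event names are made up, and SQLite plays the role of the read database.

```python
import sqlite3

# Hypothetical denormalizers: each projects one event type into the read model.
def apply_customer_created(cur, event):
    cur.execute(
        "INSERT INTO customer_list (id, name, zip_code) VALUES (?, ?, ?)",
        (event["id"], event["name"], event.get("zip_code")),
    )

def apply_customer_moved(cur, event):
    cur.execute(
        "UPDATE customer_list SET zip_code = ? WHERE id = ?",
        (event["zip_code"], event["id"]),
    )

HANDLERS = {
    "CustomerCreated": apply_customer_created,
    "CustomerMoved": apply_customer_moved,
}

def rebuild_read_model(conn, events):
    """Drop and repopulate the read model by replaying the full event stream.

    All the work happens in memory inside one transaction; the disk is only
    touched at the final commit.
    """
    cur = conn.cursor()
    cur.execute("DROP TABLE IF EXISTS customer_list")
    cur.execute(
        "CREATE TABLE customer_list (id INTEGER PRIMARY KEY, name TEXT, zip_code TEXT)"
    )
    for event in events:  # same handlers as the everyday denormalization path
        HANDLERS[event["type"]](cur, event)
    conn.commit()
```

The key point the sketch captures is that the rebuild path reuses the exact same handlers as normal operation, so there is no second code path to keep in sync.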

Rebuilding your read models from scratch should produce exactly the same result as your everyday method of denormalizing events into the read models. If not, you've got a bug in your read-model denormalization code. The great thing here is that, from your message handler's perspective, there's no difference between an event being received and denormalized into the read model during regular/production scenarios and during read-model rebuild scenarios.

If you do encounter bugs you can easily debug by streaming/copying the production events to your local workstation, setting breakpoints in your handlers, and then running those events through your read model handling code.

Jonathan Oliver answered Dec 10 '22


I am somewhat new to CQRS, so this may not be the most advisable route (but iirc I picked it up from one of the CQRS/DDDD mailing lists).

We create a command and corresponding handler specific to the purpose, expected to be run once and then deprecated.

In the handler we use whatever mechanism is convenient, so in your case of adding a zip code field, we might run a one-off query that pulls the zip codes from another view model and populates the new column. We don't worry too much about architectural purity in these scenarios, as it is expected to be a one-time operation (Rob Conery's Massive has been used with success in these situations).
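A one-off backfill handler of that kind can be very small. The sketch below assumes the zip code already exists in some other view model table; every table and column name here is made up for illustration, and SQLite stands in for the read database:

```python
import sqlite3

def backfill_zip_codes(conn):
    """One-off handler: copy zip codes from an existing view model
    (hypothetical customer_address table) into the new column on
    customer_list, then commit. Run once, then deprecate."""
    conn.execute(
        """
        UPDATE customer_list
        SET zip_code = (
            SELECT zip_code FROM customer_address
            WHERE customer_address.customer_id = customer_list.id
        )
        WHERE zip_code IS NULL
        """
    )
    conn.commit()
```

This avoids a full event replay entirely when another read model already holds the data you need.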

quentin-starin answered Dec 10 '22


I don't yet have a production-ready app using CQRS with event sourcing, so this is just my experience from trying to build one.

1) Read Model rebuild. Yep, you basically have to rebuild the whole Read Model DB once something in it changes. And if there are lots of events, this may take a long time. So the Read Model rebuilding must be highly optimized (use event batching, etc.). I feel event sourcing fits best in cases where there is a high read-to-write ratio. So for some extremely volatile data, it may be wise not to store it as domain events. But then the question of storage capacity is also not that far away. In any case, you can apply CQRS to just the part of the system where it fits best (e.g., I probably wouldn't store a graphical image as part of an event).
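The event-batching optimization mentioned above can be as simple as committing once per batch instead of once per event, which bounds transaction size during a long rebuild. A minimal sketch (the `apply_event` callback and batch size are assumptions, not a prescribed design):

```python
import sqlite3  # used only by the usage example below

def rebuild_in_batches(conn, events, apply_event, batch_size=1000):
    """Replay events through apply_event, committing once per batch.

    Commits every batch_size events so a multi-million-event rebuild
    never holds one enormous open transaction.
    """
    cur = conn.cursor()
    count = 0
    for event in events:
        apply_event(cur, event)
        count += 1
        if count % batch_size == 0:
            conn.commit()
    conn.commit()  # flush the final partial batch
```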

2) Debugging. It is highly improbable that there is an error in event storing (it should be a framework concern), and it is always easy to check which events are in the store. As for commands producing the expected events, you should have tests here, and these tests will probably be the most valuable tests in the system. For denormalizers, you could also have tests, but I wouldn't bother writing tests for trivial denormalizers whose correctness can be seen by the naked eye. That said, I used the debugger a few times to find problems in some more complicated denormalizers; it wasn't much fun trying to determine which event made things go wrong.
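A denormalizer test can be tiny: one event in, one expected read-model row out. A sketch with a made-up dict-based denormalizer (any real one against a database table would follow the same shape):

```python
def denormalize_customer_created(rows, event):
    """Hypothetical denormalizer: projects a CustomerCreated event
    into a dict keyed by customer id, standing in for a table."""
    rows[event["id"]] = {
        "name": event["name"],
        "zip_code": event.get("zip_code"),
    }
    return rows

def test_customer_created_projects_zip_code():
    # One event in, assert on the resulting row.
    event = {"type": "CustomerCreated", "id": 7, "name": "Bob", "zip_code": "12345"}
    rows = denormalize_customer_created({}, event)
    assert rows[7] == {"name": "Bob", "zip_code": "12345"}
```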

driushkin answered Dec 10 '22