I'm trying to wrap my head around the whole CQRS/ES idea and am contemplating writing a proof of concept and a technical specification for how to implement it in our current application.
The problematic operations (in terms of how to map them to CQRS/ES) are: bulk-updating complex article data through a file import, where single rows in data files expand to article groups, articles, headers, units and properties; bulk-loading files that link buyer assortments to supplier assortments; and exporting parts of, or entire, assortments.
I've read somewhere (may have been the DDDCQRS Google Group) that the best way to model the article import BC (which reads Excel files or other grid files) would be to have a single line of imported data be an aggregate, and an entire import to be the aggregate root. That way, after parsing the file, all I would have to do is create an import aggregate, and for each line, add that line to the import. That would store events in the BC's event store, and publish events that the article management BC would subscribe to. Does this make sense?
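To make that idea concrete, here is a minimal sketch of what such an import aggregate might look like. Everything in it is hypothetical: the names (`Import`, `addLine`, `LineImported`, and so on) and the event shapes are assumptions for illustration, not taken from any particular framework or from the discussion referenced above.

```typescript
// Hypothetical domain events emitted by the import aggregate.
type ImportEvent =
  | { type: "ImportStarted"; importId: string; fileName: string }
  | { type: "LineImported"; importId: string; lineNumber: number; payload: Record<string, string> }
  | { type: "ImportCompleted"; importId: string; lineCount: number };

// The import is the aggregate root; each parsed file line is added to it.
class Import {
  private uncommitted: ImportEvent[] = [];
  private lineCount = 0;

  constructor(private readonly importId: string, fileName: string) {
    this.record({ type: "ImportStarted", importId, fileName });
  }

  // One event per parsed line; the article management BC subscribes to these.
  addLine(lineNumber: number, payload: Record<string, string>): void {
    this.record({ type: "LineImported", importId: this.importId, lineNumber, payload });
  }

  complete(): void {
    this.record({ type: "ImportCompleted", importId: this.importId, lineCount: this.lineCount });
  }

  private record(event: ImportEvent): void {
    if (event.type === "LineImported") this.lineCount++;
    this.uncommitted.push(event);
  }

  // Events to append to the import BC's event store and then publish.
  pullUncommittedEvents(): ImportEvent[] {
    const events = this.uncommitted;
    this.uncommitted = [];
    return events;
  }
}
```

After parsing the file, the flow would be: create an `Import`, call `addLine` for each row, call `complete`, then append the pulled events to the event store and publish them for the article management BC.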
In the current system, an import runs in a single, long-running transaction. "Long-running" should be read as between 5 and 40 minutes, depending on the amount of data imported and on the amount of data already present for a given user (because incoming data is compared with previously imported files and with current data). When the operation fails halfway through, currently the whole operation is rolled back. How does that work in CQRS/ES?
This has little to do with CQRS/ES. A very naive approach follows:
Whether there is an event-sourced or a state-based model behind all of that is of secondary importance, IMO.
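As one possible reading of such a naive approach (a sketch under assumed names, not anything the answer prescribes): process the file line by line, checkpoint progress after each line, and on failure either resume from the checkpoint or publish a compensating signal, rather than holding a 5-40 minute transaction open and rolling it all back. All identifiers below (`Checkpoint`, `runImport`, `onAbort`) are hypothetical.

```typescript
// Hypothetical naive import runner: no long-running transaction.
// Each line is processed and checkpointed independently; a failure is
// handled by resuming from the checkpoint or by compensating, not by
// rolling back the whole run.

interface Checkpoint {
  load(importId: string): Promise<number>;          // last processed line, -1 if none
  save(importId: string, lineNumber: number): Promise<void>;
}

type LineHandler = (lineNumber: number, line: string) => Promise<void>;

async function runImport(
  importId: string,
  lines: string[],
  handleLine: LineHandler,
  checkpoint: Checkpoint,
  onAbort: (importId: string, failedLine: number) => Promise<void>,
): Promise<void> {
  const resumeFrom = (await checkpoint.load(importId)) + 1;
  for (let i = resumeFrom; i < lines.length; i++) {
    try {
      await handleLine(i, lines[i]);        // e.g. dispatch a command per line
      await checkpoint.save(importId, i);   // progress survives a crash
    } catch (err) {
      // Instead of rolling everything back, record the failure so that
      // downstream consumers can compensate (e.g. discard partial data).
      await onAbort(importId, i);
      throw err;
    }
  }
}
```

In this framing, `handleLine` could just as well mutate a state-based model as append events to an event store; the batching, checkpointing, and compensation concerns are the same either way.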