Even though I have read intensely about transactional integrity in NEventStore, I cannot grasp how NEventStore would really scale when many instances of NEventStore are wired up.
To summarize my understanding: an event is persisted in a commit marked as undispatched, then published to the dispatchers, and finally marked as dispatched.
At the same time, whenever an NEventStore instance is wired up, it looks for undispatched events, dispatches them, and marks them as dispatched.
But then there must be a short window in which a newly wired-up event store sees undispatched events that are about to be dispatched by other instances. The new event store will dispatch those events a second time.
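The race I am describing can be shown with a toy sketch (this is not NEventStore's actual code; the names and the in-memory commit list are made up for illustration). Two store instances both scan for undispatched commits before either has marked the commit, so the same event is delivered twice:

```python
# Toy model of a commit log; "dispatched" is the flag each store instance
# checks on wire-up. None of this is the real NEventStore implementation.
commits = [{"id": 1, "dispatched": False}]
delivered = []

def scan_undispatched():
    """What each store instance does on wire-up: find pending commits."""
    return [c for c in commits if not c["dispatched"]]

# Instance A and a freshly wired-up instance B both scan during the race
# window, before either has marked the commit, so both see it as pending.
pending_a = scan_undispatched()
pending_b = scan_undispatched()

for c in pending_a:                  # instance A dispatches and marks
    delivered.append(("A", c["id"]))
    c["dispatched"] = True

for c in pending_b:                  # instance B dispatches the same commit again
    delivered.append(("B", c["id"]))
    c["dispatched"] = True

print(delivered)  # [('A', 1), ('B', 1)] -- one event, dispatched twice
```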
Think of this architecture:
Client -> Command Bus -> Command Handler -> EventStore persist -> Dispatch to Event Handlers
If we have many Command Handlers to handle our load, we would also be persisting many events. If we are frequently disposing of or creating Command Handlers, then many EventStores would be wired up, causing events that are already being dispatched to be dispatched again.
I understand that consumers of the dispatcher should be idempotent, so that is not my issue. My issue is whether we would be putting an unneeded amount of load on the consumers of the command handlers in a high-load situation.
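To be concrete about the idempotency I am assuming on the consumer side (a toy sketch with made-up names, not NEventStore code): the consumer remembers which event ids it has handled and ignores repeats. Note that this is exactly why the duplicate still costs something — it is delivered and checked, even though it is never applied.

```python
class IdempotentConsumer:
    """A minimal idempotent event handler: dedupes by event id."""

    def __init__(self):
        self.seen = set()     # ids of events already handled
        self.applied = []     # effects actually applied

    def handle(self, event_id, payload):
        if event_id in self.seen:
            return False      # duplicate from a re-dispatch: ignored, but it
                              # still cost a delivery and a lookup
        self.seen.add(event_id)
        self.applied.append(payload)
        return True

consumer = IdempotentConsumer()
consumer.handle(1, "AccountOpened")  # first delivery: applied
consumer.handle(1, "AccountOpened")  # re-dispatched duplicate: skipped
print(consumer.applied)              # ['AccountOpened']
```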
This question is really old, but since I just stumbled over it, I'll add my two cents: I don't think you're supposed to wire up a new instance of NEventStore for every instance of a command handler. The NEventStore objects are stateless, so you can use a single instance for your whole process (or AppDomain).
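As a sketch of what I mean (hypothetical names, not the actual NEventStore wire-up API): build the store once per process and hand the same instance to every command handler, instead of wiring up a fresh one per handler. Then there is only one undispatched-commit scan per process, no matter how many handlers come and go.

```python
import threading

class EventStore:
    """Stand-in for a wired-up event store instance (stateless, thread-safe)."""
    pass

_store = None
_store_lock = threading.Lock()

def get_store():
    """Lazily build the single per-process store instance."""
    global _store
    if _store is None:
        with _store_lock:
            if _store is None:          # double-checked under the lock
                _store = EventStore()   # the one "wire-up" for this process
    return _store

class CommandHandler:
    def __init__(self):
        self.store = get_store()  # shared instance, not a fresh wire-up

# Many handlers, one store: disposing/creating handlers never triggers
# another scan for undispatched commits.
handlers = [CommandHandler() for _ in range(5)]
print(all(h.store is handlers[0].store for h in handlers))  # True
```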
So, sure, if you have multiple processes, each of those will wire up a new NEventStore and your scenario might become reality. However, the effect will probably be very small and not hamper scalability too much.