I want to experiment with using Cassandra as an event store in an event sourcing application. My requirements for an event store are quite simple. The event 'schema' would be something like this:

- id: the aggregate root id
- seq_num: the sequence number of the event within the aggregate's stream
- data: the serialized event data
- timestamp: the time the event was created

I am completely new to Cassandra, so forgive me for my ignorance in what I'm about to write. I only have two queries that I'd ever want to run on this data:

1. Get all events for a given aggregate root id, ordered by seq_num
2. Get all events for a given aggregate root id with a seq_num greater than some value
My idea is to create a Cassandra table in CQL like this:
CREATE TABLE events (
    id uuid,
    seq_num int,
    data text,
    timestamp timestamp,
    PRIMARY KEY (id, seq_num)
);
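Appending an event would then be a single write, something like this (the uuid and JSON payload here are made up):

INSERT INTO events (id, seq_num, data, timestamp)
VALUES (5132b130-ae79-11e4-ab27-0800200c9a66, 1, '{"type": "AccountCreated"}', toTimestamp(now()));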
Does this seem like a sensible way to model the problem? And, importantly, does using a compound primary key allow me to efficiently perform the queries I specified? Remember that, given the use case, there could be a large number of events (with a different seq_num) for the same aggregate root id.
My specific concern is that the second query is going to be inefficient in some way (I'm thinking about secondary indexes here...)
Cassandra doesn't support a relational schema with foreign keys and join tables, so if you want to write a lot of complex join queries, it might not be the right database for you. If you want a database built specifically for this pattern, EventStoreDB's guaranteed writes, concurrency model, and granular stream APIs make it a strong choice for event-sourced systems, especially compared with databases originally built for other purposes; on top of that, it's open source. That said, Cassandra excels at storing time-series data, where old data does not need to be updated (log files from cloud infrastructure and apps are a classic example, since there is little need to change a log once it has been stored), and an append-only event stream fits that pattern well.
Your design seems to be well modeled in "Cassandra terms". The queries you need are indeed supported by "composite key" tables; you would have something like:

select * from events where id = 'id_event';
select * from events where id = 'id_event' and seq_num > NUMBER;

I do not think the second query is going to be inefficient; however, it may return a lot of rows. If that is a concern, you can cap the number of rows returned with the LIMIT keyword.
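For example, a capped version of the second query might look like this (the values are illustrative):

select * from events where id = 'id_event' and seq_num > 100 limit 500;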
Using composite keys seems like a good match for your specific requirements. "Secondary indexes" don't seem to bring much to the table... unless I'm missing something in your design/requirements.
HTH.
What you've got is good, except in the case of many events for a particular aggregate. One thing you could do is create static columns to hold "next" and "max_sequence": the static columns would hold the current max sequence for this partition and the "artificial id" of the next partition. You could then store, say, 100 or 1000 events per partition. What you've essentially done is bucket the events for an aggregate into multiple partitions. This means additional overhead for querying and storing, but at the same time protects against unbounded partition growth. You might even create a lookup table of partitions for an aggregate. It really depends on your use case and how "clever" you want it to be.
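A minimal sketch of that bucketed layout, assuming a composite partition key and purely illustrative names and bucket size, might look like this:

CREATE TABLE events_by_bucket (
    id uuid,                 -- aggregate root id
    bucket int,              -- artificial partition id, e.g. seq_num / 1000
    seq_num int,
    data text,
    timestamp timestamp,
    max_sequence int static, -- highest seq_num written to this partition
    next_bucket int static,  -- "artificial id" of the next partition, if any
    PRIMARY KEY ((id, bucket), seq_num)
);

Replaying an aggregate then becomes a walk across buckets: read bucket 0, and if next_bucket is set, repeat the query with that bucket until it comes back null.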