
Storing time ranges in cassandra

Tags:

cassandra

I'm looking for a good way to store data associated with a time range, in order to be able to efficiently retrieve it later.

Each entry of data can be simplified as (start time, end time, value). I will need to later retrieve all the entries which fall inside a (x, y) range. In SQL, the query would be something like

SELECT value FROM data WHERE starttime >= x AND endtime <= y

Can you suggest a structure for the data in Cassandra which would allow me to perform such queries efficiently?

Flavio asked Jan 12 '11 09:01

1 Answer

This is an oddly difficult thing to model efficiently.

I think using Cassandra's secondary indexes (along with a dummy indexed column, which is unfortunately still needed at the moment) is your best option. Use one row per event with at least three columns: 'start', 'end', and 'dummy', and create a secondary index on each of them. The first two can be LongType and the last can be BytesType. See this post on using secondary indexes for more details. Since a secondary index query must include an EQ expression on at least one column (the unfortunate requirement I mentioned), the EQ will be on 'dummy', which can always be set to 0. This means the EQ index expression matches every row and is essentially a no-op. You can store the rest of the event data in the row alongside 'start', 'end', and 'dummy'.
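As a sketch, the column family described above might be defined like this in cassandra-cli (the column family name 'entries', the 'value' column, and the keyspace are placeholders, not specified in the answer):

```
create column family entries
    with comparator = UTF8Type
    and column_metadata = [
        {column_name: start, validation_class: LongType,  index_type: KEYS},
        {column_name: end,   validation_class: LongType,  index_type: KEYS},
        {column_name: dummy, validation_class: BytesType, index_type: KEYS},
        {column_name: value, validation_class: UTF8Type}
    ];
```

The three KEYS entries create the secondary indexes on 'start', 'end', and 'dummy'; 'value' is just stored alongside them, unindexed.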

In pycassa, a Python Cassandra client, your query would look like this:

import pycassa
from pycassa.index import (create_index_expression,
                           create_index_clause, EQ, GT, LT)

pool = pycassa.ConnectionPool('MyKeyspace')      # your keyspace
entries = pycassa.ColumnFamily(pool, 'entries')  # your column family

start_time = 12312312000
end_time = 12312312300
start_exp = create_index_expression('start', start_time, GT)
end_exp = create_index_expression('end', end_time, LT)
# The mandatory EQ expression; 'dummy' is always 0, so this matches every row
dummy_exp = create_index_expression('dummy', 0, EQ)
clause = create_index_clause([start_exp, end_exp, dummy_exp], count=1000)
for key, columns in entries.get_indexed_slices(clause):
    # do stuff with key and columns
There should be something similar in other clients.
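To make the matching semantics concrete, here is a plain-Python sketch (no Cassandra required) of the predicate the secondary-index query above implements; the sample data and the `inside` helper are invented for illustration:

```python
# Sample events as (start, end, value) records
events = [
    {"start": 100, "end": 200, "value": "a"},
    {"start": 150, "end": 250, "value": "b"},
    {"start": 300, "end": 400, "value": "c"},
]

def inside(event, x, y):
    # Mirrors the GT expression on 'start' and the LT expression on 'end':
    # keep events that fall strictly inside the (x, y) range
    return event["start"] > x and event["end"] < y

matches = [e["value"] for e in events if inside(e, 90, 260)]
# matches == ["a", "b"]  ("c" ends after 260, so it is excluded)
```

The EQ expression on 'dummy' has no counterpart here precisely because it filters nothing out.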

The alternative that I considered first involved OrderPreservingPartitioner, which is almost always a Bad Thing. For the index, you would use the start time as the row key and the finish time as the column name. You could then perform a range slice with start_key=start_time and column_finish=finish_time. This would scan every row after the start time and only return those with columns before the finish_time. Not very efficient, and you have to do a big multiget, etc. The built-in secondary index approach is better because nodes will only index local data and most of the boilerplate indexing code is handled for you.
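The layout of that rejected alternative can be sketched in plain Python as well (the `rows` data and `range_slice` helper are invented for illustration, assuming row key = start time and column name = end time under an order-preserving partitioner):

```python
import bisect

# Rows sorted by key, as OrderPreservingPartitioner would store them:
# row key = start time, columns keyed by end time
rows = [
    (100, {200: "a"}),
    (150, {250: "b"}),
    (300, {400: "c"}),
]

def range_slice(rows, start_key, column_finish):
    results = []
    keys = [k for k, _ in rows]
    # Scan every row from start_key onward ...
    for start, cols in rows[bisect.bisect_left(keys, start_key):]:
        # ... keeping only columns (end times) before column_finish
        results.extend(v for end, v in cols.items() if end < column_finish)
    return results

range_slice(rows, 90, 260)  # ["a", "b"]
```

This shows the inefficiency the answer points out: the scan touches every row after the start time, even those whose columns are all discarded.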

Tyler Hobbs answered Oct 04 '22 05:10