I've got a little problem that's interesting (at least to me) to solve (and, no, it is not homework). It is equivalent to this: you need to determine the "sessions" during which a user was in front of his computer, and each session's start and end time.
You get the time at which each user interaction was made, plus a maximum period of inactivity. If a time greater than or equal to the period of inactivity elapsed between two user inputs, then they belong to different sessions.
Basically, the inputs I get look like this (they aren't sorted, and I'd rather not sort them before determining the sessions):
06:38
07:12
06:17
09:00
06:49
07:37
08:45
09:51
08:29
And, say, a period of inactivity of 30 minutes.
Then I need to find three sessions:
[06:17...07:37]
[08:29...09:00]
[09:51...09:51]
If the period of inactivity is set to 12 hours, then I'd just find one big session:
[06:17...09:51]
How can I solve this simply?
There's a minimum valid period of inactivity, which will be about 15 minutes.
The reason I'd rather not sort beforehand is that I'll get a lot of data, and just storing it all in memory would be problematic. However, most of the data will belong to the same sessions (there will be relatively few sessions compared to the amount of data, maybe something like thousands to 1, i.e. thousands of user inputs per session).
So far I am thinking about reading an input (say 06:38), defining an interval [data-max_inactivity...data+max_inactivity] and, for each new input, using a dichotomic (log n) search to see whether it falls in a known interval, creating a new interval otherwise.
I'd repeat this for every input, making the solution n log n AFAICT. The good thing is that it wouldn't use too much memory, since it only creates intervals (and most inputs will fall into a known interval).
Also, every time an input falls in a known interval, I'd have to adjust the interval's lower or upper bound and then see if I need to "merge" it with the next interval. For example (for a max period of inactivity of 30 minutes):
[06:00...07:00] (because I got 06:30)
[06:00...07:00][07:45...08:45] (because I later got 08:15)
[06:00...08:45] (because I just received 07:20)
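To make the idea concrete, here is a rough sketch of what I have in mind. Java, the TreeMap, the SessionTracker name and timestamps expressed as minutes since midnight are just my illustration; nothing is settled yet:

    import java.util.Map;
    import java.util.TreeMap;

    // Sketch only: sessions kept as start -> end (minutes since midnight) in a TreeMap.
    class SessionTracker {
        private final int maxInactivity;                                    // e.g. 30 minutes
        private final TreeMap<Integer, Integer> sessions = new TreeMap<>(); // start -> end

        SessionTracker(int maxInactivity) { this.maxInactivity = maxInactivity; }

        void add(int t) {
            Map.Entry<Integer, Integer> prev = sessions.floorEntry(t);   // session starting at or before t
            Map.Entry<Integer, Integer> next = sessions.ceilingEntry(t); // session starting at or after t

            boolean joinsPrev = prev != null && t - prev.getValue() < maxInactivity;
            boolean joinsNext = next != null && next.getKey() - t < maxInactivity;

            if (joinsPrev && joinsNext && !prev.getKey().equals(next.getKey())) {
                // t bridges the gap between two sessions: merge them into one
                int end = next.getValue();
                sessions.remove(next.getKey());
                sessions.put(prev.getKey(), Math.max(prev.getValue(), end));
            } else if (joinsPrev) {                    // extend the earlier session's end
                sessions.put(prev.getKey(), Math.max(prev.getValue(), t));
            } else if (joinsNext) {                    // extend the later session's start
                int end = next.getValue();
                sessions.remove(next.getKey());
                sessions.put(t, end);
            } else {                                   // isolated input: new one-point session
                sessions.put(t, t);
            }
        }
    }

Feeding the nine times above (converted to minutes since midnight) with maxInactivity = 30 should leave three entries in the map, one per session.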
I don't know if the description is very clear, but that is what I need to do.
Does such a problem have a name? How would you go about solving it?
EDIT
I'm very interested in knowing which kind of data structure I should use if I solve it the way I plan to. I need both log n search and insertion/merging ability.
Maximum Delay
If the log entries have a "maximum delay" (e.g. with a maximum delay of 2 hours, an 8:12 event will never be listed after a 10:12 event), you could look ahead and sort.
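One way to read "look ahead and sort" is a bounded reorder buffer: hold events in a min-heap and release an event once a newer event guarantees that nothing earlier can still arrive. This is only a sketch of that idea; ReorderBuffer, maxDelay and the IntConsumer downstream are my own names, and times are assumed to be minutes since midnight:

    import java.util.PriorityQueue;
    import java.util.function.IntConsumer;

    // Sketch: emits timestamps in sorted order, assuming an event never appears
    // in the log more than maxDelay after an event with a later timestamp.
    class ReorderBuffer {
        private final int maxDelay;                          // e.g. 120 minutes
        private final PriorityQueue<Integer> pending = new PriorityQueue<>();
        private final IntConsumer downstream;                // receives sorted timestamps

        ReorderBuffer(int maxDelay, IntConsumer downstream) {
            this.maxDelay = maxDelay;
            this.downstream = downstream;
        }

        void add(int t) {
            pending.add(t);
            // Anything at least maxDelay older than t can no longer be preceded
            // by an unseen earlier event, so it is safe to release in order.
            while (!pending.isEmpty() && pending.peek() <= t - maxDelay) {
                downstream.accept(pending.poll());
            }
        }

        void finish() {                                      // flush at end of input
            while (!pending.isEmpty()) downstream.accept(pending.poll());
        }
    }

Downstream, the sessions can then be found with a single linear scan, cutting whenever the gap to the previous timestamp reaches the period of inactivity.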
Do Sort
Alternatively, I'd first try sorting - at least to make sure it doesn't work. A timestamp can reasonably be stored in 8 bytes (4 would even do for your purposes; you could fit 250 million of them into a gigabyte). Quicksort might not be the best choice here as it has low locality. Insertion sort is almost perfect for almost-sorted data (though it has bad locality, too). Alternatively, quick-sorting chunk-wise and then merging the chunks with a merge sort should do, even though it increases memory requirements.
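As a sketch of the chunk-wise variant (the in-memory int[] chunks here stand in for what would really be sorted runs on disk; ChunkSorter and chunkSize are my own names):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Comparator;
    import java.util.List;
    import java.util.PriorityQueue;

    // Sketch: sort fixed-size chunks independently, then k-way merge the sorted runs.
    class ChunkSorter {
        static int[] sortInChunks(int[] input, int chunkSize) {
            List<int[]> runs = new ArrayList<>();
            for (int from = 0; from < input.length; from += chunkSize) {
                int to = Math.min(from + chunkSize, input.length);
                int[] run = Arrays.copyOfRange(input, from, to);
                Arrays.sort(run);                            // sort one chunk in memory
                runs.add(run);
            }
            // k-way merge; each heap entry is { value, run index, position within run }.
            PriorityQueue<int[]> heap = new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
            for (int r = 0; r < runs.size(); r++) {
                if (runs.get(r).length > 0) heap.add(new int[] { runs.get(r)[0], r, 0 });
            }
            int[] out = new int[input.length];
            int i = 0;
            while (!heap.isEmpty()) {
                int[] head = heap.poll();
                out[i++] = head[0];
                int r = head[1], pos = head[2] + 1;
                if (pos < runs.get(r).length) heap.add(new int[] { runs.get(r)[pos], r, pos });
            }
            return out;
        }
    }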
Squash and conquer
Alternatively, you can use the following strategy: if your log files have the kind of "temporal locality" your question suggests, a single "squashing" pass should already reduce the data enough to allow a "full" sort.
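A minimal sketch of what such a squashing pass could look like (the Interval record and the squash method are my own illustration; the resulting candidates would then be sorted by start and merged wherever two of them overlap or are separated by less than max_inactivity):

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: collapse runs of nearby timestamps into candidate intervals in one pass.
    class Squash {
        record Interval(int start, int end) {}

        static List<Interval> squash(Iterable<Integer> times, int maxInactivity) {
            List<Interval> candidates = new ArrayList<>();
            Integer lo = null, hi = null;                    // bounds of the current run
            for (int t : times) {
                if (lo == null) { lo = hi = t; continue; }
                if (t > lo - maxInactivity && t < hi + maxInactivity) {
                    lo = Math.min(lo, t);                    // t is close: extend the run
                    hi = Math.max(hi, t);
                } else {                                     // t is far away: start a new run
                    candidates.add(new Interval(lo, hi));
                    lo = hi = t;
                }
            }
            if (lo != null) candidates.add(new Interval(lo, hi));
            return candidates;                               // far fewer than the raw inputs
        }
    }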
[edit] This site demonstrates an "optimized quicksort with insertion sort finish" that's quite good on almost-sorted data. So does this guy's std::sort.
You are asking for an online algorithm, i.e. one that can calculate a new set of sessions incrementally for each new input time.
Concerning the choice of data structure for the current set of sessions, you can use a balanced binary search tree. Each session is represented by a pair (start, end) of start time and end time. The nodes of the search tree are ordered by their start time. Since your sessions are separated by at least max_inactivity, i.e. no two sessions overlap, this will ensure that the end times are ordered as well. In other words, ordering by start times will already order the sessions consecutively.
Here is some pseudo-code for insertion. For notational convenience, we pretend that sessions is an array, though it's actually a binary search tree.
    insert(time, sessions) = do
        i <- find index such that
             sessions[i].start <= time && time < sessions[i+1].start
        if (sessions[i].end + max_inactivity > time)
            merge time into sessions[i]
        else if (time > sessions[i+1].start - max_inactivity)
            merge time into sessions[i+1]
        else
            insert (time, time) into sessions
        if (sessions[i] and sessions[i+1] are now within max_inactivity of each other)
            merge sessions[i] and sessions[i+1]
The merge operation can be implemented by deleting and inserting elements into the binary search tree.
This algorithm will take time O(n log m) where m is the maximum number of sessions, which you said is rather small.
Granted, implementing a balanced binary search tree is no easy task, depending on the programming language. The key here is that you have to split the tree according to a key, and not every ready-made library supports that operation. For Java, I would use the TreeSet<E> class; as said, the element type E is a single session given by start and end time. Its floor() and ceiling() methods will retrieve the sessions I've denoted with sessions[i] and sessions[i+1] in my pseudo-code.
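Here is a sketch of how that could look in Java, assuming times are given as minutes since midnight; the Sessions class, the Session type and its field names are my own, only the use of TreeSet with floor() and ceiling() follows the description above:

    import java.util.Comparator;
    import java.util.TreeSet;

    // Sketch of the insertion algorithm above, backed by a TreeSet ordered by start time.
    class Sessions {
        static final class Session {
            int start, end;                                  // minutes since midnight
            Session(int start, int end) { this.start = start; this.end = end; }
        }

        private final int maxInactivity;
        private final TreeSet<Session> sessions =
                new TreeSet<>(Comparator.comparingInt((Session s) -> s.start));

        Sessions(int maxInactivity) { this.maxInactivity = maxInactivity; }

        void insert(int time) {
            Session probe = new Session(time, time);
            Session before = sessions.floor(probe);          // sessions[i]:   start <= time
            Session after  = sessions.ceiling(probe);        // sessions[i+1]: start >= time

            boolean joinsBefore = before != null && time - before.end < maxInactivity;
            boolean joinsAfter  = after  != null && after.start - time < maxInactivity;

            if (joinsBefore && joinsAfter && before != after) {
                sessions.remove(after);                      // time bridges the gap: merge
                before.end = Math.max(before.end, after.end);
            } else if (joinsBefore) {
                before.end = Math.max(before.end, time);     // safe: end is not the sort key
            } else if (joinsAfter) {
                sessions.remove(after);                      // start is the sort key,
                after.start = time;                          // so remove, change, re-insert
                sessions.add(after);
            } else {
                sessions.add(probe);                         // isolated time: new session
            }
        }

        TreeSet<Session> all() { return sessions; }
    }

Inserting the nine example times with a 30-minute period of inactivity should leave three sessions in the set, and each insertion costs O(log m).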
I am not aware of a name for your problem or a name for the solution that you found. But your solution is (more or less) the solution I would propose. I think it's the best solution for that kind of problem.
If your data is at least somewhat ordered, you might find a slightly better solution by taking this ordering into account. E.g. your data could be ordered by date but not by time; then you could process each date separately.