
Data in different resolutions

I have two tables, and records are continuously inserted into these tables from an outside source. Let's say these tables keep statistics of user interactions. When a user clicks a button, the details of that click (the user, time of click, etc.) are written to one of the tables. When a user mouses over that button, a record with details is added to the other table.

If there are lots of users constantly interacting with the system, there will be lots of data generated, and those tables will grow enormously.

When I want to look at the data, I want to see it in hourly or daily resolution.

Is there a way, or a best practice, to continuously summarize the data incrementally (as it is collected) at the desired resolution?

Or is there a better approach to this kind of problem?

PS: What I have found so far is that ETL tools like Talend could make life easier.

Update: I am using MySQL at the moment, but I am wondering about best practices regardless of the DB, environment, etc.

Asked Jan 07 '10 by nimcap

2 Answers

The normal way to do this on a low-latency data warehouse application is to have a partitioned table with a leading partition containing something that can be updated quickly (i.e. without having to recalculate aggregates on the fly) but with trailing partitions backfilled with the aggregates. In other words, the leading partition can use a different storage scheme to the trailing partitions.

Most commercial and some open-source RDBMS platforms (e.g. PostgreSQL) can support partitioned tables, which can be used to do this type of thing one way or another. How you populate the database from your logs is left as an exercise for the reader.
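
As a minimal sketch (assuming a recent PostgreSQL with declarative partitioning; the table and column names are invented for illustration, not taken from the question), the layout might look something like this:

    -- Hypothetical click-log table, range-partitioned by day.
    CREATE TABLE click_log (
        user_id     bigint      NOT NULL,
        clicked_at  timestamptz NOT NULL,
        button_id   int         NOT NULL
    ) PARTITION BY RANGE (clicked_at);

    -- Leading partition: receives the live inserts.
    CREATE TABLE click_log_2010_01_08 PARTITION OF click_log
        FOR VALUES FROM ('2010-01-08') TO ('2010-01-09');

    -- Trailing partition: its window has closed, so it can be
    -- indexed and summarised at leisure.
    CREATE TABLE click_log_2010_01_07 PARTITION OF click_log
        FOR VALUES FROM ('2010-01-07') TO ('2010-01-08');

The point is that the trailing partitions can be reorganised without touching the partition that is still taking inserts.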

Basically, the structure of this type of system goes like:

  • You have a table partitioned on some sort of date or date-time value, partitioned by hour, day or whatever grain seems appropriate. The log entries get appended to this table.

  • As the time window slides off a partition, a periodic job indexes or summarises it and converts it into its 'frozen' state. For example, a job on Oracle may create bitmap indexes on that partition or update a materialized view to include summary data for that partition.

  • Later on, you can drop old data, summarize it or merge partitions together.

  • As time goes on, the periodic job backfills behind the leading-edge partition. The historical data is converted to a format that lends itself to performant statistical queries, while the leading-edge partition is kept easy to update quickly. As this partition doesn't hold much data, querying across the whole data set is relatively fast. (A sketch of the summarisation step follows this list.)
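
As a rough sketch of that freeze step (continuing with the invented names above; the scheduling mechanism - cron, pgAgent, whatever you use - is left out), the periodic job could roll a closed partition up into an hourly summary table and index it:

    -- Illustrative hourly summary table (names are placeholders).
    CREATE TABLE click_summary_hourly (
        hour_start  timestamptz NOT NULL,
        button_id   int         NOT NULL,
        click_count bigint      NOT NULL,
        PRIMARY KEY (hour_start, button_id)
    );

    -- Periodic job: summarise a partition once its time window has closed.
    INSERT INTO click_summary_hourly (hour_start, button_id, click_count)
    SELECT date_trunc('hour', clicked_at), button_id, count(*)
    FROM   click_log_2010_01_07          -- the now-frozen trailing partition
    GROUP  BY date_trunc('hour', clicked_at), button_id;

    -- Optionally index the frozen partition for fast historical queries.
    CREATE INDEX ON click_log_2010_01_07 (button_id, clicked_at);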

The exact nature of this process varies between DBMS platforms.

For example, table partitioning on SQL Server is not all that good, but this can be done with Analysis Services (an OLAP server that Microsoft bundles with SQL Server). This is done by configuring the leading partition as pure ROLAP (the OLAP server simply issues a query against the underlying database) and then rebuilding the trailing partitions as MOLAP (the OLAP server constructs its own specialised data structures including persistent summaries known as 'aggregations'). Analysis Services can do this completely transparently to the user. It can rebuild a partition in the background while the old ROLAP one is still visible to the user. Once the build is finished, it swaps in the new partition; the cube is available the whole time with no interruption of service to the user.

Oracle allows partition structures to be updated independently, so indexes can be constructed, or a partition built on a materialized view. With query rewrite, the query optimiser in Oracle can work out that aggregate figures calculated from a base fact table can be obtained from a materialized view. The query will read the aggregate figures from the materialized view where partitions are available and from the leading-edge partition where they are not.
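
A hedged sketch of what that could look like on Oracle (object names invented; FAST refresh would additionally need materialized view logs, so a simple COMPLETE/ON DEMAND refresh is shown here):

    -- Illustrative materialized view with query rewrite enabled.
    CREATE MATERIALIZED VIEW click_summary_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      ENABLE QUERY REWRITE
    AS
    SELECT TRUNC(clicked_at, 'HH') AS hour_start,
           button_id,
           COUNT(*)                AS click_count
    FROM   click_log
    GROUP  BY TRUNC(clicked_at, 'HH'), button_id;

With query rewrite enabled, a GROUP BY over click_log that matches this aggregation can be answered from the materialized view where its data is current, which is the behaviour described above.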

PostgreSQL may be able to do something similar, but I've never looked into implementing this type of system on it.

If you can live with periodic outages, something similar can be done explicitly by doing the summarisation and setting up a view over the leading and trailing data. This allows this type of analysis to be done on a system that doesn't support partitioning transparently. However, the system will have a transient outage while the view is rebuilt, so you could not really do this during business hours - most often it would run overnight.
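
For instance (hypothetical names again, and written as MySQL since that is what the questioner is on), the explicit version could be a nightly-maintained summary table plus a view that unions it with an on-the-fly aggregation of the still-growing current data:

    -- Pre-aggregated history plus live data for the current day.
    CREATE OR REPLACE VIEW click_stats_hourly AS
    SELECT hour_start, button_id, click_count
    FROM   click_summary_hourly              -- filled by the nightly job
    WHERE  hour_start < CURRENT_DATE
    UNION ALL
    SELECT DATE_FORMAT(clicked_at, '%Y-%m-%d %H:00:00') AS hour_start,
           button_id,
           COUNT(*) AS click_count
    FROM   click_log                         -- raw rows not yet summarised
    WHERE  clicked_at >= CURRENT_DATE
    GROUP  BY DATE_FORMAT(clicked_at, '%Y-%m-%d %H:00:00'), button_id;

Rebuilding the summary table and swapping in the new view definition is where the transient outage comes in.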

Edit: Depending on the format of the log files or what logging options are available to you, there are various ways to load the data into the system. Some options are:

  • Write a script using your favourite programming language that reads the data, parses out the relevant bits and inserts it into the database. This could run fairly often but you have to have some way of keeping track of where you are in the file. Be careful of locking, especially on Windows. Default file locking semantics on Unix/Linux allow you to do this (this is how tail -f works) but the default behaviour on Windows is different; both systems would have to be written to play nicely with each other.

  • On a unix-oid system you could write your logs to a pipe and have a process similar to the one above reading from the pipe. This would have the lowest latency of all, but failures in the reader could block your application.

  • Write a logging interface for your application that directly populates the database, rather than writing out log files.

  • Use the bulk load API for the database (most if not all have this type of API available) and load the logging data in batches. Write a similar program to the first option, but use the bulk-load API. This would use fewer resources than populating it row-by-row, but has more overhead to set up the bulk loads. It would be suitable for a less frequent load (perhaps hourly or daily) and would place less strain on the system overall (see the sketch after this list).
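
As an illustration of the bulk-load option in MySQL (the file path, column layout and names are assumptions for illustration), a whole batch can be loaded in one statement:

    -- Bulk-load a batch of click records from a tab-separated log extract.
    LOAD DATA LOCAL INFILE '/var/log/app/clicks_2010-01-07_13.tsv'
    INTO TABLE click_log
    FIELDS TERMINATED BY '\t'
    LINES TERMINATED BY '\n'
    (user_id, clicked_at, button_id);

The equivalents elsewhere are COPY on PostgreSQL, BULK INSERT on SQL Server and SQL*Loader on Oracle.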

In most of these scenarios, keeping track of where you've been becomes a problem. Polling the file to spot changes might be infeasibly expensive, so you may need to set the logger up so that it works in a way that plays nicely with your log reader.

  • One option would be to change the logger so it starts writing to a different file every period (say every few minutes). Have your log reader start periodically and load any new files that it hasn't already processed, reading only the older files that are no longer being written. For this to work, the naming scheme for the files should be based on the time so the reader knows which file to pick up. Dealing with files still in use by the application is more fiddly (you would then need to keep track of how much has been read), so you would want to read files only up to the last period.

  • Another option is to move the file and then read it. This works best on filesystems that behave like Unix ones, but should work on NTFS. You move the file, then read it at leisure. However, it requires the logger to open the file in create/append mode, write to it and then close it - not keep it open and locked. This is definitely Unix behaviour - the move operation has to be atomic. On Windows you may really have to stand over the logger to make this work.

Answered Oct 05 '22 by ConcernedOfTunbridgeWells


Take a look at RRDTool. It's a round-robin database. You define the metrics you want to capture and also the resolution at which you store them.

For example, you can specify that for the last hour you keep every second's worth of information; for the past 24 hours, every minute; for the past week, every hour; and so on.

It's widely used to gather stats in systems such as Ganglia and Cacti.
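
RRDTool handles the downsampling and expiry for you. Purely for comparison with the summary-table approach in the other answer, the same multi-resolution retention idea expressed over hypothetical SQL summary tables would boil down to rules like:

    -- Not RRDTool itself - just the same retention idea sketched in SQL
    -- (table and column names are invented for illustration).
    DELETE FROM clicks_per_second WHERE bucket_start < NOW() - INTERVAL 1 HOUR;
    DELETE FROM clicks_per_minute WHERE bucket_start < NOW() - INTERVAL 24 HOUR;
    DELETE FROM clicks_per_hour   WHERE bucket_start < NOW() - INTERVAL 7 DAY;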

Answered Oct 05 '22 by Robert Christie