 

Run analytics on huge MySQL database

I have a MySQL database with a few (five, to be precise) huge tables. It is essentially a star-schema data warehouse. The table sizes range from 700GB (the fact table) down to 1GB, and the whole database runs to about 1 terabyte. I have now been given the task of running analytics on these tables, which might even include joins. A simple analytical query on this database would be "find the number of smokers per state and display it in descending order". This requirement could be translated into a simple query like:

select state, count(smokingStatus) as smokers
from abc
where smokingStatus = 'current smoker'
group by state
order by smokers desc;

This query (and many others of the same nature) takes a very long time to execute on this database - on the order of tens of hours.

This database is also heavily used for inserts, which means that every few minutes thousands of rows are added.

In such a scenario, how can I tackle this querying problem? I have looked into Cassandra, which seemed easy to implement, but I am not sure it would be as easy for running analytical queries on the database, especially when I have to use WHERE clauses and GROUP BY constructs.

I have also looked into Hadoop, but I am not sure how I would implement RDBMS-type queries on it, and I am not sure I want to invest right away in at least three machines for the name node, ZooKeeper and data nodes! Above all, our company prefers Windows-based solutions.

I have also thought of pre-computing all the data into simpler summary tables, but that limits my ability to run different kinds of queries.

Are there any other ideas which I can implement?

EDIT

Following is the MySQL environment setup:

1) master-slave setup
2) master for inserts/updates
3) slave for reads and running stored procedures
4) all tables are InnoDB with file-per-table
5) indexes on string as well as int columns

Pre-calculating values is an option, but the requirements for this kind of ad-hoc aggregated value keep changing.

asked Mar 19 '12 by Sap




2 Answers

Looking at this from the position of attempting to make MySQL work better rather than positing an entirely new architectural system:

Firstly, verify what's really happening. EXPLAIN the queries which are causing issues, rather than guessing what's going on.
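
For instance, using the query from the question (same table and column names), you would run:

EXPLAIN
select state, count(smokingStatus) as smokers
from abc
where smokingStatus = 'current smoker'
group by state
order by smokers desc;

A "type: ALL" row in the output means a full table scan - on a 700GB fact table, that alone can account for runtimes measured in hours.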

Having said that, I'm going to guess as to what's going on since I don't have the query plans. I'm guessing that (a) your indexes aren't being used correctly and you're getting a bunch of avoidable table scans, (b) your DB servers are tuned for OLTP, not analytical queries, (c) writing data while reading is causing things to slow down greatly, (d) working with strings just sucks and (e) you've got some inefficient queries with horrible joins (everyone has some of these).

To improve things, I'd investigate the following (in roughly this order):

  • Check the query plans, make sure the existing indexes are being used correctly - look at the table scans, make sure the queries actually make sense.

  • Move the analytical queries off the OLTP system - the tunings required for fast inserts and short queries are very different to those for the sorts of queries which potentially read most of a large table. This might mean having another analytic-only slave, with a different config (and possibly table types - I'm not sure what the state of the art with MySQL is right now).

  • Move the strings out of the fact table - rather than having the smoking status column with string values of (say) 'current smoker', 'recently quit', 'quit 1+ years', 'never smoked', push these values out to another table, and keep only integer keys in the fact table (this will help the sizes of the indexes too) - see the sketch after this list.

  • Stop the tables from being updated while the queries are running - if the indexes are moving while the query is running I can't see good things happening. It's (luckily) been a long time since I cared about MySQL replication, so I can't remember if you can batch up the writes to the analytical query slave without too much drama - one simple way to do it is sketched after this list.

  • If you get to this point without solving the performance issues, then it's time to think about moving off MySQL. I'd look at Infobright first - it's open source/$$ & based on MySQL, so it's probably the easiest to put into your existing system (make sure the data is going to the Infobright DB, then point your analytical queries at the Infobright server, keep the rest of the system as it is, job done) - or at Vertica, if it ever releases its Community Edition. Hadoop+Hive has a lot of moving parts - it's pretty cool (and great on the resume), but if it's only going to be used for the analytic portion of your system it may take more care & feeding than other options.
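
To sketch the "strings out of the fact table" point (the smoking_status table and smoking_status_id column are made-up names for illustration; only the status values come from the question):

-- small dimension table holding each status string exactly once
create table smoking_status (
    id          tinyint unsigned not null primary key,
    description varchar(32) not null
) engine=InnoDB;

insert into smoking_status values
    (1, 'current smoker'), (2, 'recently quit'),
    (3, 'quit 1+ years'),  (4, 'never smoked');

-- the fact table then carries a 1-byte smoking_status_id column instead of
-- the string, and the original query becomes a join against the tiny table:
select f.state, count(*) as smokers
from abc f
join smoking_status s on s.id = f.smoking_status_id
where s.description = 'current smoker'
group by f.state
order by smokers desc;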
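
And for the "stop the tables from being updated" point, one low-drama approach on a standard master-slave setup is to pause just the SQL applier thread on the analytical slave - the slave keeps receiving the binlog but stops applying it, so the indexes hold still while you read:

-- on the analytical slave, before kicking off the long queries:
STOP SLAVE SQL_THREAD;

-- ... run the analytical queries ...

-- then let the slave catch up on the buffered writes:
START SLAVE SQL_THREAD;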

answered Sep 24 '22 by Jamie


1 TB is not that big. MySQL should be able to handle that. At the very least, simple queries like that shouldn't take hours! I can't be very helpful without knowing the larger context, but I can suggest some questions that you might ask yourself, mostly related to how you use your data:

  • Is there a way you can separate the reads and writes? How many reads do you do per day and how many writes? Can you live with some lag, e.g. write to a new table each day and merge it into the existing table at the end of the day?

  • What are most of your queries like? Are they mostly aggregation queries? Can you do some partial aggregation beforehand? Can you pre-calculate the number of new smokers every day (see the sketch after this list)?

  • Can you use Hadoop for the aggregation process above? Hadoop is kinda good at that stuff. Basically, use Hadoop just for daily or batch processing and store the results in the DB.

  • On the DB side, are you using InnoDB or MyISAM? Are the indexes on string columns? Can you convert them to ints, etc.?
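
For example, the daily partial aggregation idea above could look something like this (smokers_by_state_daily is a hypothetical summary table name; the fact table and column come from the question):

-- one-time setup of the summary table
create table if not exists smokers_by_state_daily (
    snapshot_date date not null,
    state         varchar(32) not null,
    smokers       int unsigned not null,
    primary key (snapshot_date, state)
) engine=InnoDB;

-- run once a day (cron job or MySQL event)
insert into smokers_by_state_daily
select curdate(), state, count(*)
from abc
where smokingStatus = 'current smoker'
group by state;

-- ad-hoc reporting then hits the small summary table, not the 700GB fact table:
select state, smokers
from smokers_by_state_daily
where snapshot_date = curdate()
order by smokers desc;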

Hope that helps

answered Sep 24 '22 by Hari Menon