
Pulling data from MySQL into Hadoop

Tags: mysql, hadoop

I'm just getting started with learning Hadoop, and I'm wondering the following: suppose I have a bunch of large MySQL production tables that I want to analyze.

  1. It seems like I have to dump all the tables into text files, in order to bring them into the Hadoop filesystem -- is this correct, or is there some way that Hive or Pig or whatever can access the data from MySQL directly?
  2. If I'm dumping all the production tables into text files, do I need to worry about affecting production performance during the dump? (Does it depend on which storage engine the tables are using? If so, what should I do?)
  3. Is it better to dump each table into a single file, or to split each table into 64 MB (or whatever my block size is) files?
asked Jun 19 '10 by grautur

People also ask

Which command import data to Hadoop from a MySQL database?

You can use Sqoop to import data from a relational database management system (RDBMS) such as MySQL or Oracle or a mainframe into the Hadoop Distributed File System (HDFS), transform the data in Hadoop MapReduce, and then export the data back into an RDBMS.

Does Hadoop use MySQL?

For real-time data integration, you can use MySQL Applier for Hadoop. With the MySQL Applier, Hadoop/Hive is integrated as if it were an additional MySQL slave: the Applier reads binlog events from MySQL and "applies" them to your Hive table.


2 Answers

Importing data from MySQL can be done very easily. I recommend using Cloudera's Hadoop distribution, which ships with a program called 'sqoop' that provides a very simple interface for importing data straight from MySQL (other databases are supported too). Sqoop can work either through mysqldump or through a normal MySQL query (select * ...), so with this tool there's no need to manually partition tables into files. And for Hadoop it's much better to have one big file anyway.
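For example, a basic Sqoop import might look like the sketch below (the host, database, table, user, and target directory are placeholders, and exact flags can vary slightly between Sqoop versions):

    # Hypothetical example: pull the whole `orders` table from MySQL into HDFS
    # as delimited text files, using 4 parallel map tasks.
    sqoop import \
      --connect jdbc:mysql://db.example.com/mydb \
      --username hadoop_reader \
      -P \
      --table orders \
      --target-dir /data/mydb/orders \
      --num-mappers 4

-P prompts for the password so it doesn't end up in your shell history, and each mapper writes its own part file under the target directory.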

Useful links:
Sqoop User Guide

answered Oct 27 '22 by wlk


2)
Since I don't know your environment, I will err on the safe side: YES, worry about affecting production performance.

Depending on the frequency and quantity of data being written, you may find the dump completes in an acceptable amount of time, particularly if you are only exporting new/changed data (subject to the complexity of your queries).

If you don't require real-time data, or your servers typically have periods when they are under-utilized (overnight?), then you could create the files at that time.

Depending on how your environment is set up, you could replicate/log-ship to a dedicated DB server (or servers) whose sole job is to create your data files.
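As a rough sketch (host, user, database, and paths here are placeholders): exporting a table as tab-delimited text from a replica, rather than from the production master, keeps the load off production and produces files that can go straight into HDFS.

    # Hypothetical off-peak export run against a replica, not the master.
    # -B (batch mode) outputs tab-separated rows; -N drops the column-name header.
    mysql -h replica-db.example.com -u report_user -p -B -N \
      -e "SELECT * FROM orders" mydb > /data/dumps/orders.tsv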


3)
No need for you to split the files yourself; HDFS will take care of partitioning each data file into blocks and replicating them across the cluster. By default it will automatically split files into 64 MB data blocks.
See: Apache - HDFS Architecture
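To illustrate (paths are placeholders): copy the dump into HDFS as a single large file, then inspect how it was split.

    # HDFS chops the file into blocks (64 MB by default) and replicates them.
    hadoop fs -put /data/dumps/orders.tsv /data/mydb/orders.tsv

    # Show the blocks the file was split into and where the replicas live.
    hadoop fsck /data/mydb/orders.tsv -files -blocks -locations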

Re: Wojtek's answer - Sqoop link (clickable links don't work in comments)

If you have more questions or specific environment info, let us know. HTH, Ralph

answered Oct 26 '22 by Ralph Willgoss