 

Loading 5 million rows into Pandas from MySQL

Tags: pandas, mysql

I have 5 million rows in a MySQL DB sitting over the (local) network (so quick connection, not on the internet).

The connection to the DB works fine, but if I try to do

f = pd.read_sql_query('SELECT * FROM mytable', engine, index_col='ID')

this takes a really long time. Even reading it in chunks with chunksize is slow. Besides, I can't tell whether it has hung or is actually still retrieving data.
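As an aside, iterating over the chunks yourself at least gives visible progress. A minimal sketch, using an in-memory SQLite database as a stand-in for the MySQL engine (with MySQL you would pass your own SQLAlchemy engine instead):

```python
import sqlite3
import pandas as pd

# Stand-in data source: a tiny table in an in-memory SQLite database.
conn = sqlite3.connect(':memory:')
pd.DataFrame({'ID': range(10), 'val': range(10)}).to_sql('mytable', conn, index=False)

# With chunksize set, read_sql_query returns an iterator of DataFrames,
# so you can report progress as rows arrive instead of waiting blindly.
total = 0
for chunk in pd.read_sql_query('SELECT * FROM mytable', conn,
                               index_col='ID', chunksize=4):
    total += len(chunk)
    print(f'fetched {total} rows so far')
```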

I would like to ask, for those people working with large data on a DB, how they retrieve their data for their Pandas session?

Would it be "smarter", for example, to run the query, return a csv file with the results and load that into Pandas? Sounds much more involved than it needs to be.

asked Jul 29 '15 by Dervin Thunk

2 Answers

The best way of loading all the data from a table out of any SQL database into pandas is:

  1. Dump the data out of the database, using COPY for PostgreSQL, SELECT INTO OUTFILE for MySQL, or the equivalent for other dialects.
  2. Read the resulting CSV file with pandas using the pandas.read_csv function.

Use the connector only for reading a few rows. The power of an SQL database is its ability to deliver small chunks of data based on indices.

Delivering entire tables is something you do with dumps.
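For MySQL, the dump-then-read step might look like the sketch below. This is an illustrative outline, not the answerer's code: the function name is made up, it assumes a DB-API connection whose user has the FILE privilege, and note that SELECT ... INTO OUTFILE writes to the *server's* filesystem, so `outfile` must be a path both the server can write and the client can read (e.g. a shared mount).

```python
import pandas as pd

def dump_then_read(connection, table, outfile):
    # Ask the MySQL server to dump the whole table as CSV in one pass.
    # The file is created on the server's filesystem, not the client's.
    cur = connection.cursor()
    cur.execute(
        f"SELECT * INTO OUTFILE '{outfile}' "
        "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' "
        f"FROM {table}"
    )
    cur.close()
    # Bulk-load the dump with pandas; header=None because INTO OUTFILE
    # does not write column names.
    return pd.read_csv(outfile, header=None)
```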

answered Oct 16 '22 by firelynx


I had a similar issue while working with an Oracle DB. For me, it turned out retrieving all the data was taking a long time, and in the meantime I had no idea how far along it was or whether anything had gone wrong. My solution was to stream the results of my query into a set of CSV files, and then load those into Pandas.

I'm sure there are faster ways of doing this, but this worked surprisingly well for datasets of around 8 million rows.

You can see the code I used on my GitHub page for easy_query.py, but the core function looked like this:

import cx_Oracle as ora
import pandas as pd

def SQLCurtoCSV(sqlstring, connstring, filename, chunksize):
    # Stream query results into numbered CSV files, one file per chunk.
    # 'filename' should contain '%%' as a placeholder for the chunk number.
    connection = ora.connect(connstring)
    cursor = connection.cursor()
    cursor.arraysize = 256
    cursor.execute(sqlstring)
    columns = [rec[0] for rec in cursor.description]
    rows = []
    chunk = 0
    for row in cursor:
        rows.append(row)
        if len(rows) >= chunksize:
            df = pd.DataFrame.from_records(rows, columns=columns)
            df.to_csv(filename.replace('%%', str(chunk)), sep='|')
            chunk += 1
            rows = []
    # Flush any remaining rows. (My first version only did this when no
    # full chunk had been written, silently dropping the final partial chunk.)
    if rows:
        df = pd.DataFrame.from_records(rows, columns=columns)
        df.to_csv(filename.replace('%%', str(chunk)), sep='|')
    cursor.close()
    connection.close()

The module imports cx_Oracle to provide the database hooks/API calls, but I'd expect similar functions to be available through the corresponding MySQL APIs.
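By way of illustration, a hypothetical MySQL counterpart of the cx_Oracle connection setup might use mysql-connector-python; the streaming loop itself would be unchanged. This is an untested sketch with placeholder parameters, not code from the answer:

```python
def mysql_cursor(host, user, password, database):
    # Imported inside the function so the sketch can be defined even where
    # mysql-connector-python is not installed (pip install mysql-connector-python).
    import mysql.connector

    connection = mysql.connector.connect(
        host=host, user=user, password=password, database=database)
    cursor = connection.cursor()
    cursor.arraysize = 256  # fetch in batches, as with the cx_Oracle cursor
    return connection, cursor
```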

What's nice is that you can see the files building up in your chosen directory, so you get some kind of feedback as to whether your extract is working, and how many results per second/minute/hour you can expect to receive.

It also means you can work on the initial files whilst the rest are being fetched.

Once all the data is saved down to individual files, they can be loaded into a single Pandas DataFrame using multiple pandas.read_csv calls and a single pandas.concat.
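That reassembly step might look like this sketch (the chunk filenames here are hypothetical, standing in for the files written by the streaming function):

```python
import glob
import os
import tempfile
import pandas as pd

# Write a couple of stand-in chunk files to demonstrate the reassembly step.
outdir = tempfile.mkdtemp()
pd.DataFrame({'a': [1, 2]}).to_csv(os.path.join(outdir, 'chunk_0.csv'),
                                   sep='|', index=False)
pd.DataFrame({'a': [3, 4]}).to_csv(os.path.join(outdir, 'chunk_1.csv'),
                                   sep='|', index=False)

# Load every chunk file and stitch them into one DataFrame.
frames = [pd.read_csv(f, sep='|')
          for f in sorted(glob.glob(os.path.join(outdir, 'chunk_*.csv')))]
df = pd.concat(frames, ignore_index=True)
```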

answered Oct 16 '22 by Thomas Kimber