How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

Please bear with me as I explain the problem and how I tried to solve it; my question about how to improve it is at the end.

I have a 100,000-line CSV file from an offline batch job that I needed to insert into the database as its proper models. Ordinarily, a fairly straightforward load like this can be handled trivially by munging the CSV file to fit the schema, but I had to do some external processing that requires querying, and it was just much more convenient to use SQLAlchemy to generate the data I want.

The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous one. For example:

Model C --> Foreign Key --> Model B --> Foreign Key --> Model A
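
A minimal declarative sketch of that dependency chain (the class, table, and column names here are assumptions for illustration, not my real models):

# Hypothetical models illustrating the foreign-key chain C -> B -> A
from sqlalchemy import Column, Integer, ForeignKey
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class ModelA(Base):
    __tablename__ = 'model_a'
    id = Column(Integer, primary_key=True)

class ModelB(Base):
    __tablename__ = 'model_b'
    id = Column(Integer, primary_key=True)
    a_id = Column(Integer, ForeignKey('model_a.id'), nullable=False)

class ModelC(Base):
    __tablename__ = 'model_c'
    id = Column(Integer, primary_key=True)
    b_id = Column(Integer, ForeignKey('model_b.id'), nullable=False)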

So, the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach (sketched in code after the list below):

 - instantiate a multiprocessing.Process that contains a thread pool of 50 persister threads, each with a thread-local connection to the database

 - read a line from the file using the csv DictReader

 - enqueue the dictionary to the process, where each thread creates the appropriate models by querying for the right values, and persists the models in the appropriate order
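
Roughly, the pipeline looked like the following simplified sketch (everything in one process here; Session and persist_models are assumed placeholders for my session factory and the code that builds and saves A, B, and C for one row):

import csv
import queue
import threading

NUM_WORKERS = 50
rows = queue.Queue(maxsize=1000)

def worker():
    session = Session()                      # assumed thread-local session/connection factory
    while True:
        row = rows.get()
        if row is None:                      # sentinel: no more work
            break
        persist_models(session, row)         # assumed: creates A, then B, then C
        session.commit()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

with open('batch.csv') as fh:                # file name assumed
    for row in csv.DictReader(fh):
        rows.put(row)

for _ in threads:
    rows.put(None)                           # one sentinel per worker
for t in threads:
    t.join()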

This was faster than a non-threaded read/persist, but it was still way slower than bulk-loading a file into the database: the job finished persisting after about 45 minutes. For fun, I wrote the same load as raw SQL statements, and that took 5 minutes.

Writing the SQL statements took me a couple of hours, though. So my question is: could I have inserted the rows faster with SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk-insert operations, so this is less than ideal.

This leads to my follow-up question: is there a way to generate the SQL statements with SQLAlchemy, dump them to a file, and then just bulk-load that file into the database? I know about str(model_object), but it does not show the interpolated values.
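
To illustrate what I mean (a toy table, not my real schema):

from sqlalchemy import Table, Column, Integer, String, MetaData

metadata = MetaData()
users = Table('users', metadata,
              Column('id', Integer, primary_key=True),
              Column('name', String(50)))

stmt = users.insert().values(name='alice')
print(stmt)                    # INSERT INTO users (name) VALUES (:name)
print(stmt.compile().params)   # {'name': 'alice'} -- the values stay separate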

I would appreciate any guidance on how to do this faster.

Thanks!

Mahmoud Abdelkader asked May 21 '10


2 Answers

Ordinarily, no, there's no way to get the query with the values included.

What database are you using, though? Because a lot of databases do have a bulk-load feature for CSV available (a Python sketch for Postgres follows the links):

  • Postgres: http://www.postgresql.org/docs/8.4/static/sql-copy.html
  • MySQL: http://dev.mysql.com/doc/refman/5.1/en/load-data.html
  • Oracle: http://www.orafaq.com/wiki/SQL*Loader_FAQ
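
For example, with Postgres you can drive COPY from Python through the engine's raw DBAPI connection. A rough sketch only; the connection string, table, column, and file names are assumptions:

# Bulk-load a CSV into Postgres via COPY, using the raw psycopg2 cursor
# underneath a SQLAlchemy engine. All names here are assumed examples.
from sqlalchemy import create_engine

engine = create_engine('postgresql://user:password@localhost/mydb')
raw = engine.raw_connection()
try:
    cur = raw.cursor()
    with open('model_a.csv') as fh:
        cur.copy_expert('COPY model_a (col1, col2) FROM STDIN WITH CSV HEADER', fh)
    raw.commit()
finally:
    raw.close()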

If you're willing to accept that certain values might not be escaped correctly, then you can use this hack I wrote for debugging purposes:

def interpolate_params(query, compiler):
    """Replace the parameter placeholders in query with the values from a
    compiled statement. Debugging hack only: the quoting is not safe."""
    # Longest keys first, so e.g. :name_1 is replaced before :name
    params = sorted(compiler.params.items(),
                    key=lambda kv: len(str(kv[0])), reverse=True)
    for k, v in params:
        # Some types don't need quoting
        if isinstance(v, (int, float, bool)):
            v = str(v)
        else:
            v = "'%s'" % v
        # Works both with :foo and %(foo)s style placeholders
        query = query.replace(':%s' % k, v)
        query = query.replace('%%(%s)s' % k, v)
    return query
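
For example, reusing the toy users table from the question above:

stmt = users.insert().values(name='alice')
compiled = stmt.compile()
print(interpolate_params(str(compiled), compiled))
# INSERT INTO users (name) VALUES ('alice')

You could write those strings to a file and feed that file to whatever bulk loader your database provides.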
Wolph answered Nov 20 '22

First, unless you actually have a machine with 50 CPU cores, using 50 threads/processes won't help performance -- it will actually make things slower.

Second, I've a feeling that if you used SQLAlchemy's way of inserting multiple values at once, it would be much faster than creating ORM objects and persisting them one-by-one.
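
Something along these lines (the table objects and row dictionaries are assumed; passing a list of dicts to a single execute() makes SQLAlchemy issue an executemany):

# Sketch: insert many rows per statement instead of one ORM object at a time.
# rows_a / rows_b / rows_c are assumed lists of column-name -> value dicts,
# built in dependency order so the foreign keys resolve.
with engine.begin() as conn:
    conn.execute(model_a_table.insert(), rows_a)
    conn.execute(model_b_table.insert(), rows_b)
    conn.execute(model_c_table.insert(), rows_c)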

Marius Gedminas answered Nov 20 '22