
Python CSV to SQLite


I am "converting" a large (~1.6GB) CSV file and inserting specific fields of the CSV into a SQLite database. Essentially my code looks like:

```python
import csv, sqlite3

conn = sqlite3.connect("path/to/file.db")
conn.text_factory = str  # handle 8-bit bytestrings
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS mytable (field2 VARCHAR, field4 VARCHAR)')

reader = csv.reader(open("filecsv.txt", "rb"))
for field1, field2, field3, field4, field5 in reader:
    cur.execute('INSERT OR IGNORE INTO mytable (field2, field4) VALUES (?,?)', (field2, field4))
```

Everything works as I expect it to, with one exception: it takes an incredible amount of time to process. Am I coding it incorrectly? Is there a better way to achieve higher performance and accomplish what I need (simply converting a few fields of a CSV into a SQLite table)?

**EDIT -- I tried directly importing the CSV into SQLite as suggested, but it turns out my file has commas inside fields (e.g. "My title, comma"), and that creates errors with the import. It appears there are too many of those occurrences to edit the file manually...**

**Any other thoughts?**

asked May 09 '11 by user735304



2 Answers

Chris is right - use transactions; divide the data into chunks and then store it.

"... Unless already in a transaction, each SQL statement has a new transaction started for it. This is very expensive, since it requires reopening, writing to, and closing the journal file for each statement. This can be avoided by wrapping sequences of SQL statements with BEGIN TRANSACTION; and END TRANSACTION; statements. This speedup is also obtained for statements which don't alter the database." - Source: http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html

"... there is another trick you can use to speed up SQLite: transactions. Whenever you have to do multiple database writes, put them inside a transaction. Instead of writing to (and locking) the file each and every time a write query is issued, the write will only happen once when the transaction completes." - Source: How Scalable is SQLite?

```python
import csv, sqlite3, time
from itertools import islice

def chunks(reader, rows=10000):
    """ Yield successive chunks of `rows` rows from the csv reader """
    while True:
        chunk = list(islice(reader, rows))
        if not chunk:
            break
        yield chunk

if __name__ == "__main__":

    t = time.time()

    conn = sqlite3.connect("path/to/file.db")
    conn.text_factory = str      # handle 8-bit bytestrings
    conn.isolation_level = None  # manage transactions explicitly with BEGIN/COMMIT
    cur = conn.cursor()
    cur.execute('CREATE TABLE IF NOT EXISTS mytable (field2 VARCHAR, field4 VARCHAR)')

    csvData = csv.reader(open("filecsv.txt", "rb"))

    divData = chunks(csvData)  # divide into 10000 rows each

    for chunk in divData:
        cur.execute('BEGIN TRANSACTION')

        for field1, field2, field3, field4, field5 in chunk:
            cur.execute('INSERT OR IGNORE INTO mytable (field2, field4) VALUES (?,?)', (field2, field4))

        cur.execute('COMMIT')

    print "\n Time Taken: %.3f sec" % (time.time() - t)
```
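A further variant of the same idea (a sketch, not part of the answer above): hand each chunk to cursor.executemany(), which cuts the per-row Python overhead, and let the sqlite3 module's implicit transaction be committed once per chunk with conn.commit(). This assumes Python 3 and reuses the file name and five-column layout from the question:

```python
import csv
import sqlite3
from itertools import islice

# Paths, table name, and the five-column layout are assumptions carried
# over from the question, not verified against the real file.
conn = sqlite3.connect("path/to/file.db")
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS mytable (field2 VARCHAR, field4 VARCHAR)')

with open("filecsv.txt", newline="") as f:   # text mode for csv in Python 3
    reader = csv.reader(f)
    while True:
        chunk = list(islice(reader, 10000))  # pull up to 10,000 rows at a time
        if not chunk:
            break
        cur.executemany(
            'INSERT OR IGNORE INTO mytable (field2, field4) VALUES (?, ?)',
            ((row[1], row[3]) for row in chunk),  # keep only field2 and field4
        )
        conn.commit()                        # one commit per chunk

conn.close()
```

Adjust the chunk size to trade memory use against the number of commits.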
answered by Sam

It's possible to import the CSV directly:

```
sqlite> .separator ","
sqlite> .import filecsv.txt mytable
```

http://www.sqlite.org/cvstrac/wiki?p=ImportingFiles
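As a side note on the question's edit about commas inside quoted fields: `.separator ","` splits on every comma, but recent versions of the sqlite3 shell also provide a CSV mode whose `.import` respects quoting, which may avoid the problem:

```
sqlite> .mode csv
sqlite> .import filecsv.txt mytable
```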

answered by fengb