 

Concurrent writing with sqlite3 [duplicate]

I'm using the sqlite3 Python module to write the results from batch jobs to a common .db file. I chose SQLite because multiple processes may try to write at the same time, and as I understand it SQLite should handle this well. What I'm unsure of is what happens when multiple processes finish and try to write at the same time. So if several processes that look like this

from sqlite3 import connect

conn = connect('test.db')  # assumes sometable has already been created

with conn:  # commits on success, rolls back if an exception escapes the block
    for v in xrange(10):
        tup = (str(v), v)
        conn.execute("insert into sometable values (?,?)", tup)

execute at once, will they throw an exception? Wait politely for the other processes to write? Is there some better way to do this?

Shep asked Aug 13 '13 10:08



2 Answers

The sqlite3 library will lock the database while a process is writing to it, and each other process will wait for the lock to be released before taking its turn.

However, the database doesn't need to be written to until commit time. You are using the connection as a context manager (good!), so the commit takes place after your loop has completed and all the insert statements have been executed.

If your database has uniqueness constraints in place, the commit may fail because another process has already added rows that your rows conflict with.
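A minimal sketch of handling that case (the inserted values are placeholders, and sometable is assumed to already exist with a uniqueness constraint): the conflict surfaces as an sqlite3.IntegrityError somewhere inside the with block, and the context manager rolls the transaction back:

import sqlite3

conn = sqlite3.connect('test.db')

try:
    with conn:  # rolls back automatically if an exception escapes the block
        conn.execute("insert into sometable values (?,?)", ("7", 7))
except sqlite3.IntegrityError:
    # another process already wrote a row this insert conflicts with
    print "conflicting row already present, skipping"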

Martijn Pieters answered Sep 30 '22 19:09


If each process holds its own connection then it should be fine. What will happen is that when writing, the process will lock the DB, so all other processes will block. They will throw an exception if the timeout to wait for the DB to be free is exceeded. The timeout can be configured through the connect call:

http://docs.python.org/2/library/sqlite3.html#sqlite3.connect
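For illustration (a sketch only; the 30-second value is arbitrary, and sometable is assumed to exist), the timeout is passed straight to connect, and sqlite3.OperationalError ("database is locked") is raised if the lock is still held when the timeout expires:

import sqlite3

# wait up to 30 seconds for competing writers to release their locks
# (the default timeout is 5 seconds)
conn = sqlite3.connect('test.db', timeout=30.0)

try:
    with conn:
        conn.execute("insert into sometable values (?,?)", ("1", 1))
except sqlite3.OperationalError:
    # the database was still locked when the timeout expired
    print "database is locked, giving up"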

It is not recommended that you keep your DB file on a network share, since SQLite's file locking is known to be unreliable on many network file systems.

Update:

You may also want to check the isolation level: http://docs.python.org/2/library/sqlite3.html#sqlite3.Connection.isolation_level
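As a brief sketch of what that setting changes (again using the hypothetical sometable): isolation_level=None switches the connection to autocommit mode, so every statement is written out immediately, whereas the default defers the implicit BEGIN until the first modifying statement and holds the write lock only until the commit:

import sqlite3

# autocommit mode: no implicit transaction, each execute() is committed at once
auto_conn = sqlite3.connect('test.db', isolation_level=None)
auto_conn.execute("insert into sometable values (?,?)", ("2", 2))

# default mode: an implicit BEGIN is issued before the first insert,
# and the write lock is held until the with block commits
default_conn = sqlite3.connect('test.db')
with default_conn:
    default_conn.execute("insert into sometable values (?,?)", ("3", 3))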

kursancew answered Sep 30 '22 17:09