Is it possible to access a database created in one process from another process? I tried:
IDLE #1
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("create table test(testcolumn)")
c.execute("insert into test values('helloooo')")
conn.commit()
conn.close()
IDLE #2
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("select * from test")
Error:
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
q = c.execute("select * from test")
sqlite3.OperationalError: no such table: test
SQLite allows multiple processes to have the database file open at once, and for multiple processes to read the database at once. When any process wants to write, it must lock the entire database file for the duration of its update. But that normally only takes a few milliseconds.
And to add to that: SQLite works fine in a multi-process environment, as long as you are aware that locking may cause some calls to time out (fail), and that they then need to be retried. I know the thread/process difference, and I use multiple processes (the multiprocessing module with pools).
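In Python this usually means passing a `timeout` to `sqlite3.connect` so the driver waits for locks to clear, plus retrying writes that still fail. A minimal sketch against a hypothetical file-backed database (the path and retry count are just illustrative):

```python
import os
import sqlite3
import tempfile
import time

# Hypothetical demo path; any file-backed database works the same way.
db_path = os.path.join(tempfile.gettempdir(), "demo.db")
if os.path.exists(db_path):
    os.remove(db_path)  # start from a clean slate for this demo

# `timeout` makes sqlite3 wait up to N seconds for another process's
# lock to clear before raising "database is locked".
conn = sqlite3.connect(db_path, timeout=5.0)
conn.execute("create table if not exists test(testcolumn)")
conn.commit()

# A simple manual retry loop for writes that may still time out:
for attempt in range(3):
    try:
        conn.execute("insert into test values ('helloooo')")
        conn.commit()
        break
    except sqlite3.OperationalError:
        time.sleep(0.1)  # back off briefly, then retry
conn.close()
```

Here a second connection (which could just as well live in another process) sees the committed row, because the database is a real file on disk.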
SQLite in-memory databases are databases stored entirely in memory, not on disk. Use the special data source filename :memory: to create an in-memory database. When the connection is closed, the database is deleted. When using :memory:, each connection creates its own database.
No, they cannot ever access the same in-memory database from different processes. Instead, a new connection to :memory: always creates a new database.
From the SQLite documentation:
Every :memory: database is distinct from every other. So, opening two database connections each with the filename ":memory:" will create two independent in-memory databases.
This is different from an on-disk database, where creating multiple connections with the same connection string means you are connecting to one database.
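This is easy to verify from Python: two :memory: connections, even within a single process, do not see each other's tables. A minimal sketch:

```python
import sqlite3

# Two separate :memory: connections get two independent databases,
# even inside the same process.
a = sqlite3.connect(':memory:')
b = sqlite3.connect(':memory:')

a.execute("create table test(testcolumn)")
a.execute("insert into test values ('helloooo')")
a.commit()

# The table exists only in connection `a`; `b` raises
# "no such table: test", exactly like in the question.
try:
    b.execute("select * from test")
    shared = True
except sqlite3.OperationalError:
    shared = False
```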
Within one process it is possible to share an in-memory database if you use the file::memory:?cache=shared URI:
conn = sqlite3.connect('file::memory:?cache=shared', uri=True)
but this is still not accessible from another process.
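A minimal sketch of the shared-cache variant: both connections below live in the same process and do see the same in-memory database (which vanishes once the last connection closes):

```python
import sqlite3

# With cache=shared, connections in the SAME process that use this URI
# all attach to one shared in-memory database.
uri = 'file::memory:?cache=shared'
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)

a.execute("create table test(testcolumn)")
a.execute("insert into test values ('helloooo')")
a.commit()

# Unlike plain ':memory:', connection `b` can now read `a`'s table.
rows = b.execute("select * from test").fetchall()
```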
Of course I agree with @Martijn, because the documentation says so, but if you are focused on Unix-like systems you can make use of shared memory: all files created in the /dev/shm folder are mapped directly to RAM, so you can use this to access the same database from two different processes.
#!/bin/bash
rm -f /dev/shm/test.db
time bash -c $'
FILE=/dev/shm/test.db
sqlite3 $FILE "create table if not exists tab(id int);"
sqlite3 $FILE "insert into tab values (1),(2)"
for i in 1 2 3 4; do sqlite3 $FILE "INSERT INTO tab (id) select (a.id+b.id+c.id)*abs(random()%1e7) from tab a, tab b, tab c limit 5e5"; done; #inserts at most 2'000'000 records to db.
sqlite3 $FILE "select count(*) from tab;"'
It takes this much time:
FILE=/dev/shm/test.db
real 0m0.927s
user 0m0.834s
sys 0m0.092s
for at least 2 million records. Doing the same on an HDD takes (this is the same command, but with FILE=/tmp/test.db):
FILE=/tmp/test.db
real 0m2.309s
user 0m0.871s
sys 0m0.138s
So basically this allows you to access the same database from different processes (without losing r/w speed):
Here is a demo demonstrating what I am talking about:
xterm -hold -e 'sqlite3 /dev/shm/testbin "create table tab(id int); insert into tab values (42),(1337);"' &
xterm -hold -e 'sqlite3 /dev/shm/testbin "insert into tab values (43),(1338); select * from tab;"' &