I'm trying to create an in-memory database using sqlite3 in Python.
I created a function that creates a database file and stores information in it, and that works fine.
But when I try to connect with :memory: instead, I run into problems.
What I'm doing is:
import sqlite3
def execute_db(*args):
    db = sqlite3.connect(":memory:")
    cur = db.cursor()
    data = True
    try:
        args = list(args)
        args[0] = args[0].replace("%s", "?").replace(" update "," `update` ")
        args = tuple(args)
        cur.execute(*args)
        arg = args[0].split()[0].lower()
        if arg in ["update", "insert", "delete", "create"]: db.commit()
    except Exception as why:
        print(why)
        data = False
        db.rollback()
    db.commit()
    db.close()
    return data
Create the name table:
execute_db("create table name(name text)")
which returned True.
Insert some data into this table:
execute_db("insert into name values('Hello')")
which returned
no such table: name
False
Why doesn't this work? It works when I use a file:
db = sqlite3.connect("sqlite3.db")
SQLite in-memory databases are databases stored entirely in memory, not on disk. Use the special data source filename :memory: to create an in-memory database. When the connection is closed, the database is deleted.
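A quick illustration of that last point (a sketch using only the standard library):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table name(name text)")
db.close()                                   # the whole database disappears here

db = sqlite3.connect(":memory:")             # a brand-new, empty database
try:
    db.execute("select * from name")
except sqlite3.OperationalError as why:
    print(why)                               # no such table: name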
You create a new connection each time you call the function. Each connection call produces a new in-memory database.
Create the connection outside of the function and pass it in, or create a shared-cache in-memory connection:
db = sqlite3.connect("file::memory:?cache=shared", uri=True)
(uri=True is required for sqlite3 to treat the string as a URI rather than as a literal filename.)
Even with a shared cache, the database is erased when the last connection to it is closed; in your case that would be each time the function ends.
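A minimal sketch of the first approach, assuming execute_db is changed to take the connection as its first argument (illustrative, simplified from the function in the question):

import sqlite3

def execute_db(db, *args):
    # reuse the caller's connection instead of opening a fresh in-memory database
    cur = db.cursor()
    try:
        cur.execute(*args)
        db.commit()
        return True
    except Exception as why:
        print(why)
        db.rollback()
        return False

db = sqlite3.connect(":memory:")                       # one connection, one database
execute_db(db, "create table name(name text)")         # True
execute_db(db, "insert into name values('Hello')")     # True -- the table is still there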
Rather than explicitly call db.commit(), just use the database connection as a context manager:
try:
    with db:
        cur = db.cursor()
        # massage `args` as needed
        cur.execute(*args)
        return True
except Exception as why:
    return False
The transaction is automatically committed if there was no exception, rolled back otherwise. Note that it is safe to commit a query that only reads data.
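Combining that with a connection passed in from outside, execute_db could shrink to something like this (a sketch, not the only way to write it):

def execute_db(db, *args):
    try:
        with db:                      # commits on success, rolls back on any exception
            db.execute(*args)
            return True
    except Exception as why:
        print(why)
        return False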
I created a dataframe and dumped it into a memory db with a shared cache:
#sql_write.py
import sqlite3
import pandas as pd
conn = sqlite3.connect('file:cachedb?mode=memory&cache=shared', uri=True)  # uri=True so the string is parsed as a URI
cur  = conn.cursor()
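(For a self-contained test, a two-row stand-in for the tick data can be built like this; the values are copied from the first rows of the output below:)

df = pd.DataFrame({
    "DT":  pd.to_datetime(["2020-01-06 00:00:00.103", "2020-01-06 00:00:00.204"]),
    "Bid": [1.11603, 1.11602],
    "Ask": [1.11605, 1.11605],
})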
df
          DT      Bid      Ask
0         2020-01-06 00:00:00.103000  1.11603  1.11605
1         2020-01-06 00:00:00.204000  1.11602  1.11605
...                              ...      ...      ...
13582616  2020-06-01 23:59:56.990000  1.11252  1.11256
13582617  2020-06-01 23:59:58.195000  1.11251  1.11255
[13582618 rows x 3 columns]
df.to_sql("ticks", conn, if_exists="replace")
Read from the in-memory database in another thread or script (the writing process has to keep its connection open, otherwise the shared in-memory database is dropped):
#sql_read.py
import sqlite3
import pandas as pd
conn = sqlite3.connect('file:cachedb?mode=memory&cache=shared', uri=True)  # uri=True so the string is parsed as a URI
cur  = conn.cursor()
df = pd.read_sql_query("select * from ticks", conn)
df
          DT      Bid      Ask
0         2020-01-06 00:00:00.103000  1.11603  1.11605
1         2020-01-06 00:00:00.204000  1.11602  1.11605
...                              ...      ...      ...
13582616  2020-06-01 23:59:56.990000  1.11252  1.11256
13582617  2020-06-01 23:59:58.195000  1.11251  1.11255
[13582618 rows x 3 columns]
Note that this is a 15-second read from memory for 13.5 million rows (Python 2.7). If I pickle the same dataframe and load it back instead, the read takes only 0.3 seconds: that was disappointing to discover, as I was hoping to dump a huge table into memory and pull it up instantly from anywhere. But there you go, pickle it is.
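For reference, the pickle round trip mentioned above is just the following (the file name is illustrative, and df is the dataframe from the write script):

import pandas as pd

# dump the dataframe to disk once...
df.to_pickle("ticks.pkl")

# ...and reload it anywhere else; this was the ~0.3 second path in the timing above
df = pd.read_pickle("ticks.pkl")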