How to reduce SQLite memory consumption?

I'm looking for ways to reduce memory consumption by SQLite3 in my application.

At each execution it creates a table with the following schema:

(main TEXT NOT NULL PRIMARY KEY UNIQUE, count INTEGER DEFAULT 0)

After that, the database is filled with 50k operations per second. Write only.

When an item already exists, it updates "count" using an update query (I think this is called UPSERT). These are my queries:

INSERT OR IGNORE INTO table (main) VALUES (@SEQ);
UPDATE table SET count = count + 1 WHERE main = @SEQ;

This way, with 5 million operations per transaction, I can write really fast to the DB.
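
For context, a minimal C sketch of this write pattern (error handling omitted; the table name t stands in for the real one): both statements are prepared once and reused inside one big transaction.

#include <stddef.h>
#include <sqlite3.h>

/* One big transaction around many (INSERT OR IGNORE, UPDATE) pairs,
   with both statements prepared once and reused for every item. */
static void write_batch(sqlite3 *db, const char **seqs, size_t n)
{
    sqlite3_stmt *ins = 0, *upd = 0;
    sqlite3_prepare_v2(db, "INSERT OR IGNORE INTO t (main) VALUES (?)",
                       -1, &ins, 0);
    sqlite3_prepare_v2(db, "UPDATE t SET count = count + 1 WHERE main = ?",
                       -1, &upd, 0);

    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for (size_t i = 0; i < n; i++) {
        sqlite3_bind_text(ins, 1, seqs[i], -1, SQLITE_STATIC);
        sqlite3_step(ins);
        sqlite3_reset(ins);

        sqlite3_bind_text(upd, 1, seqs[i], -1, SQLITE_STATIC);
        sqlite3_step(upd);
        sqlite3_reset(upd);
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);

    sqlite3_finalize(ins);
    sqlite3_finalize(upd);
}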

I don't really care about disk space for this problem, but I have a very limited RAM space. Thus, I can't waste too much memory.

sqlite3_memory_used() reports that memory consumption grows to almost 3 GB during execution. If I limit it to 2 GB through sqlite3_soft_heap_limit64(), the performance of database operations drops to almost zero once the limit is reached.
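
(For reference, a minimal sketch of how that limit is set; the 2 GiB value mirrors the question:)

/* Advisory heap limit in bytes. SQLite tries to stay under it by
   evicting cache pages, which is what stalls writes near the cap. */
sqlite3_soft_heap_limit64(2LL * 1024 * 1024 * 1024);  /* 2 GiB */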

I had to raise the cache size to 1M pages (page size is the default) to reach acceptable performance.

What can I do to reduce memory consumption?

asked Mar 06 '13 by Pedro Alves

People also ask

Does SQLite use a lot of memory?

SQLite will refuse to allocate more than about 2GB of memory at one go. (In common use, SQLite seldom ever allocates more than about 8KB of memory at a time so a 2GB allocation limit is not a burden.)

How much data is too much for SQLite?

SQLite database files have a maximum size of about 140 TB. On a phone, the size of the storage (a few GB) will limit your database file size, while the memory size will limit how much data you can retrieve from a query. Furthermore, Android cursors have a limit of 1 MB for the results.

How fast is SQLite in memory?

sqlite or memory-sqlite is faster for tasks such as selecting two columns from data: under 0.1 millisecond for any data size with sqlite, while pandas scales with the data, up to just under 0.5 seconds for 10 million records.

Is SQLite faster than the filesystem?

Reading and writing from an SQLite database is often faster than reading and writing individual files from disk. See 35% Faster Than The Filesystem and Internal Versus External BLOBs. The application only has to load the data it needs, rather than reading the entire file and holding a complete parse in memory.


2 Answers

It seems that the high memory consumption is caused by too many operations being concentrated in one big transaction. Committing smaller transactions, e.g. one per 1M operations, may help; 5M operations per transaction consume too much memory.

However, you'll have to balance operation speed against memory usage.

If smaller transactions are not an option, PRAGMA shrink_memory may be a choice; run it when the connection is idle, as sketched below.
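
(A minimal sketch; db is an open connection:)

/* Ask this connection to release as much heap memory as possible. */
sqlite3_exec(db, "PRAGMA shrink_memory;", 0, 0, 0);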

Use sqlite3_status() with SQLITE_STATUS_MEMORY_USED to trace the dynamic memory allocation and locate the bottleneck.
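
A sketch of such tracing, assuming the 64-bit variant sqlite3_status64() (SQLite >= 3.8.9) so that counters near 3 GB don't overflow an int:

#include <sqlite3.h>
#include <stdio.h>

/* Print SQLite's global memory usage, e.g. once per committed batch.
   Values are in bytes; a non-zero last argument resets the peak counter. */
static void report_memory(void)
{
    sqlite3_int64 current = 0, peak = 0;
    sqlite3_status64(SQLITE_STATUS_MEMORY_USED, &current, &peak, 0);
    printf("sqlite memory: %lld bytes (peak %lld)\n",
           (long long)current, (long long)peak);
}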

answered Sep 28 '22 by Peixu Zhu


I would:

  • prepare the statements (if you're not doing it already)
  • lower the number of INSERTs per transaction (10 seconds' worth = 500,000 sounds appropriate)
  • use PRAGMA locking_mode = EXCLUSIVE; if you can

Also, in case you don't know it, PRAGMA cache_size is measured in pages, not in MB. Make sure you compute your target memory as PRAGMA cache_size * PRAGMA page_size; in SQLite >= 3.7.10 you can also do PRAGMA cache_size = -kibibytes; to specify the size directly. Setting it to 1 M(illion) pages would result in 1 or 2 GB.

I'm curious how cache_size helps in INSERTs though...
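
To make that arithmetic concrete (a sketch; the 64 MiB target is arbitrary): with the default 1024-byte page size of that era, cache_size = 1,000,000 pages is roughly 1 GB of cache, so capping it in KiB is the safer form:

/* Cap the page cache at ~64 MiB regardless of page size; a negative
   cache_size is interpreted as KiB (SQLite >= 3.7.10). */
sqlite3_exec(db, "PRAGMA cache_size = -65536;", 0, 0, 0);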

You can also try and benchmark if the PRAGMA temp_store = FILE; makes a difference.

And of course, whenever your database is not being written to:

  • PRAGMA shrink_memory;
  • VACUUM;

Depending on what you're doing with the database, these might also help:

  • PRAGMA auto_vacuum = 1|2;
  • PRAGMA secure_delete = ON;

I ran some tests with the following pragmas:

busy_timeout=0;
cache_size=8192;
encoding="UTF-8";
foreign_keys=ON;
journal_mode=WAL;
legacy_file_format=OFF;
synchronous=NORMAL;
temp_store=MEMORY;

Test #1:

INSERT OR IGNORE INTO test (time) VALUES (?);
UPDATE test SET count = count + 1 WHERE time = ?;

Peaked at ~109k updates per second.

Test #2:

REPLACE INTO test (time, count) VALUES
(?, coalesce((SELECT count FROM test WHERE time = ? LIMIT 1) + 1, 1));

Peaked at ~120k updates per second.
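
(Side note, assuming a modern SQLite: version 3.24.0 and later has native UPSERT syntax, which didn't exist when this question was asked but folds both tests into a single statement and may be worth benchmarking:)

/* Native UPSERT (SQLite >= 3.24.0): insert-or-increment in one statement. */
sqlite3_stmt *stmt = 0;
sqlite3_prepare_v2(db,
    "INSERT INTO test (time, count) VALUES (?, 1) "
    "ON CONFLICT(time) DO UPDATE SET count = count + 1",
    -1, &stmt, 0);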


I also tried PRAGMA temp_store = FILE; and the updates dropped by ~1-2k per second.


For 7M updates in a single transaction, journal_mode = WAL is slower than all the other modes.


I populated a database with 35,839,987 records and now my setup is taking nearly 4 seconds per batch of 65,521 updates - however, it doesn't even reach 16 MB of memory consumption.


Ok, here's another one:

Indexes on INTEGER PRIMARY KEY columns (don't do it)

When you create a column as INTEGER PRIMARY KEY, SQLite uses this column as the key for (index to) the table structure. This is a hidden index (it isn't shown in the sqlite_master table) on this column. Adding another index on the column is not needed and will never be used. In addition, it will slow down INSERT, UPDATE and DELETE operations.

You seem to be defining your PK as NOT NULL + UNIQUE. A PRIMARY KEY is implicitly UNIQUE, so the UNIQUE constraint is redundant (keep the NOT NULL, though: in SQLite, non-INTEGER primary keys may otherwise allow NULLs).
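
Concretely, the schema from the question could be trimmed to (a sketch with a placeholder table name):

/* UNIQUE dropped: it is implied by PRIMARY KEY. NOT NULL is kept,
   since non-INTEGER primary keys in SQLite may otherwise accept NULLs. */
sqlite3_exec(db,
    "CREATE TABLE IF NOT EXISTS t ("
    "  main  TEXT NOT NULL PRIMARY KEY,"
    "  count INTEGER DEFAULT 0)",
    0, 0, 0);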

answered Sep 28 '22 by Alix Axel