My question comes directly from this one, although I'm only interested in UPDATE, and only that.
I have an application written in C/C++ which makes heavy use of SQLite, mostly SELECT/UPDATE, at a very frequent interval (about 20 queries every 0.5 to 1 second).
My database is not big, about 2500 records at the moment; here is the table structure:
CREATE TABLE player (
    id    INTEGER PRIMARY KEY AUTOINCREMENT,
    name  VARCHAR(64) UNIQUE,
    stats VARBINARY,
    rules VARBINARY
);
Up to this point I did not use transactions, because I was improving the code and wanted stability rather than performance.
Then I measured my database performance by merely executing 10 UPDATE queries like the following (in a loop, with different values):
// 10 times execution of this
UPDATE player SET stats = ? WHERE (name = ?)
where stats is a JSON string of exactly 150 characters and name is 5-10 characters long.
Without transactions, the result is unacceptable: about 1 full second in total (0.096 s each).
With transactions, the time drops by a factor of about 7.5: about 0.11-0.16 seconds in total (0.013 s each).
I tried deleting a large part of the database and/or re-ordering/deleting columns to see if that changes anything, but it did not. I get the above numbers even if the database contains just 100 records (tested).
I then tried playing with PRAGMA options:
PRAGMA synchronous = NORMAL
PRAGMA journal_mode = MEMORY
This gave me smaller times, but not consistently; more like 0.08-0.14 seconds.
PRAGMA synchronous = OFF
PRAGMA journal_mode = MEMORY
This finally gave me extremely small times, about 0.002-0.003 seconds, but I don't want to use it, since my application saves the database every second and there's a high chance of a corrupted database on OS/power failure.
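For reference, here is a minimal sketch of how such PRAGMAs can be applied through the C API right after opening the connection (the file name is a placeholder and error handling is omitted):

// assumes #include <sqlite3.h>
sqlite3 *db = NULL;
sqlite3_open("players.db", &db); // placeholder file name
sqlite3_exec(db, "PRAGMA synchronous = NORMAL", NULL, NULL, NULL);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, NULL);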
My C SQLite code for queries is (comments/error handling/unrelated parts omitted):
// start the transaction
sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);

// prepare the query
sqlite3_stmt *statement = NULL;
int out = sqlite3_prepare_v2(db, query.c_str(), -1, &statement, NULL);

// bindings
for (size_t x = 0, sz = bindings.size(); x < sz; x++) {
    out = sqlite3_bind_text(statement, (int)(x + 1), bindings[x].text_value.c_str(),
                            (int)bindings[x].text_value.size(), SQLITE_TRANSIENT);
    ...
}

// execute; sqlite3_step() returns SQLITE_DONE for a successful UPDATE, not SQLITE_OK
out = sqlite3_step(statement);

// finalize the statement regardless of the result
if (statement != NULL) {
    sqlite3_finalize(statement);
}

// end the transaction
sqlite3_exec(db, "END TRANSACTION", NULL, NULL, NULL);
As you can see, it's a pretty typical table, the record count is small, and I'm doing a plain simple UPDATE exactly 10 times. Is there anything else I could do to decrease my UPDATE times? I'm using the latest SQLite 3.16.2.
NOTE: The timings above come directly from a single END TRANSACTION query. Queries are done inside a single transaction, and I'm using a prepared statement.
UPDATE:
I performed some tests with transactions enabled and disabled, with various update counts. I used the following settings:
VACUUM;
PRAGMA synchronous = NORMAL; -- def: FULL
PRAGMA journal_mode = WAL; -- def: DELETE
PRAGMA page_size = 4096; -- def: 1024
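(Note: on an existing database, PRAGMA page_size only takes effect after a subsequent VACUUM, and the page size cannot be changed once the database is in WAL mode, so the order in which these statements run matters.)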
The results follow, one timing per configuration:
- no transactions (10 updates)
- no transactions (100 updates)
- no transactions (1000 updates)
- with transactions (10 updates)
- with transactions (100 updates)
- with transactions (1000 updates)
My conclusion is that with transactions there's no meaningful per-query time cost. Perhaps the times get bigger with a colossal number of updates, but I'm not interested in those numbers. There's literally no time-cost difference between 10 and 1000 updates in a single transaction. However, I'm wondering if this is a hardware limit on my machine and there's not much I can do. It seems I cannot go below ~100 milliseconds using a single transaction over 10-1000 updates, even by using WAL.
Without transactions there's a fixed time cost of around 0.025 seconds.
With such small amounts of data, the time for the database operation itself is insignificant; what you're measuring is the transaction overhead (the time needed to force the write to the disk), which depends on the OS, the file system, and the hardware.
If you can live with its restrictions (mostly, no network), you can use asynchronous writes by enabling WAL mode.
You may still be limited by the time it takes to commit a transaction. In your first example each transaction took about 0.10 seconds to complete, which is pretty close to the transaction time for updating 10 records. What kind of results do you get if you batch 100 or 1000 updates in a single transaction?
Also, SQLite expects around 60 transactions per second on an average hard drive, while you're only getting about 10. Could your disk performance be the issue here?
https://sqlite.org/faq.html#q19
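One rough way to check this (a sketch, assuming an open sqlite3 *db as above; the sample size is arbitrary) is to time a batch of single-statement transactions, each of which forces a commit to disk:

// assumes #include <sqlite3.h>, <chrono>, <cstdio>
const int N = 50; // arbitrary sample size
auto t0 = std::chrono::steady_clock::now();
for (int i = 0; i < N; i++) {
    // each statement runs in its own implicit transaction, forcing one commit per loop
    sqlite3_exec(db, "UPDATE player SET stats = stats WHERE id = 1", NULL, NULL, NULL);
}
auto t1 = std::chrono::steady_clock::now();
double secs = std::chrono::duration<double>(t1 - t0).count();
printf("%.1f transactions per second\n", N / secs);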
Try adding an index to your database:
CREATE INDEX IDXname ON player (name)
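Note, though, that name is already declared UNIQUE in the schema above, and SQLite automatically creates an index to enforce a UNIQUE constraint, so an extra index on name may not change the UPDATE timings.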