Related question: How to use SQLite in a multi-threaded application.
I've been trying to get decent performance out of SQLite3 in a multi-threaded program. I've been very impressed with its performance except for write latency. That's not its fault; it has to wait for the disk to spin to commit the data. But having reads blocked during those writes, even when they could be served from cache, is pretty intolerable.
My use case involves a large number of small read operations to get one tiny object by an indexed field, but latency is important for these operations because there are a lot of them. Writes are large and are accumulated into a single transaction. I don't want reads to have huge latency due to completing writes.
I first just used a single connection with a mutex to protect it. However, while the writing thread is waiting for its transaction to commit, readers are blocked on disk I/O because they can't acquire the mutex until the writer releases it. I tried using multiple connections, but then I get SQLITE_LOCKED from sqlite3_step, which means having to redesign all the reading code.
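The mutex-guarded single connection described above can be sketched with Python's built-in sqlite3 module (the file name, table, and helper functions here are illustrative, not the asker's actual code):

```python
import sqlite3
import threading

# One shared connection guarded by a lock: correct, but every reader
# stalls whenever the writer holds the lock across a slow COMMIT.
conn = sqlite3.connect("app.db", check_same_thread=False)
lock = threading.Lock()
conn.execute(
    "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, data TEXT)")

def read_item(item_id):
    # Blocks for the whole duration of any in-flight write transaction,
    # even when the row is sitting in SQLite's page cache.
    with lock:
        return conn.execute(
            "SELECT data FROM items WHERE id = ?", (item_id,)).fetchone()

def write_items(rows):
    with lock:
        with conn:  # one accumulated transaction; COMMIT waits on the disk
            conn.executemany("INSERT INTO items (data) VALUES (?)", rows)
```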
My write logic currently looks like this:
START TRANSACTION
END TRANSACTION  -- here's where it blocks

Is there some solution I'm not aware of? Is there an easy way to keep my readers from having to wait for the disk to finish rotating when the entry is in cache, without having to rewrite all my reading code to handle SQLITE_LOCKED, reset, and retry?
To allow multiple readers and one writer to access the database concurrently, enable write-ahead logging.
WAL works well with small transactions, so you don't need to accumulate writes.
Please note that WAL does not work on networked file systems and, for optimal performance, requires regular checkpointing.
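Enabling WAL is a one-line pragma; the setting is persistent, so it only needs to be applied once per database file. A minimal Python sketch (the file name is a placeholder):

```python
import sqlite3

# Each thread should still open its own connection; under WAL, readers
# proceed against a consistent snapshot while another connection writes.
conn = sqlite3.connect("wal_demo.db")
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
conn.execute("PRAGMA synchronous=NORMAL")  # common companion setting for WAL
```

The pragma returns the journal mode actually in effect, so checking that `mode` equals `"wal"` confirms the switch succeeded.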
First of all, SQLite offers multi-threaded support on its own. You do not need your own mutexes; they only slow the entire program down. Consult the SQLite threading options if you have any doubts.
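For example, SQLite builds default to the "serialized" threading mode, in which even a single connection may be shared between threads without an application-level mutex. A hedged sketch using Python's sqlite3 wrapper (check_same_thread=False tells the wrapper to permit the sharing; file and table names are illustrative):

```python
import sqlite3
import threading

# Default SQLite builds are "serialized": the library does its own
# locking, so no application mutex is needed around this connection.
conn = sqlite3.connect("shared.db", check_same_thread=False)
conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
conn.execute("INSERT OR REPLACE INTO kv VALUES ('x', '1')")
conn.commit()

errors = []

def reader():
    try:
        for _ in range(100):
            conn.execute("SELECT v FROM kv WHERE k = 'x'").fetchone()
    except Exception as exc:  # collect failures instead of losing them
        errors.append(exc)

threads = [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```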
Using a write-ahead log may solve your problems, but it is a double-edged sword. As long as a read is ongoing, the inserted data will not be written back to the main database file and the WAL journal will grow. This is covered in detail in Write-Ahead Logging.
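One way to keep the journal bounded is an explicit checkpoint during quiet periods. A minimal sketch (the database path is a placeholder; TRUNCATE mode only fully succeeds when no reader still needs an older snapshot):

```python
import sqlite3

conn = sqlite3.connect("ckpt_demo.db")
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# Fold committed WAL frames back into the main database file and
# truncate the -wal file; returns (busy, wal_frames, frames_moved).
busy, wal_frames, frames_moved = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
```

A `busy` value of 0 means the checkpoint was not blocked by a concurrent reader or writer.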
I am using SQLite in WAL mode in one of my applications. For small amounts of data it works well. However, when there is a lot of data (several hundred inserts per second, with even higher peaks) I run into issues that I haven't been able to fix by tuning the SQLite configuration.
What you may consider is using several database files, each covering a certain time span. This is applicable only when your queries are bounded by time.
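The partitioning idea above could be sketched like this (the one-file-per-day naming scheme, table, and helper are hypothetical, just to illustrate routing writes by timestamp):

```python
import sqlite3
from datetime import datetime, timezone

def db_for(ts: datetime) -> sqlite3.Connection:
    # Hypothetical scheme: one database file per day, e.g.
    # "events-2024-01-15.db". A query over a time range then only
    # needs to open the files covering that range.
    conn = sqlite3.connect(f"events-{ts:%Y-%m-%d}.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (ts REAL, payload TEXT)")
    return conn

now = datetime.now(timezone.utc)
conn = db_for(now)
conn.execute("INSERT INTO events VALUES (?, ?)",
             (now.timestamp(), "hello"))
conn.commit()
```

Old files can be archived or deleted wholesale, which also sidesteps WAL growth on data that is no longer being written.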
But I am probably getting ahead of myself. The WAL journal should help. :)