Trying to hack my way around SQLite3 concurrent writing, any better way to do this?

I use Delphi XE2 along with DISQLite v3 (which is basically a port of SQLite3). I love everything about SQLite3 except the lack of concurrent writing, especially since I rely extensively on multi-threading in this project :(

My profiler made it clear I needed to do something about it, so I decided to use this approach:

  1. Whenever I need to insert a record into the DB, instead of doing an INSERT directly, I write the query data to a file in a special folder, i.e.

    WriteToFile_Inline(SPECIAL_FOLDER_PATH + '\' + GUID, FileName + '|' + IntToStr(ID) + '|' + Hash + '|' + FloatToStr(ModifDate) + '|' + ...);

  2. I added a timer (in the main app thread) that fires every minute, parses these files, and then executes the INSERTs inside a single transaction (see the sketch after this list).

  3. Delete those temporary files at the end.
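
For illustration, the flush handler amounts to something like this (a simplified sketch: DB.ExecSQL and BuildInsertFromFile are hypothetical placeholders for the real wrapper and parsing code):

    // Sketch of steps 2 and 3: replay queued statements in one
    // transaction, delete the files only after a successful COMMIT.
    procedure TMainForm.FlushTimerTimer(Sender: TObject);
    var
      Rec: TSearchRec;
      Done: TStringList;
      I: Integer;
    begin
      Done := TStringList.Create; // files successfully replayed
      try
        if FindFirst(SPECIAL_FOLDER_PATH + '\*', faAnyFile, Rec) = 0 then
        try
          DB.ExecSQL('BEGIN'); // one transaction for the whole batch
          try
            repeat
              if (Rec.Attr and faDirectory) = 0 then
              begin
                // parse the '|'-separated fields and build the INSERT
                DB.ExecSQL(BuildInsertFromFile(SPECIAL_FOLDER_PATH + '\' + Rec.Name));
                Done.Add(SPECIAL_FOLDER_PATH + '\' + Rec.Name);
              end;
            until FindNext(Rec) <> 0;
            DB.ExecSQL('COMMIT');
          except
            DB.ExecSQL('ROLLBACK'); // keep the files; they will be retried
            raise;
          end;
        finally
          FindClose(Rec);
        end;
        for I := 0 to Done.Count - 1 do
          DeleteFile(Done[I]); // delete only after the COMMIT succeeded
      finally
        Done.Free;
      end;
    end;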

The result is roughly a 500% performance gain. Plus, the technique is ACID: after a power failure I can always scan SPECIAL_FOLDER_PATH and execute the INSERTs I find there.

Despite the good results, I'm not very happy with the method (hackish, to say the least). I keep thinking that a thread-safe, ACID, generics-like list with fast lookup access would be much cleaner (and possibly faster?)

So my question is: do you know anything like that for Delphi XE2?


PS. I suspect many of you reading the code above will be in shock and start insulting me at this point! Please be my guest, but if you know a better (i.e. faster) ACID approach, please share your thoughts!

asked Jan 05 '13 by TheDude

3 Answers

Your idea of sending the inserts to a queue, then regrouping them and executing them via prepared statements, is very good. Whether you use a timer in the main thread or a separate thread is up to you. Either way, it avoids any locking.

Do not forget to wrap the inserts in a transaction, committing every 100 or 1,000 inserts, for instance.
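
A minimal sketch of that pattern, assuming DISQLite3 mirrors the standard SQLite3 C API (sqlite3_exec, sqlite3_prepare_v2, ...); the handle type names, the files table, and the Names list are assumptions to adapt to your code:

    // One prepared INSERT reused for the whole queue, committed in chunks.
    procedure InsertBatch(DB: sqlite3_ptr; const Names: TStrings);
    const
      COMMIT_EVERY = 1000;
    var
      Stmt: sqlite3_stmt_ptr;
      I: Integer;
    begin
      sqlite3_exec(DB, 'BEGIN', nil, nil, nil);
      sqlite3_prepare_v2(DB, 'INSERT INTO files(name) VALUES(?)', -1, @Stmt, nil);
      try
        for I := 0 to Names.Count - 1 do
        begin
          sqlite3_bind_text(Stmt, 1, PAnsiChar(AnsiString(Names[I])), -1, nil);
          sqlite3_step(Stmt);   // executes this INSERT
          sqlite3_reset(Stmt);  // makes the prepared statement reusable
          if (I + 1) mod COMMIT_EVERY = 0 then
          begin
            sqlite3_exec(DB, 'COMMIT', nil, nil, nil); // bound the journal size
            sqlite3_exec(DB, 'BEGIN', nil, nil, nil);
          end;
        end;
        sqlite3_exec(DB, 'COMMIT', nil, nil, nil);     // commit the remainder
      finally
        sqlite3_finalize(Stmt);
      end;
    end;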

About high performance using SQLite3, see e.g. this blog article (and graphic below):

[Graphic: SQLite3 insertion speed comparison, from the blog article linked above]

In this graphic, best performance (file off) comes from:

  • PRAGMA synchronous = OFF
  • Using prepared statements
  • Inside a transaction
  • In WAL mode (especially under concurrent access)

You may also change the page size or the journal size, but the settings above matter most. See https://stackoverflow.com/search?q=sqlite3+performance
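
For example, these can be applied once per connection, right after opening the database (a sketch; the PRAGMA names are standard SQLite3, and the exec call assumes a C-style API):

    // WAL journal + relaxed fsync: the fastest combination in the graphic.
    // synchronous=OFF means an OS or power failure may lose the most
    // recent transactions (the database file itself stays consistent).
    sqlite3_exec(DB, 'PRAGMA journal_mode=WAL', nil, nil, nil);
    sqlite3_exec(DB, 'PRAGMA synchronous=OFF', nil, nil, nil);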

If you do not want to use a background thread, ensure WAL is ON, prepare your statements, use batches, and regroup your processing so that the SQLite3 lock is released as soon as possible.

The best performance will be achieved by adding a Client-Server layer, just as we did for mORMot.

answered by Arnaud Bouchez


With those files you have organized an asynchronous job queue with persistence. It lets you avoid one-by-one inserts and use a batch (group-of-records) approach instead. Comparing one-by-one and batch:

  • the first (probably) works in auto-commit mode for each record; the second wraps a batch into a single transaction, which gives the greatest performance gain.
  • the first (probably) prepares an INSERT command each time a record is inserted; the second prepares it once per batch, which gives the second-largest gain.

I don't think SQLite concurrency is the problem in your case (at least not the main issue), because in SQLite a single insert is comparably fast, and concurrency-related performance issues appear only under high workload. You would probably get similar results with other DBMSs, such as Oracle.

To improve your batch approach, consider the following:

  • set journal_mode to WAL and disable shared-cache mode;
  • use a background thread to process your queue. Instead of a fixed time interval (1 min), check SPECIAL_FOLDER_PATH more often, and start processing once the queue holds more than X KB of data. Alternatively, keep a count of queued records and use an event to notify the thread that processing should start;
  • use a multi-record prepared INSERT instead of a single-record INSERT. You can build an INSERT for 100 records and process your queue data in 100-record chunks (see the sketch after this list);
  • write/read binary field values instead of text values;
  • use a set of files with preallocated size;
  • etc.
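
A sketch of building such a multi-row statement (multi-row VALUES requires SQLite 3.7.11 or later; the table and column names are illustrative). Prepare it once, bind parameters 1..2*RowCount per chunk, and use a smaller statement for the final partial chunk:

    // Builds 'INSERT INTO files(name, hash) VALUES (?,?),(?,?),...'
    // for RowCount rows; bind 2*RowCount parameters before stepping it.
    function BuildMultiRowInsert(RowCount: Integer): string;
    var
      I: Integer;
    begin
      Result := 'INSERT INTO files(name, hash) VALUES ';
      for I := 1 to RowCount do
      begin
        Result := Result + '(?,?)';
        if I < RowCount then
          Result := Result + ',';
      end;
    end;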
answered by da-soft


sqlite3_busy_timeout is pretty inefficient because it doesn't return immediately when the table it's waiting on is unlocked.

I would try creating a critical section (TCriticalSection?) to protect each table. If you enter the critical section before inserting a row and exit it immediately thereafter, you will create better table locks than SQLite provides.
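
For example (a sketch: FDBLock would be created once at startup and shared by all writer threads, with one section per table if your access patterns warrant it; DB.ExecSQL stands in for your actual insert code):

    uses
      SyncObjs;

    var
      FDBLock: TCriticalSection; // FDBLock := TCriticalSection.Create; at startup

    procedure InsertRowLocked(const SQL: string);
    begin
      FDBLock.Enter;   // blocks the thread instead of spinning on SQLITE_BUSY
      try
        DB.ExecSQL(SQL);
      finally
        FDBLock.Leave; // release immediately after the insert
      end;
    end;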

Without knowing your access patterns, though, it's hard to say if this will be faster than batching up a minute's worth of inserts into single transactions.

answered by Gabe