(Using Delphi 2010 + latest SQLite in WAL mode)
I'm using DISQLite (a Delphi port of SQLite) in my multi-threaded client application (yet to be released, so I can still change the DB engine if I really have to).
My profiler clearly says that was a poor decision: I tracked the slowdown to 2-3 very simple SQL statements that fly when executed in a single-threaded app, but crawl under thread locking/waiting (SQLite really doesn't perform well with multiple threads trying to write simultaneously).
I did my best to optimize my code and avoid bottlenecks, but after several weeks of hard work I now wonder whether it wouldn't be easier to dump SQLite and choose a different DB engine.
My requirements are:
I'm only using basic INSERT/UPDATE/DELETE commands with indexes, nothing fancy, so my SQL requirements are relatively modest (I don't need joins or other "more advanced" SQL features).
I'm also open to NoSQL solutions, as long as they support the requirements mentioned above.
My research led me to Berkeley DB, which is, if I understood correctly, a modified version of SQLite with concurrent-write support, but the problem is that it doesn't really support Delphi.
I also read about Kyoto Cabinet, but again, no Delphi support :(
Any suggestion would be more than welcome,
Thanks!
What is your application speed if:
Then you can try our SQLite3 static binding, which was compiled without the thread mutex:
#define SQLITE_THREADSAFE 2
// multi-thread safety is assumed to be handled by the caller - in our framework,
// only one thread uses a given database connection at a time, but there can be
// multiple database connections open at the same time
// (the previous value, 0, could be unsafe)
#define SQLITE_OMIT_SHARED_CACHE 1
// no need for a shared cache in this threadsafe calling model
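The calling model those build flags assume - one connection used by one thread at a time, serialized by the caller - can be sketched in any language. Here is a minimal illustration using Python's stdlib `sqlite3` module (not the Delphi binding; the `SerializedDb` class and its table are made up for the example):

```python
import sqlite3
import threading

class SerializedDb:
    """One shared connection; the caller-side lock guarantees only one
    thread touches it at a time, as SQLITE_THREADSAFE=2 expects."""

    def __init__(self, path):
        # check_same_thread=False lets several threads share the handle;
        # the lock below provides the actual serialization.
        self.conn = sqlite3.connect(path, check_same_thread=False)
        self.lock = threading.Lock()

    def execute(self, sql, params=()):
        with self.lock:
            cur = self.conn.execute(sql, params)
            self.conn.commit()
            return cur.fetchall()

db = SerializedDb(":memory:")
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
db.execute("INSERT INTO t (v) VALUES (?)", ("hello",))
print(db.execute("SELECT v FROM t"))  # [('hello',)]
```

The point is that thread safety lives in the application (the lock), so the engine itself can skip its internal mutexes.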
We use such a model in our mORMot ORM framework, associated with four levels of cache. The resulting performance is not bad at all: it scales well under multi-threaded access, even with a global critical section. Of course, SQLite3 was not designed to scale as well as Oracle! But I've used SQLite in real applications with a lot of clients. You may also consider Firebird, which has a more complex (and tuned) architecture for client-server work.
To make writing faster, you can group your writes into a transaction; it will then be much faster. This is what I use to speed up writing, and you can extend the concept to multiple clients: on the server side, you group writes into a shared transaction, which is committed after a timeout period (e.g. one second).
SQLite3 is very fast for such batched inserts (even more so with a prepared INSERT statement and bound parameters), but slow for individual inserts, because it has to lock the whole file through the low-level locking API, which is slow. To keep it ACID, ensure the commit is always processed. In fact, other DB engines achieve good concurrent write speed with a similar process hidden in the background. SQLite3's default writing method has to behave this way to ensure safe access to the same file from multiple processes; but in your client-server application, you can rely on the fact that you'll be the only one accessing the SQLite3 database file, so it will be just as safe.
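The batching idea above can be demonstrated with the stdlib `sqlite3` module in Python (used here only to illustrate the engine-level behavior; the `log` table is made up): one prepared INSERT reused for every row, all inside a single transaction, so the journal is locked and synced once instead of once per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")

rows = [("event %d" % i,) for i in range(1000)]

# One prepared INSERT with bound parameters, all rows in one
# transaction: commit (and its fsync on a real file) happens once.
with conn:  # opens a transaction, commits on success
    conn.executemany("INSERT INTO log (msg) VALUES (?)", rows)

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 1000
```

Committing each row individually would instead pay the locking and sync cost a thousand times, which is exactly the per-statement overhead described above.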
Would something like the Embedded version of Firebird DB be of any help?
FirebirdSQL.org Downloads Page
I've used this with success in the past.
Just split the tables that could be written concurrently into separate SQLite database files and attach them all to your main connection.
FWIW, I finally decided to stick with DISQLite along with this "ugly", hackish solution:
Made some (not-so-minor) changes to minimize writing to the DB inside threads as much as possible (two DB inserts are required in each thread).
When I absolutely had to write something to the DB while working inside a thread, I took the SQL query parameters and wrote them to a file in a special folder (writing to files is very fast), e.g.
C:\my-project\pending-sql\insert_SOME-GUID.txt
Each file would look like this:
Param1|Param2|Param3|Param4|
Once I was done with the threads (or if my app had crashed), I called a routine that scanned this folder, extracted the SQL parameters and ran them using prepared statements (wrapped inside a transaction).
Any file containing fewer than, say, 4 parameters is considered corrupt and skipped.
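The queue-then-replay routine described above can be sketched as follows, using Python's stdlib `sqlite3` purely for illustration (the `pending-sql` folder stands in for the real path, and the `item` table and its four columns are made up):

```python
import glob
import os
import sqlite3
import uuid

PENDING = "pending-sql"  # stand-in for the real pending-SQL folder
os.makedirs(PENDING, exist_ok=True)

def queue_insert(params):
    """Inside a worker thread: dump the parameters to a uniquely named
    file instead of touching the DB (no contention on the DB lock)."""
    path = os.path.join(PENDING, "insert_%s.txt" % uuid.uuid4())
    with open(path, "w") as f:
        f.write("|".join(params) + "|")

def flush_pending(conn):
    """Later, outside the threads: replay every queued insert with one
    prepared statement inside a single transaction."""
    with conn:  # one transaction around the whole replay
        for path in glob.glob(os.path.join(PENDING, "insert_*.txt")):
            params = open(path).read().split("|")[:-1]
            if len(params) < 4:   # fewer than 4 fields: treat as corrupt
                continue
            conn.execute(
                "INSERT INTO item (a, b, c, d) VALUES (?, ?, ?, ?)",
                params)
            os.remove(path)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (a TEXT, b TEXT, c TEXT, d TEXT)")
queue_insert(["Param1", "Param2", "Param3", "Param4"])
flush_pending(conn)
print(conn.execute("SELECT COUNT(*) FROM item").fetchone()[0])  # 1
```

The GUID in the file name keeps concurrent threads from clobbering each other's files, and the corrupt-file check skips anything a crash left half-written.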
This is one heck of a hackish, ugly algorithm (shame on me!), but it works, it's fast, it's (sort of) ACID, and I didn't have to spend months learning another DB engine that may (or may not) have been suitable.
I just wanted to thank everyone for their help; time pressure makes it impossible for me to switch to another DB engine, at least for this project.