I am working with a larger-than-average SQLite database (for use on both Windows and Linux) and am looking to maximize the performance I get out of it. The database will be installed on commodity hardware along with an SQLite GUI. The users I am delivering it to are SQL-savvy but are unlikely to undertake their own optimizations (creating indexes, setting PRAGMAs, etc.), so I am keen to get as much out-of-the-box performance as possible (to ensure maximum usage of the data).
One issue is that Windows seems to throttle the execution of queries much more than Linux does; another is that I am less familiar with SQLite's approach to indexing than with that of other databases such as Postgres.
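To illustrate the kind of baked-in tuning I mean, here is a rough sketch of PRAGMAs I could apply when the database file is built. The PRAGMA names are real, but the values are placeholders, and some of these only affect the current connection rather than the file itself:

```sql
-- Settings that persist in the database file itself
PRAGMA page_size = 4096;     -- takes effect when the file is created, or on the next VACUUM
VACUUM;                      -- rebuild the file so the new page size applies
PRAGMA journal_mode = WAL;   -- persistent across connections; needs SQLite 3.7.0 or later

-- Settings that are per-connection only, so a GUI opening the file
-- later will NOT inherit them unless it sets them itself
PRAGMA cache_size = -64000;  -- roughly 64 MB of page cache (negative value = KiB)
PRAGMA temp_store = MEMORY;  -- keep temporary tables and indexes in RAM
PRAGMA synchronous = NORMAL; -- fewer syncs; reasonable for a read-mostly database
```

The distinction matters for my situation: only the persistent settings reach users who simply open the file in their own GUI.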
Have you read the SQLite Optimization FAQ (a few years old, but still seems useful)?
I don't think 1 GB is particularly large, even for SQLite. It can certainly handle much larger databases stably.
An update: we've had the most success with more intelligent ordering of data at import time, aggressive indexing (which more than doubled the size of the db file in the process), tuning Windows settings to make XP behave more like a server than a desktop, changing the order of joins (you can't rely solely on the optimizer), and measuring progress with a performance test suite (just a harness for running queries and taking timings). A sketch of the SQL side of that is below.
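Roughly, the SQL side looked like the following. The table and column names are just placeholders, and the right indexes obviously depend on your queries, but the pattern is what mattered:

```sql
-- 1. Order data at import time so rows that are queried together are
--    stored together (rowid order follows insertion order).
INSERT INTO orders (customer_id, order_date, amount)
SELECT customer_id, order_date, amount
FROM   staging_orders
ORDER  BY customer_id, order_date;

-- 2. Aggressive indexing: cover the common lookup and join columns,
--    then gather statistics so the query planner can use them.
CREATE INDEX idx_orders_customer ON orders (customer_id, order_date);
CREATE INDEX idx_orders_date     ON orders (order_date);
ANALYZE;

-- 3. Forcing join order: SQLite treats CROSS JOIN as a hint that the
--    left table must be the outer loop of the join, so you can pin an
--    order the optimizer would not pick on its own.
SELECT c.name, o.amount
FROM   customers AS c
CROSS JOIN orders AS o ON o.customer_id = c.id
WHERE  c.region = 'EU';
```

For the test suite, something as simple as running a fixed set of representative queries through the sqlite3 shell with `.timer on` and recording the timings before and after each change is enough to see whether you are actually making progress.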
The experience has been fun and the end users are happy and are able to work with the data.