I know it is not recommended to use SQLite over a network drive, for two reasons: performance, and data corruption due to bad implementations of network file systems.
Still, I'd like to explore the possibility of using SQLite to support a multi-user application with a central database. We already support SQLite on single-user machines and several database servers (Oracle, PostgreSQL) for multi-user setups. Using SQLite for multi-user setups would be convenient because it allows lightweight installations on existing IT infrastructure with Windows shared drives (SMB).
We don't care too much about performance, so we are ready to pay the price needed to avoid corruption. Quoting http://www.sqlite.org/atomiccommit.html, section 9.1:
But if you must use a network filesystem to store SQLite database files, consider using a secondary locking mechanism to prevent simultaneous writes to the same database even if the native filesystem locking mechanism malfunctions.
I'm looking for hints and advice on implementing such a mechanism, or, more generally, for tips on avoiding problems with SQLite over a Windows shared drive.
For instance, I'm considering the following brain-dead approach to locking: before any (non-read-only) SQLite query, try to create an empty file in the same shared folder and keep it open until the end of the query; normally, any other Windows process trying to create a file with the same name while it is still open would block or fail. Would that work? Anything better?
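For what it's worth, here is a minimal Python sketch of that idea, with one variation: instead of relying on Windows share modes to block concurrent opens, it relies on the atomicity of exclusive file creation (O_CREAT | O_EXCL) and deletes the lock file on release. The UNC path is made up; adjust it to your share. Whether exclusive creation is truly atomic over SMB depends on the client and server implementations, and a crashed writer will leave a stale lock file behind, so treat this only as a starting point, not a guarantee.

    import os
    import time

    # Hypothetical path: the lock file lives next to the database on the share.
    LOCK_PATH = r"\\server\share\app\mydb.sqlite.lock"

    def acquire_lock(timeout=30.0, retry_delay=0.2):
        """Create the lock file exclusively, retrying until timeout expires."""
        deadline = time.monotonic() + timeout
        while True:
            try:
                # O_CREAT | O_EXCL fails if the file already exists, which is
                # the atomicity this scheme depends on.
                return os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_RDWR)
            except FileExistsError:
                if time.monotonic() >= deadline:
                    raise TimeoutError("could not acquire the write lock")
                time.sleep(retry_delay)

    def release_lock(fd):
        # Closing and deleting the file lets the next writer in.
        os.close(fd)
        os.remove(LOCK_PATH)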
Another suggestion would be to write an API that sits over the SQLite file and does the locking for you.
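To make that concrete, here is a hedged sketch of what such a wrapper could look like in Python, reusing the acquire_lock / release_lock helpers from the sketch above; DB_PATH and the example UPDATE in the usage comment are made up. The point is that every writer in the application goes through this single entry point, so the secondary lock is never bypassed.

    import sqlite3
    from contextlib import contextmanager

    # Hypothetical UNC path to the shared database file.
    DB_PATH = r"\\server\share\app\mydb.sqlite"

    @contextmanager
    def write_transaction():
        """Acquire the secondary lock, then run one write transaction."""
        fd = acquire_lock()
        try:
            conn = sqlite3.connect(DB_PATH, timeout=30)
            try:
                with conn:  # commits on success, rolls back on exception
                    yield conn
            finally:
                conn.close()
        finally:
            release_lock(fd)

    # Usage: all writes go through the wrapper; reads can use plain connections.
    # with write_transaction() as conn:
    #     conn.execute("UPDATE items SET qty = qty - 1 WHERE id = ?", (42,))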