I'm inserting around 220 GB of data into a SQLite table,
and I noticed it performs a lot of disk I/O, both reads and writes, but doesn't use the machine's memory in any significant way, even though there is a lot of free memory, and I don't commit too often.
I think disk I/O is my bottleneck, not CPU or memory. How can I tell SQLite to use more memory, or insert in bulk, so it runs faster?
Many applications use SQLite as a cache of relevant content from an enterprise RDBMS. This reduces latency, since most queries now occur against the local cache and avoid a network round-trip. It also reduces the load on the network and on the central database server.
SQLite provides an in-memory page cache, which you size according to the maximum number of database pages that you want to hold in memory at any given time. Berkeley DB also provides an in-memory cache that performs the same function as SQLite's.
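The page cache mentioned above is sized with the `cache_size` pragma. A minimal sketch (the size here is an illustrative assumption, not a recommendation):

```python
import sqlite3

# Enlarge SQLite's page cache before a bulk load.
# An in-memory database is used here just to keep the example self-contained.
conn = sqlite3.connect(":memory:")

# A negative cache_size means "use this many KiB": -2000000 is roughly 2 GiB.
conn.execute("PRAGMA cache_size = -2000000")

# Read the setting back to confirm it took effect.
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # -2000000
```

A positive value instead means "this many pages", so the amount of memory then depends on `page_size`.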
SQLite is very fast, and you only require one I/O action (on the commit). Redis does significantly more I/O since it works over the network. A more apples-to-apples comparison would involve a relational database accessed over a network, like MySQL or PostgreSQL.
With Actian Zen, developers and product managers get all the advantages of SQLite but in a powerful, secure, and scalable engine that can run serverless or as a client-server. Actian Zen is orders of magnitude faster than SQLite.
Review all the options in http://www.sqlite.org/pragma.html. You can tune many performance-related aspects of SQLite in your application.
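As a sketch of what that tuning can look like for a bulk load, the example below combines a few pragmas from that page with batched inserts inside a single transaction. The table name, row count, and pragma values are illustrative assumptions; `synchronous = OFF` in particular trades crash safety for speed.

```python
import sqlite3

# An in-memory database keeps the example self-contained;
# a real 220 GB load would connect to a file path instead.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA journal_mode = MEMORY")  # keep the rollback journal off disk
conn.execute("PRAGMA synchronous = OFF")      # skip fsync; risks corruption on a crash
conn.execute("PRAGMA cache_size = -1000000")  # ~1 GiB page cache

conn.execute("CREATE TABLE data (id INTEGER PRIMARY KEY, value TEXT)")

# One transaction around the whole batch means one commit and one sync,
# instead of an implicit transaction (and sync) per INSERT.
rows = ((i, f"value-{i}") for i in range(100_000))
with conn:
    conn.executemany("INSERT INTO data VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM data").fetchone()[0]
print(count)  # 100000
```

Wrapping many inserts in one explicit transaction is usually the single biggest win, since autocommit mode forces a journal sync per statement.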
All that I/O activity is there to protect the integrity of your data. SQLite is very safe by default.
Your filesystem also matters for performance. Not all filesystems play well with fsync and SQLite's default internal journaling configuration.
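The fsync behaviour referred to above is controlled by the `journal_mode` and `synchronous` pragmas. A hedged sketch of relaxing them (these settings weaken durability guarantees, so treat them as assumptions to adapt, not production advice):

```python
import os
import sqlite3
import tempfile

# WAL mode requires a file-backed database, so use a temporary file.
path = os.path.join(tempfile.mkdtemp(), "bulk.db")
conn = sqlite3.connect(path)

# WAL batches writes and typically triggers far fewer fsyncs than the
# default rollback journal; the pragma returns the mode actually set.
mode = conn.execute("PRAGMA journal_mode = WAL").fetchone()[0]

# In WAL mode, NORMAL syncs only at checkpoints rather than every commit.
conn.execute("PRAGMA synchronous = NORMAL")
print(mode)  # wal
```

For a one-off import where the source data can be reloaded after a crash, some people go further with `synchronous = OFF`, then restore safe settings once the load finishes.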