I have a database with ~10k entries (~30 MB, no attachments) in my CouchDB.
Using PouchDB in the browser, replicating from CouchDB takes a really long time to complete...
What surprises me is the number of requests my CouchDB receives during replication (thousands! I guess as many as there are documents). Is that normal?
Is there a way to "bulk" those requests and generally speed up the replication process?
Thank you.
CouchDB and PouchDB are built around the idea of syncing your data. Not only live sync, but also losing the connection, continuing to access and change your data, and syncing again once you're back online. With PouchDB in the client's browser and CouchDB on the backend, your web app can become offline-first capable.
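As an illustration, a minimal live-sync setup looks roughly like this (the database name and remote URL are assumptions, not taken from the question):

    var localDB = new PouchDB('mydb');
    var remoteDB = new PouchDB('http://localhost:5984/mydb');

    // Two-way, continuous sync that retries automatically after a lost connection
    localDB.sync(remoteDB, { live: true, retry: true })
      .on('change', function (info) { console.log('synced a batch', info); })
      .on('error', function (err) { console.error('sync error', err); });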
Replication procedure: during replication, CouchDB compares the source and the destination database to determine which documents differ between them. It does so by following the changes feed on the source and comparing the documents to the destination.
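For example, the changes feed that replication follows can be read directly; a sketch, assuming a PouchDB instance named db:

    // Read the first few entries of the changes feed: each entry lists a
    // document id and the revisions that changed
    db.changes({ since: 0, limit: 5 })
      .then(function (result) {
        result.results.forEach(function (change) {
          console.log(change.seq, change.id, change.changes);
        });
      });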
PouchDB is an open-source, NoSQL, in-browser database. It is designed after CouchDB, the NoSQL database that powers npm.
PouchDB works offline as well as online with the same efficiency. It works offline by storing the data locally and synchronizing it to the server (CouchDB) when online. In the browser it stores data locally using IndexedDB or WebSQL.
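A rough sketch of that local-first pattern (the database name, document, and remote URL below are illustrative):

    var localDB = new PouchDB('mydb');
    console.log(localDB.adapter); // e.g. 'idb' when IndexedDB is in use

    // Write locally first, then push to CouchDB once a connection is available
    localDB.put({ _id: 'doc1', note: 'written while offline' })
      .then(function () {
        return localDB.replicate.to('http://localhost:5984/mydb');
      })
      .then(function (result) {
        console.log('pushed', result.docs_written, 'docs');
      });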
I assume you're using the PouchDB.replicate function. In that case, try modifying the batch_size option:

PouchDB.replicate('mydb', 'http://localhost:5984/mydb', {batch_size: large_val})

where large_val is higher than the default of 100. The higher the value, the faster the replication should go, but the more memory it will use, so be careful.
See the API reference.
Edit: Also note the batches_limit option, which defaults to 10. This is how many requests may run in parallel at any time, so the number of documents kept in memory equals batch_size * batches_limit.
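Putting both options together (the values below are illustrative, not tuned recommendations):

    PouchDB.replicate('mydb', 'http://localhost:5984/mydb', {
      batch_size: 1000,   // documents fetched per request (default 100)
      batches_limit: 10   // batches in flight at a time (default 10)
    })
      .on('complete', function (info) { console.log('replication finished', info); })
      .on('error', function (err) { console.error('replication failed', err); });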