My company is using CouchDB, and I'm going to have to interact with it soon, so I'm getting a crash course in it. As I was reading through various tutorials and examples, I came across one that made me wonder: do many design documents bog down CouchDB?
The specific example I read, which mirrored my own use-case, is one where the middle tier creates a new design document for every customer, limiting all queries and the associated generated b-trees to that customer.
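For concreteness, my understanding is that each of those per-customer design docs would look roughly like the sketch below, pushed over CouchDB's HTTP API with Python's requests library (the db name, customer id, and view are made-up examples, not from the tutorial):

```python
import requests

COUCH_DB_URL = "http://localhost:5984/appdata"  # hypothetical single shared db
customer_id = "acme"                            # hypothetical customer

# The map function hard-codes the customer id, so this view's b-tree
# only ever contains that one customer's documents.
map_fn = (
    "function (doc) {"
    "  if (doc.type === 'order' && doc.customer_id === '" + customer_id + "') {"
    "    emit(doc.date, null);"
    "  }"
    "}"
)

design_doc = {
    "_id": "_design/customer_" + customer_id,
    "views": {"orders_by_date": {"map": map_fn}},
}

# One of these gets PUT into the shared db for every new customer.
resp = requests.put(COUCH_DB_URL + "/" + design_doc["_id"], json=design_doc)
resp.raise_for_status()
```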
But doesn't this mean that, in a best-case scenario (from a business point of view), you'll have thousands of design documents? It occurs to me that since every single one of those design documents has to be run against every insert, if only to emit nothing, that would end up being a heck of a strain on the server.
Am I missing something essential about the design of CouchDB that makes this a non-issue? Or is there a smarter way to handle this?
I probably would not recommend this approach. Let's consider some cases:
You would create a new design doc and add it to your db. When the first view from that design doc is requested, CouchDB will run through all the docs in the db to build its index. So every new customer triggers a full scan of the database (see the sketch after these cases).
Every subsequent document change will be run through the view functions of every design doc when their indexes are updated.
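To make the first case concrete: the index for a brand-new design doc is built lazily, so the full scan happens on the first query against its view. A sketch, reusing the same hypothetical names and URL as in the question:

```python
import requests

COUCH_DB_URL = "http://localhost:5984/appdata"  # same hypothetical db as in the question

# This first request folds every document in the db through the customer's
# map function before CouchDB can return a single row.
resp = requests.get(
    COUCH_DB_URL + "/_design/customer_acme/_view/orders_by_date",
    params={"limit": 10},
)
resp.raise_for_status()
print(resp.json()["rows"])
```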
Instead, create a db for each customer, with one design doc in each db, and have a master db for aggregation that all the customer dbs replicate to.
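Very roughly, provisioning a customer could then look like the sketch below, again over the HTTP API with Python's requests. The server URL, db names, and the assumption that a "master" db already exists are all hypothetical:

```python
import requests

COUCH_URL = "http://localhost:5984"  # hypothetical CouchDB server

# A single design doc shared by every customer db. No customer filter is
# needed in the map function, because the whole db belongs to one customer.
SHARED_DESIGN_DOC = {
    "_id": "_design/app",
    "views": {
        "orders_by_date": {
            "map": "function (doc) {"
                   "  if (doc.type === 'order') { emit(doc.date, null); }"
                   "}"
        }
    },
}

def provision_customer(customer_id):
    db = "customer_" + customer_id

    # Create the per-customer database (PUT /{db}).
    requests.put(COUCH_URL + "/" + db).raise_for_status()

    # Install the one shared design doc into it.
    requests.put(
        COUCH_URL + "/" + db + "/_design/app", json=SHARED_DESIGN_DOC
    ).raise_for_status()

    # Continuously replicate the customer db into the master aggregation db
    # (POST /_replicate); the "master" db is assumed to exist already.
    requests.post(
        COUCH_URL + "/_replicate",
        json={"source": db, "target": "master", "continuous": True},
    ).raise_for_status()

provision_customer("acme")
```

The trade-off is one database (and one set of index files) per customer, but each insert only has to be folded through the single design doc in its own db rather than through thousands of them.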