I did some evaluations on CouchDB recently and found that memory consumption is quite high, both for view construction (map & reduce) and when importing a larger JSON document into CouchDB. I evaluated the view-construction behavior on an Ubuntu system (4 cores, Intel® Xeon® CPU E3-1240 v5 @ 3.50GHz). Here are the results:
It seems that memory consumption is hundreds of times the size of the original JSON dataset. With a 1 GB dataset, CouchDB would run out of memory. Does anyone know why the memory consumption is so high? Many thanks!
I don't know why the memory usage is so high, but I do know it is consistent behavior in CouchDB, and you can't really get around it as long as your documents are large. I eventually split out just the data I wanted to build views on, and kept the full documents in a separate database for later extraction.
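A minimal sketch of that split-database approach (all field names here are hypothetical): the "view" database holds slimmed documents containing only the fields the map function indexes, plus an id pointing back at the full document stored elsewhere. CouchDB map functions normally run server-side with a built-in `emit`; the sketch below stubs `emit` locally so it is runnable as plain JavaScript.

```javascript
// Simulate CouchDB's emit() so the map function can run locally.
const rows = [];
function emit(key, value) { rows.push({ key, value }); }

// Map function over the slimmed documents: index only the small fields,
// and carry a pointer (full_doc_id) to the full document in the other
// database for later extraction.
function mapFn(doc) {
  emit(doc.category, { total: doc.total, full_doc_id: doc.full_doc_id });
}

// Example slimmed documents (hypothetical shape):
const docs = [
  { _id: "s1", category: "books", total: 12, full_doc_id: "full-s1" },
  { _id: "s2", category: "toys",  total: 7,  full_doc_id: "full-s2" },
];
docs.forEach(mapFn);

console.log(rows.length);   // 2
console.log(rows[0].key);   // "books"
```

The view index then only ever sees the small documents, so its memory footprint is bounded by the indexed fields rather than by the full document bodies.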
I know it's late to answer, but I'll leave this here for others to benefit from. It is actually about response caching: CouchDB caches responses in order to return results faster. You can address the issue by adjusting the caching limits.
See the configuration reference: https://docs.couchdb.org/en/latest/config/couchdb.html
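As a sketch of how to inspect and change those settings (the exact keys and their effect depend on your CouchDB version, so check the linked reference first), CouchDB exposes its configuration at runtime through the `_config` HTTP API; adjust host and credentials to your setup:

```shell
# Read the current [couchdb] configuration section (CouchDB 3.x
# node-local API).
curl -s http://admin:password@127.0.0.1:5984/_node/_local/_config/couchdb

# Change a value at runtime, e.g. lowering max_document_size
# (a real key in the [couchdb] section; the value is in bytes).
curl -s -X PUT \
  http://admin:password@127.0.0.1:5984/_node/_local/_config/couchdb/max_document_size \
  -d '"1048576"'
```

Runtime changes made this way take effect immediately but can also be made permanent in `local.ini`.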