
Solr always uses more than 90% of physical memory

Tags:

solr

I have 300,000 documents stored in a Solr index. The Solr server had 4 GB of RAM, but Solr consumed more than 90% of physical memory. So I moved my data to a new server with 16 GB of RAM; again, Solr consumes more than 90% of memory. I don't know how to resolve this issue. I am using the default MMapDirectory and Solr version 4.2.0. Can you explain the reason for this, or suggest a solution?

asked Mar 03 '14 by user3142114

2 Answers

MMapDirectory tries to use OS memory (the OS page cache) as fully as possible; this is normal behaviour. It will try to map the entire index into memory if memory is available, and in fact this is a good thing: since that memory is otherwise idle, Solr might as well use it, and if another application on the same machine demands more, the OS will release it. This is one of the reasons Solr/Lucene queries are orders of magnitude faster: most calls to the server are served from memory (depending on how much of the index fits) rather than from disk.
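
For illustration, one way to verify this on a Linux host is to check where the memory is actually accounted (a sketch; the pgrep pattern assumes Solr 4.x started via Jetty's start.jar, and the Lucene file extensions are just examples):

    # Most of the "used" memory is usually page cache, which the kernel
    # reclaims on demand; compare the cached/buffers columns to "used".
    free -m

    # Mapped index files appear as file-backed mappings of the Solr
    # process, not as anonymous heap memory.
    pmap -x $(pgrep -f start.jar) | grep -Ei '\.(fdt|tim)|total'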

JVM memory is a different thing, and it can be controlled: only working query response objects and certain cache entries use JVM heap. So the heap size can be configured based on the number of concurrent requests and the cache entries.
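
As a concrete illustration, the heap consumers mentioned above are configured in solrconfig.xml; the cache classes below are standard in Solr 4.x, but the sizes are example values, not recommendations:

    <!-- Each cache entry lives on the JVM heap, so these sizes
         (together with in-flight response objects) drive how much
         heap Solr actually needs. -->
    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>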

answered Sep 18 '22 by Ganesh


What -Xmx value are you using when invoking the JVM? If you don't set one explicitly, the JVM picks a default based on the machine's characteristics.

Once you give Solr a maximum heap, it will potentially use all of it if it needs to; that is how the JVM works. If you want to limit it to, say, 2 GB, use -Xmx2g (or -Xmx2000m) when you invoke the JVM. Not sure how large your docs are, but 300k documents would be considered a smallish index.
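
For example, with the Solr 4.x example setup started via Jetty's start.jar, the cap can be passed directly (the exact values here are illustrative):

    # Cap the JVM heap at 2 GB. Note that the OS page cache used by
    # MMapDirectory is separate from, and unaffected by, this limit.
    java -Xms512m -Xmx2g -jar start.jar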

answered Sep 19 '22 by Persimmonium