I am constructing the MongoClient connection in the following manner:
public static synchronized MongoClient getInstance(String mongoDbUri) {
    try {
        // Standard URI format: mongodb://[dbuser:dbpassword@]host:port/dbname
        if (mongoClient == null) {
            mongoClient = new MongoClient(new MongoClientURI(mongoDbUri));
        }
    } catch (Exception e) {
        // Log the exception itself rather than e.getCause(), which may be
        // null and loses the stack trace
        log.error("Error creating mongo connection: ", e);
    }
    return mongoClient;
}
Over a period of time, as multiple transactions are run, I am seeing memory build up in the application that is not being released.
When I analysed the heap dump, I saw that memory consumption was highest in the class
com.mongodb.internal.connection.PowerOfTwoBufferPool
The mongo client is connecting to a mongos instance. The application has 3 replica sets across 3 shards and one config server to hold the metadata.
To add more detail: I have a Spring-managed bean annotated with @Component. The bean has a method annotated with @PostConstruct in which the above method is called. In the Spring class we are doing insert/update/create operations using the MongoClient.
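Since the driver's pooled buffers are only released when the MongoClient is closed, the client should be closed when the application shuts down (in a Spring bean, from a @PreDestroy method). Below is a minimal sketch of that singleton-plus-shutdown pattern; it uses a hypothetical CloseableClient stand-in for MongoClient so the example compiles without the driver jar:

```java
import java.io.Closeable;

public class MongoClientHolder {
    // Stand-in for com.mongodb.MongoClient so this sketch is self-contained
    static class CloseableClient implements Closeable {
        volatile boolean closed = false;
        @Override public void close() { closed = true; }
    }

    private static CloseableClient client;

    public static synchronized CloseableClient getInstance() {
        if (client == null) {
            client = new CloseableClient();
        }
        return client;
    }

    // In Spring, annotate the equivalent method with @PreDestroy
    public static synchronized void shutdown() {
        if (client != null) {
            client.close();   // with the real driver, this releases the pooled buffers
            client = null;
        }
    }

    public static void main(String[] args) {
        CloseableClient a = getInstance();
        CloseableClient b = getInstance();
        System.out.println(a == b);    // same instance both times
        shutdown();
        System.out.println(a.closed);  // true after shutdown
    }
}
```

With the real driver you would keep the MongoClient field and call mongoClient.close() from the @PreDestroy method; the structure is otherwise the same.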
Thanks.
The PowerOfTwoBufferPool is actually a cache, so this may look like a memory leak at first glance.
A mongodb-user group reply has details:
... this behaviour is expected since PowerOfTwoBufferPool is a cache. Hence it may look like a leak. In short, the PowerOfTwoBufferPool holds a number of pools of ByteBuffer instances, each pool containing a set of equal-sized buffers. The smallest size is 1K, and the largest is 16MB, incrementing in power-of-two sizes from 1K to 16MB. The size of each pool of equal-sized buffers is not limited, and is determined by application usage. Once a buffer is cached in a pool, it remains pooled (or in-use) until the MongoClient is closed. As a result, it's totally expected that during introspection of JVM state, the contents of the pools would show as a leak suspect, just as any cache would.
The PowerOfTwoBufferPool exists in order to reduce GC load. It's fairly well known that modern garbage collectors in the JVM treat large allocations differently from smaller ones, so if an application were to not do any pooling of large objects (like these buffers) it would have the effect of increasing GC load, because the garbage collector has to do more work collecting these large objects than it does smaller ones. The cost of this is that the driver holds on to memory that could be used by other parts of the application. In particular, it holds on to enough memory to handle the largest peak load seen so far by the application.
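The behaviour described above can be sketched in a few lines: round each requested size up to the next power of two, and keep an unbounded free list per size so buffers stay pooled once allocated. This is a simplified illustration of the idea, not the driver's actual implementation:

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class SimpleBufferPool {
    // One free list per power-of-two capacity; buffers stay pooled forever,
    // which is why a heap dump shows the pool as a leak suspect
    private final Map<Integer, Deque<ByteBuffer>> free = new HashMap<>();

    // Round up to the next power of two (e.g. 1500 -> 2048)
    static int roundUp(int size) {
        int n = Integer.highestOneBit(size);
        return n == size ? size : n << 1;
    }

    public synchronized ByteBuffer acquire(int size) {
        int cap = roundUp(size);
        Deque<ByteBuffer> q = free.get(cap);
        if (q != null && !q.isEmpty()) {
            return q.pop();              // reuse a cached buffer, no allocation
        }
        return ByteBuffer.allocateDirect(cap);
    }

    public synchronized void release(ByteBuffer buf) {
        buf.clear();
        // Unbounded: the pool grows to the largest peak load seen so far
        free.computeIfAbsent(buf.capacity(), k -> new ArrayDeque<>()).push(buf);
    }

    public static void main(String[] args) {
        SimpleBufferPool pool = new SimpleBufferPool();
        ByteBuffer a = pool.acquire(1500);   // capacity rounded up to 2048
        pool.release(a);
        ByteBuffer b = pool.acquire(2000);   // same 2048 bucket, so a is reused
        System.out.println(a == b);          // true: pooled, not reallocated
        System.out.println(b.capacity());    // 2048
    }
}
```

The key consequence is visible in the example: after the first release, later requests in the same size bucket reuse the existing buffer, so the GC never has to collect it, and the pool's footprint tracks the peak demand rather than the current one.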