The code below allocates a large amount of direct memory but does not cause java.lang.OutOfMemoryError: Direct buffer memory:
//JVM args: -Xms10m -Xmx10m -XX:MaxDirectMemorySize=10m
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class DirectMemoryOOM {
    public static void main(String[] args) throws NoSuchFieldException, IllegalAccessException {
        // The first declared field of sun.misc.Unsafe is the theUnsafe singleton.
        Field f = Unsafe.class.getDeclaredFields()[0];
        f.setAccessible(true);
        Unsafe us = (Unsafe) f.get(null);
        long size = 1024 * 1024 * 1024;
        while (true) {
            long p = us.allocateMemory(size);
            for (int i = 0; i < size; i++) {
                us.putByte(p + i, Byte.MAX_VALUE);
            }
        }
    }
}
But the following code does get java.lang.OutOfMemoryError: Direct buffer memory. I have seen the answer to Java unsafe memory allocation limit, but ByteBuffer.allocateDirect is implemented using Unsafe.allocateMemory():
//JVM args: -Xms10m -Xmx10m -XX:MaxDirectMemorySize=10m
import java.nio.ByteBuffer;

public class DirectMemoryOOM {
    public static void main(String[] args) throws NoSuchFieldException, IllegalAccessException {
        int size = 1024 * 1024;
        System.out.println(sun.misc.VM.maxDirectMemory());
        while (true) {
            ByteBuffer.allocateDirect(size);
        }
    }
}
Why does the limit fail to apply to the first one?
-XX:MaxDirectMemorySize is an Oracle HotSpot option that sets a limit on the total amount of memory that can be reserved for all direct byte buffers, i.e. java.nio (New I/O) direct buffer allocations, which are typically used for network data transfer and serialization. The usual fix for OutOfMemoryError: Direct buffer memory is to raise that limit, for example with -XX:MaxDirectMemorySize=512m; the default limit depends on your JVM version.
The direct buffer memory is native OS memory used by the JVM process but not part of the JVM heap. It is used by Java NIO to quickly write data to the network or disk, with no need to copy between the JVM heap and native memory.
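If you want to observe how much direct memory a process is actually holding, the platform exposes it through BufferPoolMXBean. A minimal sketch (the class name and the 1 MB allocation here are just illustrative):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

public class DirectPoolStats {
    public static void main(String[] args) {
        // Allocate one direct buffer so the "direct" pool has something to report.
        ByteBuffer.allocateDirect(1024 * 1024);

        // One BufferPoolMXBean per pool, typically "direct" and "mapped".
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}

Note that this pool only tracks buffers that went through the accounted path (ByteBuffer.allocateDirect); raw Unsafe.allocateMemory calls will not show up here.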
As the original answer says: Unsafe.allocateMemory() is a wrapper around os::malloc, which doesn't care about any memory limits imposed by the VM. ByteBuffer.allocateDirect() will call this method, but before that it calls Bits.reserveMemory() (in my version of Java 7: DirectByteBuffer.java:123), which checks the memory usage of the process and throws the exception you mention.
The error comes from Bits.reserveMemory, which is called before unsafe.allocateMemory(size) when calling allocateDirect.
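Paraphrasing the JDK 7 DirectByteBuffer constructor (abridged and simplified, not the verbatim source), the order of operations is roughly:

DirectByteBuffer(int cap) {
    // ...
    Bits.reserveMemory(size, cap);          // enforce -XX:MaxDirectMemorySize; may throw
                                            // java.lang.OutOfMemoryError: Direct buffer memory
    long base = 0;
    try {
        base = unsafe.allocateMemory(size); // the actual native allocation (os::malloc)
    } catch (OutOfMemoryError x) {
        Bits.unreserveMemory(size, cap);    // roll back the accounting if malloc itself fails
        throw x;
    }
    // ... wrap the allocated block, register a Cleaner to free it, etc.
}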
The reserveMemory method performs this validation:

synchronized (Bits.class) {
    if (totalCapacity + cap > maxMemory)
        throw new OutOfMemoryError("Direct buffer memory");
    reservedMemory += size;
    totalCapacity += cap;
    count++;
}
The error is thrown if the requested allocation would push the total capacity above maxMemory, which is retrieved from:

maxMemory = VM.maxDirectMemory();
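You can print the limit the check will use, as the second snippet already does. As an aside (my observation, not from the original answer): when -XX:MaxDirectMemorySize is not set, this value typically defaults to roughly the maximum heap size.

// With -XX:MaxDirectMemorySize=10m this is expected to print 10485760;
// without the flag it is expected to be close to Runtime.getRuntime().maxMemory().
System.out.println(sun.misc.VM.maxDirectMemory());
System.out.println(Runtime.getRuntime().maxMemory());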
Calling allocateMemory directly goes straight to the native method and does not validate the maximum capacity (which explains why you don't get the error in your first snippet). Enforcing that capacity is the main goal of -XX:MaxDirectMemorySize, as explained in this comment in reserveMemory:
// -XX:MaxDirectMemorySize limits the total capacity rather than the
// actual memory usage, which will differ when buffers are page
// aligned.
if (cap <= maxMemory - totalCapacity) {
    reservedMemory += size;
    totalCapacity += cap;
    count++;
    return;
}
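To make the difference concrete, here is a hedged sketch (the CheckedAllocator class and its fields are hypothetical, purely for illustration) of the kind of bookkeeping reserveMemory adds in front of the native allocation, and which a raw allocateMemory call skips entirely:

import java.util.concurrent.atomic.AtomicLong;
import sun.misc.Unsafe;
import sun.misc.VM;

// Hypothetical helper, not part of the JDK: mimics the idea behind Bits.reserveMemory
// by tracking a running total against the configured -XX:MaxDirectMemorySize limit.
public class CheckedAllocator {
    private static final long MAX_DIRECT = VM.maxDirectMemory();
    private static final AtomicLong USED = new AtomicLong();

    static long allocate(Unsafe unsafe, long size) {
        // This is exactly the kind of check Unsafe.allocateMemory never performs on its own.
        if (USED.addAndGet(size) > MAX_DIRECT) {
            USED.addAndGet(-size);
            throw new OutOfMemoryError("Direct buffer memory");
        }
        return unsafe.allocateMemory(size);
    }

    static void free(Unsafe unsafe, long address, long size) {
        unsafe.freeMemory(address);
        USED.addAndGet(-size);
    }
}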
It is worth mentioning that your first snippet is not good practice. A comment in Bits.java specifies that reserveMemory should always be called whenever direct memory is allocated:
// These methods should be called whenever direct memory is allocated or
// freed. They allow the user to control the amount of direct memory
// which a process may access. All sizes are specified in bytes.
static void reserveMemory(long size, int cap) {
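Beyond the missing accounting, memory obtained from Unsafe.allocateMemory is never freed automatically (there is no Cleaner as with allocateDirect), so each allocation should at least be paired with freeMemory. A minimal sketch of the first snippet reworked along those lines (same reflective trick as in the question, here looking the field up by name; the size is illustrative):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class ManualDirectAllocation {
    public static void main(String[] args) throws Exception {
        // Grab the theUnsafe singleton via reflection, as in the question.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long size = 1024 * 1024; // 1 MB
        long address = unsafe.allocateMemory(size); // raw native allocation, not limit-checked
        try {
            unsafe.setMemory(address, size, (byte) 0); // touch the block so it is really committed
        } finally {
            unsafe.freeMemory(address); // no GC or Cleaner will ever release this for you
        }
    }
}

In practice, though, the simplest way to stay inside the limit the flag is meant to enforce is to allocate through ByteBuffer.allocateDirect in the first place.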