I have read some answers to this question (Why I can't create an array with large size? and https://bugs.openjdk.java.net/browse/JDK-8029587) and I don't understand the following: "In the GC code we pass around the size of objects in words as an int." As far as I know, the size of a word in the JVM is 4 bytes. Accordingly, if we pass around the size of a long array of large length (for example, MAX_INT - 5) in words as an int, we should get an OutOfMemoryError with "Requested array size exceeds VM limit", because the size in words is too large for an int even without the header. So why do arrays of different types have the same limit on the maximum number of elements?
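For reference, a minimal program that triggers the error (a sketch; "Requested array size exceeds VM limit" is what 64-bit HotSpot prints, other JVMs may behave differently):

public class ArrayLimitRepro {
    public static void main(String[] args) {
        // Integer.MAX_VALUE exceeds HotSpot's per-type maximum length
        // (roughly Integer.MAX_VALUE - 2), so both allocations fail with
        // the same message before any memory is actually reserved --
        // regardless of whether an element takes 1 byte or 8 bytes.
        try {
            byte[] b = new byte[Integer.MAX_VALUE];
        } catch (OutOfMemoryError e) {
            System.out.println("byte[]: " + e.getMessage());
        }
        try {
            long[] l = new long[Integer.MAX_VALUE];
        } catch (OutOfMemoryError e) {
            System.out.println("long[]: " + e.getMessage());
        }
    }
}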
Only addressing the "why do arrays of different types have the same limit on max count of elements?" part:
Because it doesn't matter much in practical reality, but it allows the code implementing the JVM to be simpler.
When there is only one limit that is the same for all kinds of arrays, you can handle all arrays with the same code, instead of having a lot of type-specific code.
And given that people who need "large" arrays can still create them, and only those who need really, really large arrays are affected, why spend that effort?
The answer is in the JDK sources as far as I can tell (I'm looking at jdk-9); after writing this, I'm not sure whether it should be a comment instead (and whether it answers your question), but it's too long for a comment...
First, the error is thrown from hotspot/src/share/vm/oops/arrayKlass.cpp, here:
if (length > arrayOopDesc::max_array_length(T_ARRAY)) {
  report_java_out_of_memory("Requested array size exceeds VM limit");
  ....
}
Now, T_ARRAY is a value of the BasicType enumeration and looks like this:

public static final BasicType T_ARRAY = new BasicType(tArray);
// tArray is an int with value = 13
That is the first indication that, when computing the maximum size, the JDK does not care what the array will hold (T_ARRAY does not specify what element type the array holds).
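The only per-type input the size computation actually consumes is the element size in bytes. Here is a minimal Java sketch of that mapping (the numeric BasicType constants match HotSpot's globalDefinitions.hpp; treating references as 8 bytes assumes a 64-bit JVM without compressed oops):

public class ElementBytes {
    // Sketch of HotSpot's type2aelembytes(): BasicType -> element size in bytes.
    static int type2aelembytes(int basicType) {
        switch (basicType) {
            case 4:  return 1; // T_BOOLEAN
            case 5:  return 2; // T_CHAR
            case 6:  return 4; // T_FLOAT
            case 7:  return 8; // T_DOUBLE
            case 8:  return 1; // T_BYTE
            case 9:  return 2; // T_SHORT
            case 10: return 4; // T_INT
            case 11: return 8; // T_LONG
            case 12: return 8; // T_OBJECT (reference; assumed uncompressed)
            case 13: return 8; // T_ARRAY  (reference; assumed uncompressed)
            default: throw new IllegalArgumentException("wrong type");
        }
    }

    public static void main(String[] args) {
        System.out.println("T_ARRAY element bytes: " + type2aelembytes(13));
    }
}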
Now, the method that actually computes the maximum array length looks like this:
static int32_t max_array_length(BasicType type) {
  assert(type >= 0 && type < T_CONFLICT, "wrong type");
  assert(type2aelembytes(type) != 0, "wrong type");
  const size_t max_element_words_per_size_t =
    align_size_down((SIZE_MAX/HeapWordSize - header_size(type)), MinObjAlignment);
  const size_t max_elements_per_size_t =
    HeapWordSize * max_element_words_per_size_t / type2aelembytes(type);
  if ((size_t)max_jint < max_elements_per_size_t) {
    // It should be ok to return max_jint here, but parts of the code
    // (CollectedHeap, Klass::oop_oop_iterate(), and more) uses an int for
    // passing around the size (in words) of an object. So, we need to avoid
    // overflowing an int when we add the header. See CRs 4718400 and 7110613.
    return align_size_down(max_jint - header_size(type), MinObjAlignment);
  }
  return (int32_t)max_elements_per_size_t;
}
I did not dive too deeply into the code, but it is based on HeapWordSize, which is 8 bytes on a 64-bit JVM. Here is a good reference (I tried to look it up in the code itself, but there are too many references to it).
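To see why the computed limit comes out the same for different element types, here is a Java sketch of the arithmetic above (assumptions, not values read from a real VM: HeapWordSize = 8, MinObjAlignment = 1 word, an array header of 2 words; the names mirror the C++ code and are not a HotSpot API):

public class MaxArrayLengthSketch {
    static final int HEAP_WORD_SIZE = 8;     // bytes per heap word (assumed 64-bit)
    static final int MIN_OBJ_ALIGNMENT = 1;  // in words (assumed)
    static final int HEADER_WORDS = 2;       // array header size in words (assumed)

    // align x down to a multiple of alignment (unsigned arithmetic)
    static long alignSizeDown(long x, long alignment) {
        return x - Long.remainderUnsigned(x, alignment);
    }

    static int maxArrayLength(int elementBytes) {
        long sizeMax = -1L; // all bits set == SIZE_MAX as an unsigned 64-bit value
        long maxElementWordsPerSizeT = alignSizeDown(
            Long.divideUnsigned(sizeMax, HEAP_WORD_SIZE) - HEADER_WORDS,
            MIN_OBJ_ALIGNMENT);
        // unsigned multiply/divide, wrapping exactly like the C++ size_t arithmetic
        long maxElementsPerSizeT = Long.divideUnsigned(
            HEAP_WORD_SIZE * maxElementWordsPerSizeT, elementBytes);
        if (Long.compareUnsigned(Integer.MAX_VALUE, maxElementsPerSizeT) < 0) {
            // on a 64-bit machine this branch is taken for every element size,
            // and the result no longer depends on elementBytes at all
            return (int) alignSizeDown(Integer.MAX_VALUE - HEADER_WORDS,
                                       MIN_OBJ_ALIGNMENT);
        }
        return (int) maxElementsPerSizeT;
    }

    public static void main(String[] args) {
        System.out.println("byte[] max: " + maxArrayLength(1)); // 2147483645
        System.out.println("long[] max: " + maxArrayLength(8)); // 2147483645
    }
}

Because SIZE_MAX/HeapWordSize is vastly larger than max_jint on a 64-bit machine, the if branch is always taken and the returned limit is max_jint minus the header size in words. That is why byte[] and long[] end up with the same maximum length, Integer.MAX_VALUE - 2, under these assumptions.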