Suppose I have the following code:
public void process() {
    byte[] data = new byte[size];
    ... // code that uses the above data
    longProcess(); // a very long running process that does not use the data.
}
Assuming that the data is not referenced anywhere else in the program, is the JVM smart enough to allow the data to be garbage collected while the long process is still running?
If not, will adding
data = null;
before the long process allow this to happen?
This depends on the JVM. The versions of Oracle's JVM that I've tried (1.6.0_41 and 1.7.0_09) don't perform this optimization by default. However, 1.7.0_09 does perform it when aggressive optimizations are turned on.
Here is the test I've conducted:
public class Main {

    // Allocates int arrays until the heap is exhausted and returns the
    // number of successful allocations; a higher number means more heap
    // was free when the method was called.
    public static int g() {
        int n = 100000;
        int[][] arr = new int[n][];
        for (int i = 0; i < n; ++i) {
            try {
                arr[i] = new int[100000];
            } catch (OutOfMemoryError ex) {
                return i;
            }
        }
        return -1;
    }

    // The local array remains in scope while g() runs.
    public static void f1() {
        int[] arr = new int[1000000];
        System.out.println(g());
    }

    // The reference is cleared before g() runs.
    public static void f2() {
        int[] arr = new int[1000000];
        arr = null;
        System.out.println(g());
    }

    public static void main(String[] argv) {
        for (int j = 0; j < 2; ++j) {
            for (int i = 0; i < 10; ++i) {
                f1();
            }
            System.out.println("-----");
            for (int i = 0; i < 10; ++i) {
                f2();
            }
            System.out.println("-----");
        }
    }
}
Using JVM 1.7 with default settings, f1() consistently runs out of memory after 3195 iterations, whereas f2() consistently manages 3205 iterations. The 10-iteration gap matches the extra heap retained in f1(): the 1,000,000-element int array is about 4 MB, the same as the 10 additional 100,000-element arrays that f2() can allocate.
The picture changes if the code is run using Java 1.7.0_09 with -XX:+AggressiveOpts -XX:CompileThreshold=1: both versions manage 3205 iterations, indicating that HotSpot does perform this optimization in this case. Java 1.6.0_41 doesn't appear to do this.
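For reference, a run that enables these optimizations might look like this (assuming the test class above is compiled as Main):

javac Main.java
java -XX:+AggressiveOpts -XX:CompileThreshold=1 Main

Setting -XX:CompileThreshold=1 makes HotSpot JIT-compile methods after a single invocation, so the optimized code paths are in effect almost from the start of the test.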
In my testing, restricting the scope of the array has the same effect as setting the reference to null, and it should probably be the preferred choice if you feel you ought to help the JVM collect the array as soon as possible (see the sketch below).
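As a minimal sketch of that approach, the original process() method could confine the array to an inner block (longProcess() and size are placeholders carried over from the question):

public void process() {
    // An inner block limits the lifetime of the reference itself.
    {
        byte[] data = new byte[size];
        // ... code that uses data
    }
    // data is out of scope here: no local variable keeps the array
    // reachable, so it is eligible for collection during longProcess().
    longProcess();
}

Alternatively, the allocation and use of the array can be moved into a separate method, so the reference never exists in process() at all.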