Over the past 10 years of discussing Java and/or garbage collection, the only performance penalty I have not been able to defend is that garbage collection algorithms more or less break when running on a paged memory architecture and parts of the heap get paged out.
Unix systems (and especially Linux) aggressively page out memory that has not been touched for a while, and while that is good for your average leaking C application, it kills Java's performance in memory-tight situations.
I know the best practice is to keep the max heap smaller than physical memory (otherwise you will watch your application swap itself to death), but the idea, at least in the Unix world, is that the memory could be better spent on filesystem caches and the like.
My question is: are there any paging-aware garbage collection algorithms?
Java garbage collection is the process by which Java programs perform automatic memory management. Java programs compile to bytecode that can be run on a Java Virtual Machine, or JVM for short. When Java programs run on the JVM, objects are created on the heap, which is a portion of memory dedicated to the program.
In Java, memory management is the process of allocating and de-allocating objects. Java does this automatically through a garbage collector, so we are not required to implement memory-management logic in our applications.
Java objects reside in an area called the heap. The heap is created when the JVM starts up and may increase or decrease in size while the application runs. When the heap becomes full, garbage is collected: objects that are no longer used are cleared, making space for new objects.
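A minimal sketch of that lifecycle (the class name and allocation size are illustrative only): an object becomes eligible for collection as soon as nothing references it any more.

```java
public class HeapLifecycle {
    public static void main(String[] args) {
        byte[] buffer = new byte[1024 * 1024]; // allocated on the heap
        buffer = null;                         // last reference dropped: now eligible for GC
        System.gc();                           // only a hint; the JVM may collect or ignore it
    }
}
```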
The finalize() method is called by the garbage collector, which is itself a component of the JVM, not by application code. The Object class's finalize() method has an empty implementation, so a class may override it to dispose of system resources or perform other cleanup before the object is reclaimed.
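A sketch of that pattern (the class and file path are made up for illustration): finalize() runs at most once per object, at some unspecified point after the collector finds the object unreachable, so it is only suitable as a last-chance cleanup.

```java
public class TempFileHolder {
    private final java.io.File file = new java.io.File("/tmp/scratch.dat");

    @Override
    protected void finalize() throws Throwable {
        try {
            file.delete(); // last-chance cleanup of a system resource
        } finally {
            super.finalize();
        }
    }
}
```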
I'm going to contend that this is not as big an issue as you think.
To make sure that we're describing the same thing: a complete collection requires the JVM to walk the object graph to identify every reachable object; the ones left over are garbage. While doing so, it will touch every page in the application heap, which will cause every page to be faulted into memory if it's been swapped out.
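To make the cost concrete, here is a toy version of that walk (the Node type and the deque-based traversal are my own illustration, not the JVM's actual implementation): mark everything reachable from the roots, and whatever is left unmarked is garbage. It is this traversal that touches every page holding a live object.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

class Node {
    final List<Node> references = new ArrayList<>();
}

class MarkPhase {
    // Returns the set of objects reachable from the roots; everything else is garbage.
    static Set<Node> mark(List<Node> roots) {
        Set<Node> marked = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<Node> pending = new ArrayDeque<>(roots);
        while (!pending.isEmpty()) {
            Node current = pending.pop();
            if (marked.add(current)) {               // first visit: mark it...
                pending.addAll(current.references);  // ...then follow its outgoing references
            }
        }
        return marked;
    }
}
```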
I think that's a non-concern for several reasons: First, because modern JVMs use generational collectors, and most objects never make their way out of the young generations, which are almost guaranteed to be in the resident set.
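A rough illustration of the allocation pattern the young generation is designed for (the class is hypothetical; -verbose:gc is a standard HotSpot flag): the loop below creates nothing but short-lived temporaries, so run with -verbose:gc you would mostly see cheap minor collections while the bulk of the heap goes untouched.

```java
public class YoungGenChurn {
    public static void main(String[] args) {
        long checksum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            String temp = "iteration-" + i; // temporary; becomes garbage immediately
            checksum += temp.length();
        }
        System.out.println(checksum);      // keep the loop from being optimized away
    }
}
```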
Second, because the objects that move out of the young generation still tend to be accessed frequently, which again means they should be in the resident set. This is a more tenuous argument, and there are in fact lots of cases where long-lived objects won't get touched except by the GC (one reason that I don't believe in memory-limited caches).
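For illustration, the kind of cache meant there might look like the sketch below (the names are hypothetical). Soft references at least let the collector discard entries under memory pressure, but the long-lived values can still sit for ages untouched by anything except the GC.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> entries = new HashMap<>();

    void put(K key, V value) {
        entries.put(key, new SoftReference<>(value));
    }

    V get(K key) {
        SoftReference<V> ref = entries.get(key);
        return ref == null ? null : ref.get(); // null if never cached or already reclaimed
    }
}
```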
The third reason (and there may be more) is that the JVM (at least, the Sun JVM) uses a mark-sweep-compact collector, so after a GC the live objects occupy a smaller number of pages, which shrinks the resident set that actually has to stay in memory. This, incidentally, is the main driver for Swing apps to explicitly call System.gc() when they're minimized: by compacting the heap, there's less to swap in when they get maximized again.
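A sketch of that trick, assuming a plain Swing frame (the class name is made up): request a collection when the window is iconified so the compacted heap has less to page back in on restore. System.gc() is only a request, not a guarantee.

```java
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class MinimizeGcDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.addWindowListener(new WindowAdapter() {
                @Override
                public void windowIconified(WindowEvent e) {
                    System.gc(); // compact the heap while the app is minimized
                }
            });
            frame.setSize(400, 300);
            frame.setVisible(true);
        });
    }
}
```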
Also, recognize that heap fragmentation of C/C++ objects can get extreme, with young objects sprinkled amongst older ones, so the resident set has to be larger.