Background info
I recently handed in an assignment for my class on algorithms and data structures. The assignment was to implement a solution that finds the maximum subarray of randomly generated arrays. We were asked to implement both a brute-force algorithm and a recursive divide-and-conquer algorithm.
We were then asked to analyze the running times to see at which problem size the brute-force algorithm becomes faster than the recursive solution. This was done by measuring the running time (using System.nanoTime()) of both algorithms for increasing problem sizes.
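For concreteness, here is a minimal sketch of the two approaches being compared. This is an illustration written for this post, not the code from the gist linked below; the class name, array size, and value range are all made up:

    import java.util.Random;

    public class MaxSubarrayDemo {

        // Brute force: try every (low, high) pair, O(n^2).
        static int bruteForce(int[] a) {
            int best = Integer.MIN_VALUE;
            for (int low = 0; low < a.length; low++) {
                int sum = 0;
                for (int high = low; high < a.length; high++) {
                    sum += a[high];
                    if (sum > best) best = sum;
                }
            }
            return best;
        }

        // Divide and conquer: the best subarray lies entirely in the left
        // half, entirely in the right half, or crosses the midpoint; O(n log n).
        static int divideAndConquer(int[] a, int lo, int hi) {
            if (lo == hi) return a[lo];
            int mid = (lo + hi) / 2;
            int left = divideAndConquer(a, lo, mid);
            int right = divideAndConquer(a, mid + 1, hi);
            return Math.max(Math.max(left, right), maxCrossing(a, lo, mid, hi));
        }

        // Best sum of any subarray that crosses the midpoint.
        static int maxCrossing(int[] a, int lo, int mid, int hi) {
            int leftBest = Integer.MIN_VALUE, sum = 0;
            for (int i = mid; i >= lo; i--) {
                sum += a[i];
                if (sum > leftBest) leftBest = sum;
            }
            int rightBest = Integer.MIN_VALUE;
            sum = 0;
            for (int i = mid + 1; i <= hi; i++) {
                sum += a[i];
                if (sum > rightBest) rightBest = sum;
            }
            return leftBest + rightBest;
        }

        public static void main(String[] args) {
            Random rnd = new Random(1);
            int[] data = new int[5000];
            for (int i = 0; i < data.length; i++) data[i] = rnd.nextInt(200) - 100;

            long t0 = System.nanoTime();
            int b = bruteForce(data);
            long t1 = System.nanoTime();
            int d = divideAndConquer(data, 0, data.length - 1);
            long t2 = System.nanoTime();

            System.out.println("brute force:        " + b + " (" + (t1 - t0) / 1000 + " us)");
            System.out.println("divide and conquer: " + d + " (" + (t2 - t1) / 1000 + " us)");
        }
    }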
However, determining this turned out to be a bit trickier than I expected.
Curiosity
If I start off by running both algorithms more than 10 times with a problem size of 5000, the running time for the recursive algorithm drops from one run to the next by a factor of about 10 (from ~1800 µs per execution to ~200 µs), and it stays that much faster for the rest of the iterations. See the picture below for an example.
The 2nd and 3rd columns are just there to verify that both algorithms return the correct maximum subarray.
This was tested on OS X 10.7.3 with Java 1.6.0_29 - the results were the same when executed on a PC running Windows 7 and Java 1.6 (exact version number unknown).
The source code for the program can be found here: https://gist.github.com/2274983
My question is this: What causes the algorithm to suddenly perform that much better after being "warmed up"?
The commenters already pointed out that the JIT is likely causing this behavior, but it seems that the OP doesn't know what that is. So just to explain briefly:
Your Java Virtual Machine can run a program in two ways:

1. Interpreting the Java bytecode. Basically, the interpreter "walks" over the bytecodes one by one, checks what each one is, and performs the corresponding action.
2. Converting the bytecode to machine code, which the underlying CPU can run directly. This is called "just-in-time compilation", or JIT.
Code that has been JIT-compiled to machine code runs far faster, but the compilation itself takes time, which could make program start-up slower. So your JVM makes a compromise: initially it just interprets the bytecode, but if a certain method is executed many times, it JIT-compiles that individual method only. Generally only a small part of the program code is executed many times (inner loops, etc.), so this strategy is effective.
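If you want to watch this happening, HotSpot can log its compilation decisions with the -XX:+PrintCompilation flag. For example, using the hypothetical MaxSubarrayDemo class from the sketch above (substitute your own main class):

    java -XX:+PrintCompilation MaxSubarrayDemo

Each line of output names a method as it gets compiled; you would expect the timing drop you observed to coincide with the hot methods appearing in this log.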
The upshot of this is that when you are performance-testing Java code, you must first "warm up" the JVM by running your code in a loop enough times that the performance-critical methods are all JIT compiled.
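A minimal sketch of such a warm-up harness is below. The maxSubarraySum method here is a stand-in (a linear Kadane scan, so the sketch is self-contained), not the OP's divide-and-conquer code, and the loop counts are guesses rather than tuned values:

    import java.util.Random;

    public class WarmupBenchmark {

        public static void main(String[] args) {
            Random rnd = new Random(42);
            int[] data = new int[5000];
            for (int i = 0; i < data.length; i++) {
                data[i] = rnd.nextInt(200) - 100;
            }

            // Warm-up phase: run the method until HotSpot has seen enough
            // invocations and loop iterations to JIT-compile the hot code.
            // These results are discarded.
            for (int i = 0; i < 10000; i++) {
                maxSubarraySum(data);
            }

            // Measurement phase: timings now reflect compiled code.
            long start = System.nanoTime();
            int result = maxSubarraySum(data);
            long elapsedMicros = (System.nanoTime() - start) / 1000;
            System.out.println("sum = " + result + ", time = " + elapsedMicros + " us");
        }

        // Stand-in workload: maximum subarray sum in O(n) (Kadane's algorithm).
        static int maxSubarraySum(int[] a) {
            int best = a[0];
            int current = a[0];
            for (int i = 1; i < a.length; i++) {
                current = Math.max(a[i], current + a[i]);
                best = Math.max(best, current);
            }
            return best;
        }
    }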
In this case, your recursive solution seems to benefit a lot more from JIT compilation than the brute force solution. This could indicate that the JIT compiler is finding some optimization which greatly benefits the recursive solution -- perhaps converting those recursive calls to iterative code?
One suggestion, without having read a single line of your code: when you "warm up" your application, the VM also settles into an amount of memory that then stays fixed for your application.
For example, let's say you add your 5000 array entries to an ArrayList one by one. An ArrayList starts with a fixed capacity, and when it reaches that limit it allocates a larger backing array and copies the old elements into the new one. If you reuse this ArrayList, on the second run the list is already at the right size and works faster.
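To illustrate the idea (the names and sizes here are invented for the example, not taken from the OP's gist):

    import java.util.ArrayList;

    public class PresizingExample {
        public static void main(String[] args) {
            // Default construction: the backing array starts small and is
            // reallocated-and-copied several times while 5000 elements are added.
            ArrayList<Integer> growing = new ArrayList<Integer>();
            for (int i = 0; i < 5000; i++) {
                growing.add(i);
            }

            // Pre-sized construction: the backing array is allocated once up
            // front, so no grow-and-copy work happens during the adds.
            ArrayList<Integer> presized = new ArrayList<Integer>(5000);
            for (int i = 0; i < 5000; i++) {
                presized.add(i);
            }

            // An existing list can also be grown in advance of a bulk add:
            growing.ensureCapacity(10000);
        }
    }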
This situation can happen in some other places.