How can I measure the speed of code written in Java? (AI algorithms)

I am planning to develop software that will solve Sudoku using all currently available AI and ML algorithms and compare their running times against a simple brute-force method. I need to measure the time taken by each algorithm, and I would like suggestions on the best way to do that. Very importantly, the program must be usable on any machine, regardless of CPU power or memory.

Thank you.

Registered User asked Mar 08 '10 at 19:03

1 Answer

As suggested by others, System.currentTimeMillis() is quite good, but note the following caveats:

  • System.currentTimeMillis() measures elapsed physical time ("wall clock time"), not CPU time. If other applications are running on the machine, your code will get less CPU and its measured speed will drop, so benchmark only on otherwise idle systems. (A CPU-time sketch follows the code below.)
  • Similarly, a multi-threaded application on a multicore system may get extra, hidden CPU. The elapsed time measure does not capture the whole of the complexity of multi-threaded applications.
  • Java needs a bit of "warm-up". The VM first interprets code (which is slow), and once a given method has been invoked enough times, the JIT compiler translates it to native code. Only at that point does the method reach its top speed. I recommend performing a few "warm-up" runs before calling System.currentTimeMillis().
  • The accuracy of System.currentTimeMillis() is rarely as fine as 1 ms; on many systems it is no better than 10 ms, or even coarser. Also, the JVM will sometimes run the GC, inducing noticeable pauses. I suggest organizing your measurement in a loop and making sure it runs for at least a few seconds.

This yields the following code:

// Warm-up: give the JIT compiler a chance to compile runMethod() to native code.
for (int i = 0; i < 10; i ++) {
    runMethod();
}

// Double the iteration count until a timed batch lasts at least ten seconds,
// then report the average time per call (in milliseconds).
int count = 10;
for (;;) {
    long begin = System.currentTimeMillis();
    for (int i = 0; i < count; i ++) {
        runMethod();
    }
    long end = System.currentTimeMillis();
    if ((end - begin) < 10000) {
        count *= 2;
        continue;
    }
    reportElapsedTime((double)(end - begin) / count);
    break;
}

As you can see, there are first ten "warm-up" runs. Then the program runs the method in a loop, doubling the iteration count until the timed loop takes at least ten seconds, and reports the average time per call. Ten seconds ought to be enough to smooth out GC runs and other system inaccuracies. When I benchmark hash function implementations, I use two seconds, and even though the function itself triggers no memory allocation at all, I still get variations of up to 3%.
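
The first caveat above concerns wall-clock versus CPU time. If you want to charge your benchmark only for the CPU it actually consumes, the JVM exposes per-thread CPU time through java.lang.management.ThreadMXBean. The sketch below is not from the original answer; measureCpuTimeMillis is a hypothetical helper, it accounts only for the calling thread, and it falls back to wall-clock time when the JVM does not support per-thread CPU time:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public final class CpuTimer {

    // Hypothetical helper: returns the CPU time (in milliseconds) consumed by
    // the current thread while running the given task. Falls back to elapsed
    // wall-clock time if per-thread CPU time is not supported by this JVM.
    static double measureCpuTimeMillis(Runnable task) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (bean.isCurrentThreadCpuTimeSupported()) {
            long begin = bean.getCurrentThreadCpuTime(); // nanoseconds
            task.run();
            long end = bean.getCurrentThreadCpuTime();
            return (end - begin) / 1000000.0;
        }
        long begin = System.currentTimeMillis();
        task.run();
        long end = System.currentTimeMillis();
        return end - begin;
    }
}

You would still wrap such a measurement in the warm-up and repetition loop shown above; CPU time is just as sensitive to JIT warm-up and GC pauses as wall-clock time.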

Thomas Pornin answered Sep 24 '22 at 06:09