 

Time measuring overhead in Java

When measuring elapsed time on a low level, I have the choice of using any of these:

System.currentTimeMillis();
System.nanoTime();

Both methods are implemented as native methods. Before digging into any C code, does anyone know whether there is any substantial overhead in calling one or the other? I mean, if I don't really care about the extra precision, which one would be expected to consume less CPU time?
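To be concrete, this is the kind of low-level measurement I mean, where the timer call's own overhead is significant relative to the work being timed (doWork() is just a placeholder):

long start = System.nanoTime();
doWork(); // a very short operation, possibly sub-microsecond
long elapsed = System.nanoTime() - start; // includes the overhead of the two timer calls themselves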

N.B.: I'm using the standard Java 1.6 JDK, but the question may be valid for any JRE...

asked Apr 12 '11 by Lukas Eder

2 Answers

The answer marked correct on this page is actually not correct. That is not a valid way to write a benchmark, because of JVM dead code elimination (DCE), on-stack replacement (OSR), loop unrolling, and so on. Only a harness like Oracle's JMH micro-benchmarking framework can measure something like that properly. Read this post if you have any doubts about the validity of such micro benchmarks.
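To illustrate the dead code elimination point, here is a sketch modeled on JMH's own dead-code sample (the class name is mine): if a benchmark computes a value and discards it, the JIT may remove the computation entirely, so JMH only measures what you return or explicitly consume.

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class DeadCodeDemo {

    private double x = Math.PI;

    @Benchmark
    public void measureWrong() {
        // The result is discarded: the JIT can prove this computation
        // is dead and eliminate it, so the benchmark measures nothing.
        Math.log(x);
    }

    @Benchmark
    public double measureRight() {
        // Returning the value makes JMH consume it,
        // which defeats dead code elimination.
        return Math.log(x);
    }
}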

Here is a JMH benchmark for System.currentTimeMillis() vs System.nanoTime():

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class NanoBench {

    @Benchmark
    public long currentTimeMillis() {
        return System.currentTimeMillis();
    }

    @Benchmark
    public long nanoTime() {
        return System.nanoTime();
    }
}
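To reproduce this, a minimal launcher using JMH's standard Runner API would look something like the following (the forks setting is just a reasonable default, not necessarily what produced the numbers below):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class NanoBenchMain {
    public static void main(String[] args) throws RunnerException {
        Options opts = new OptionsBuilder()
            .include(NanoBench.class.getSimpleName()) // run only the NanoBench benchmarks
            .forks(1)                                 // one forked JVM is enough for a quick run
            .build();
        new Runner(opts).run();
    }
}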

And here are the results (on an Intel Core i5):

Benchmark                            Mode  Samples     Mean  Mean err  Units
c.z.h.b.NanoBench.currentTimeMillis  avgt       16  122.976     1.748  ns/op
c.z.h.b.NanoBench.nanoTime           avgt       16  117.948     3.075  ns/op

This shows System.nanoTime() to be slightly faster, at ~118ns per invocation versus ~123ns for System.currentTimeMillis(). However, once the mean error is taken into account, there is very little difference between the two. The results are also likely to vary by operating system, but the general takeaway is that the two calls are essentially equivalent in overhead.

UPDATE 2015/08/25: While this answer is closer to correct than most, since it uses JMH to measure, it is still not correct. Measuring something like System.nanoTime() itself is a special kind of twisted benchmarking. The answer and definitive article is here.

answered Oct 03 '22 by brettw


I don't believe you need to worry about the overhead of either. It's so minimal it's barely measurable itself. Here's a quick micro-benchmark of both:

for (int j = 0; j < 5; j++) {
    long time = System.nanoTime();
    for (int i = 0; i < 1000000; i++) {
        long x = System.currentTimeMillis();
    }
    System.out.println((System.nanoTime() - time) + "ns per million");

    time = System.nanoTime();
    for (int i = 0; i < 1000000; i++) {
        long x = System.nanoTime();
    }
    System.out.println((System.nanoTime() - time) + "ns per million");

    System.out.println();
}

And the last result:

14297079ns per million
29206842ns per million

It does appear that System.currentTimeMillis() is about twice as fast as System.nanoTime() here (~14ns versus ~29ns per call). However, 29ns is going to be much shorter than anything else you'd be measuring anyhow. I'd go for System.nanoTime() for precision and accuracy, since it's not tied to wall-clock time.
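To make that last point concrete, here's a minimal sketch of the difference (doWork() is a placeholder for whatever you're timing): System.nanoTime() is specified for measuring elapsed time, while System.currentTimeMillis() tracks the wall clock, which NTP or an administrator can adjust backwards mid-measurement.

long start = System.nanoTime();
doWork();
long elapsedNanos = System.nanoTime() - start; // unaffected by wall-clock adjustments

long startMs = System.currentTimeMillis();
doWork();
long elapsedMs = System.currentTimeMillis() - startMs; // can be wrong, even negative, if the clock is adjusted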

answered Oct 03 '22 by WhiteFang34