From time to time I encounter mentions of System.nanoTime() being a lot slower than System.currentTimeMillis() (a call supposedly costing up to microseconds), but the supporting links are often outdated, lead to fairly opinionated blog posts that can't really be trusted, contain information that only applies to a specific platform, and so on.
I haven't run benchmarks myself, since I'm realistic about my ability to conduct an experiment on such a sensitive matter, but my conditions are well-defined, so I'm expecting a fairly simple answer.
So, on an average 64-bit Linux (implying a 64-bit JRE), Java 8, and modern hardware, will switching to nanoTime() really cost me those microseconds per call? Should I stay with currentTimeMillis()?
As always, it depends on what you're using it for. Since others are bashing nanoTime, I'll put in a plug for it. I exclusively use nanoTime to measure elapsed time in production code.
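For instance, the elapsed-time pattern looks roughly like this (a minimal sketch; the class name and the timed work are illustrative, not from my codebase):

    import java.util.concurrent.TimeUnit;

    public class ElapsedTimeExample {
        public static void main(String[] args) throws InterruptedException {
            // Only the difference between two nanoTime() readings is meaningful;
            // the absolute value has no relation to wall-clock time.
            long start = System.nanoTime();
            Thread.sleep(100); // stand-in for the work being timed
            long elapsedNanos = System.nanoTime() - start;
            System.out.println("elapsed: " + TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms");
        }
    }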
I shy away from currentTimeMillis in production because I typically need a clock that doesn't jump backwards and forwards the way the wall clock can (and does). This is critical in my systems, which make important timer-based decisions. nanoTime should increase monotonically at the rate you'd expect.
In fact, one of my co-workers says "currentTimeMillis is only useful for human entertainment" (such as the time in debug logs, or displayed on a website), because it cannot be trusted to measure elapsed time.
But really, we try to avoid relying on time as much as possible and to keep time out of our protocols; then we try to use logical clocks; and finally, if absolutely necessary, we use durations based on nanoTime.
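To sketch what a duration based on nanoTime can look like (illustrative only, not our actual protocol code), here is a timeout check that never consults the wall clock:

    import java.util.concurrent.TimeUnit;
    import java.util.function.BooleanSupplier;

    public class NanoTimeDeadline {
        // Waits until the condition holds or the timeout elapses; returns whether it held.
        // The deadline is compared by subtraction so the check stays valid even if
        // nanoTime() wraps around Long.MAX_VALUE.
        static boolean awaitCondition(BooleanSupplier condition, long timeout, TimeUnit unit)
                throws InterruptedException {
            long deadline = System.nanoTime() + unit.toNanos(timeout);
            while (System.nanoTime() - deadline < 0) {
                if (condition.getAsBoolean()) {
                    return true;
                }
                Thread.sleep(10); // simple back-off between polls
            }
            return condition.getAsBoolean();
        }
    }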
Update: There is one place where we use currentTimeMillis as a sanity check when connecting two hosts, but there we're only checking whether the hosts' clocks are more than 5 minutes apart.
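Roughly, that check amounts to something like the following (names are hypothetical; the remote timestamp would arrive as part of the connection handshake):

    import java.util.concurrent.TimeUnit;

    public class ClockSkewCheck {
        private static final long MAX_SKEW_MILLIS = TimeUnit.MINUTES.toMillis(5);

        // remoteMillis is the peer's System.currentTimeMillis(), received during the handshake.
        static boolean clocksRoughlyAgree(long remoteMillis) {
            return Math.abs(System.currentTimeMillis() - remoteMillis) <= MAX_SKEW_MILLIS;
        }
    }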