I'm writing software that requires timestamps in microsecond resolution or better.
I'm planning on using System.currentTimeMillis
in combination with System.nanoTime
sort of like this, though it's just a rough code sketch:
private static final long absoluteTime = System.currentTimeMillis() * 1000 * 1000; // wall-clock time at class load, scaled to nanoseconds
private static final long relativeTime = System.nanoTime();                        // monotonic reading taken at (roughly) the same instant

public long getTime()
{
    final long delta = System.nanoTime() - relativeTime; // monotonic nanoseconds elapsed since calibration
    if (delta < 0) throw new IllegalStateException("time delta is negative");
    return absoluteTime + delta;                          // wall-clock origin plus monotonic elapsed time
}
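One refinement to the sketch above (my own suggestion, not anything the JDK documentation prescribes; the class and field names are made up) is to bracket the currentTimeMillis() call between two nanoTime() calls when capturing the offset, so the pairing error is bounded by the width of the bracket rather than by an arbitrary scheduling gap:

public final class CalibratedClock {
    private final long wallNanosAtCalibration; // currentTimeMillis() scaled to ns
    private final long nanoTimeAtCalibration;  // nanoTime() captured as close to that reading as possible

    public CalibratedClock() {
        long bestBracket = Long.MAX_VALUE;
        long wall = 0L;
        long nano = 0L;
        // Take a few samples and keep the one where currentTimeMillis()
        // was bracketed most tightly by two nanoTime() calls.
        for (int i = 0; i < 5; i++) {
            long before = System.nanoTime();
            long millis = System.currentTimeMillis();
            long after = System.nanoTime();
            if (after - before < bestBracket) {
                bestBracket = after - before;
                wall = millis * 1000L * 1000L;        // millis -> nanos
                nano = before + (after - before) / 2; // midpoint of the bracket
            }
        }
        wallNanosAtCalibration = wall;
        nanoTimeAtCalibration = nano;
    }

    /** Wall-clock time in nanoseconds, advanced by the monotonic delta. */
    public long nowNanos() {
        return wallNanosAtCalibration + (System.nanoTime() - nanoTimeAtCalibration);
    }
}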
The documentation for nanoTime says:
This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis().
so it hasn't given us a guarantee of a resolution any better than milliseconds.
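Since the spec only promises "at least as good as currentTimeMillis()", the practical fallback is to measure what your particular JVM/OS combination actually delivers. A quick, unscientific probe (my own sketch, not an official benchmark) is to spin on nanoTime() and record the smallest nonzero step you observe, which gives an estimate of the timer's update granularity on that machine:

public final class NanoTimeGranularityProbe {
    public static void main(String[] args) {
        long smallestStep = Long.MAX_VALUE;
        long previous = System.nanoTime();
        // Sample a few million transitions; the smallest nonzero delta is an
        // estimate of the timer's update granularity on this machine.
        for (int i = 0; i < 5000000; i++) {
            long current = System.nanoTime();
            long step = current - previous;
            if (step > 0 && step < smallestStep) {
                smallestStep = step;
            }
            previous = current;
        }
        System.out.println("Smallest observed nanoTime() step: " + smallestStep + " ns");
    }
}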
Going a little deeper, under the hood of nanoTime (which is predictably a native method):
Windows uses the QueryPerformanceCounter API, which promises a resolution of less than one microsecond, which is great.
Linux uses clock_gettime with a flag to ensure the value is monotonic, but it makes no promises about resolution.
Solaris is similar to Linux.
The source doesn't mention how OSX or other Unix-based OSs deal with this.
(source)
I've seen a couple of vague allusions to the fact it will "usually" have microsecond resolution, such as this answer on another question:
On most systems the three least-significant digits will always be zero. This in effect gives microsecond accuracy, but reports it at the fixed precision level of a nanosecond.
but there's no source and the word "usually" is very subjective.
Question: Under what circumstances might nanoTime return a value whose resolution is worse than microseconds? For example, perhaps a major OS release doesn't support it, or a particular hardware feature is required which may be absent. Please try to provide sources if you can.
I'm using Java 1.6 but there's a small chance I could upgrade if there were substantial benefits with regards to this problem.
nanoTime() rounds the values to 1000 ns granularity (e.g. 1000, 3000, ...). On Windows XP it returns more fine-grained values (e.g. 2345, 6789).
currentTimeMillis() actually gives the time accurate to the nearest millisecond on Linux, Mac OS and Windows (though since which versions is unclear - I know, for example, that Windows only used to be accurate to the nearest 15/16 milliseconds).
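You can check the currentTimeMillis() update granularity empirically the same way; the sketch below (my own, not from the JDK docs) watches for the reported value to change and prints the size of each jump, which on the coarse legacy Windows timer would come out around 15-16 ms:

public final class MillisGranularityProbe {
    public static void main(String[] args) {
        // Observe several transitions of currentTimeMillis() and print the
        // size of each jump: ~1 ms steps mean millisecond granularity,
        // ~15-16 ms steps indicate a coarse timer.
        long last = System.currentTimeMillis();
        for (int observed = 0; observed < 10; ) {
            long now = System.currentTimeMillis();
            if (now != last) {
                System.out.println("currentTimeMillis() advanced by " + (now - last) + " ms");
                last = now;
                observed++;
            }
        }
    }
}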
nanoTime: Returns the current value of the running Java Virtual Machine's high-resolution time source, in nanoseconds. This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time.
It is thread safe.
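In other words, the absolute value returned by nanoTime() is meaningless on its own; only differences between two readings taken in the same JVM are defined. A minimal illustration (my own example):

public final class ElapsedTimeExample {
    public static void main(String[] args) throws InterruptedException {
        // nanoTime() values are only meaningful as differences within one JVM.
        long start = System.nanoTime();
        Thread.sleep(50); // stand-in for the work being timed
        long elapsedNanos = System.nanoTime() - start;
        System.out.println("Elapsed: " + (elapsedNanos / 1000) + " us");

        // Don't do this: the raw value is not anchored to any epoch.
        // long wallClock = System.nanoTime(); // NOT a wall-clock timestamp
    }
}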
Question: Under what circumstances might nanoTime return a value whose resolution is worse than microseconds? What operating systems, hardware, JVMs etc. that are somewhat commonly used might this affect? Please try to provide sources if you can.
Asking for an exhaustive list of all possible circumstances under which that constraint will be violated seems a bit much; nobody knows which environments your software will run in. But to prove that it can happen, see this blog post by Aleksey Shipilev, where he describes a case where nanoTime becomes less accurate (in terms of its own latency) than a microsecond on Windows machines, due to contention.
Another case would be the software running under a VM that emulates hardware clocks in a very coarse manner.
The specification has been left intentionally vague exactly due to platform and hardware-specific behaviors.
You can "reasonably expect" microsecond precision once you have verified that the hardware and operating system you're using do provide what you need and that VMs pass through the necessary features.