The documentation for System.nanoTime()
says the following (emphasis mine).
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative). This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change.
As I see it, this can be interpreted in two different ways:
1. The sentence in bold above refers to individual return values. Then precision and accuracy are to be understood in the numerical sense: precision refers to the number of significant digits (the position of truncation), and accuracy to whether the number is the correct one (as described in the top answer to What is the difference between 'precision' and 'accuracy'?).
2. The sentence in bold above refers to the capability of the method itself. Then precision and accuracy are to be understood as in the dartboard analogy ( http://en.wikipedia.org/wiki/Precision_vs._accuracy#Accuracy_versus_precision:_the_target_analogy ): low accuracy with high precision means the same wrong value is hit repeatedly. Imagining that physical time stands still, consecutive calls to nanoTime() return the same numerical value, but that value is off from the actual elapsed time since the reference time by some constant offset.
Which interpretation is the correct one? My point is this: interpretation 2 would mean that a time difference measured with nanoTime() (by subtracting two return values) would be correct to the nanosecond, since the constant error/offset in the measurement would be eliminated. Interpretation 1 wouldn't guarantee that kind of agreement between measurements, and thus wouldn't necessarily imply high precision for time-difference measurements.
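For reference, the subtraction the question describes is the usual elapsed-time idiom; any constant offset in the arbitrary origin cancels out in the difference. A minimal sketch (class name is mine):

```java
public class ElapsedDemo {
    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(50);                          // the work being timed
        long elapsed = System.nanoTime() - start;  // any constant origin offset cancels here
        System.out.println("elapsed: " + elapsed / 1_000_000 + " ms");
    }
}
```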
Updated 4/15/13: The Java 7 documentation for System.nanoTime()
has been updated to address the possible confusion with the previous wording.
Returns the current value of the running Java Virtual Machine's high-resolution time source, in nanoseconds.
This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). The same origin is used by all invocations of this method in an instance of a Java virtual machine; other virtual machine instances are likely to use a different origin.
This method provides nanosecond precision, but not necessarily nanosecond resolution (that is, how frequently the value changes) - no guarantees are made except that the resolution is at least as good as that of currentTimeMillis(). Differences in successive calls that span greater than approximately 292 years (2^63 nanoseconds) will not correctly compute elapsed time due to numerical overflow.
The values returned by this method become meaningful only when the difference between two such values, obtained within the same instance of a Java virtual machine, is computed.
nanoTime() is a great function, but one thing it's not: accurate to the nanosecond. The accuracy of your measurement varies widely depending on your operating system, your hardware and your Java version. As a rule of thumb, you can expect microsecond resolution (and a lot better on some systems).
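One way to estimate the effective resolution on a given machine is to spin until the reported value changes and record the smallest step observed. A sketch (class name is mine; the result is machine-dependent):

```java
public class NanoResolution {
    public static void main(String[] args) {
        long minStep = Long.MAX_VALUE;
        for (int i = 0; i < 100_000; i++) {
            long t0 = System.nanoTime();
            long t1;
            // spin until the clock actually ticks
            do {
                t1 = System.nanoTime();
            } while (t1 == t0);
            minStep = Math.min(minStep, t1 - t0);
        }
        System.out.println("smallest observed step: " + minStep + " ns");
    }
}
```

On a microsecond-granularity clock this prints a value near 1000; on systems with a finer time source it can be much smaller.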
Regarding accuracy, you are almost correct. On SOME Windows machines, currentTimeMillis() has a resolution of about 10ms (not 50ms).
No, there is no guarantee that every call to System.nanoTime() will return a unique value.
In Clojure command line, I get:
user=> (- (System/nanoTime) (System/nanoTime))
0
user=> (- (System/nanoTime) (System/nanoTime))
0
user=> (- (System/nanoTime) (System/nanoTime))
-641
user=> (- (System/nanoTime) (System/nanoTime))
0
user=> (- (System/nanoTime) (System/nanoTime))
-642
user=> (- (System/nanoTime) (System/nanoTime))
-641
user=> (- (System/nanoTime) (System/nanoTime))
-641
So essentially, nanoTime doesn't get updated every nanosecond, contrary to what one might intuitively expect from its precision. On Windows systems it uses the QueryPerformanceCounter API under the hood (according to this article), which in practice seems to give about 640 ns resolution (on my system!).
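A rough Java analogue of the Clojure experiment above (class name is mine): subtracting two back-to-back readings yields 0 when both calls land in the same timer tick, or a negative value of roughly one tick when the clock advances between them. The exact values depend on the machine.

```java
public class BackToBack {
    public static void main(String[] args) {
        // The left operand is evaluated first, so the result is 0
        // or negative by about one tick of the underlying time source.
        for (int i = 0; i < 7; i++) {
            System.out.println(System.nanoTime() - System.nanoTime());
        }
    }
}
```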
Note that nanoTime can't, by itself, have any accuracy at all, since its absolute value is arbitrary. Only the difference between successive nanoTime calls is meaningful. The (in)accuracy of that difference is in the ballpark of 1 microsecond.
The first interpretation is correct. On most systems the three least-significant digits will always be zero. This in effect gives microsecond accuracy, but reports it at the fixed precision level of a nanosecond.
In fact, now that I look at it again, your second interpretation is also a valid description of what is going on, maybe even more so. Imagining frozen time, the report will always be the same wrong number of nanoseconds, but correct if understood as an integer number of microseconds.
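The trailing-zero claim is easy to check directly (class name is mine). Note this is system-dependent: on platforms with a microsecond-granularity time source the last three digits come out as zero, but on systems with a finer clock they generally don't, so zeros are not guaranteed.

```java
public class TrailingDigits {
    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            long t = System.nanoTime();
            // On a microsecond clock, t % 1000 is 0 on every call.
            System.out.println(t + "  (last three digits: " + Math.abs(t % 1000) + ")");
        }
    }
}
```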