Today I did a quick benchmark to test the speed of System.nanoTime() and System.currentTimeMillis():
long startTime = System.nanoTime();
for (int i = 0; i < 1000000; i++) {
    long test = System.nanoTime();
}
long endTime = System.nanoTime();
System.out.println("Total time: " + (endTime - startTime));
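The snippet above only times nanoTime() itself; a companion loop for currentTimeMillis() plus the per-call averaging is needed to produce the figures below. A minimal sketch of what such a harness might look like (a hypothetical reconstruction, not the exact code behind the reported numbers; the sink variable is only there to discourage the JIT from removing the calls):

public class TimerBenchmark {
    private static final int CALLS = 1000000;

    public static void main(String[] args) {
        long sink = 0;

        // Cost of System.currentTimeMillis(), measured with nanoTime()
        long start = System.nanoTime();
        for (int i = 0; i < CALLS; i++) {
            sink += System.currentTimeMillis();
        }
        long millisTotal = System.nanoTime() - start;

        // Cost of System.nanoTime(), measured the same way
        start = System.nanoTime();
        for (int i = 0; i < CALLS; i++) {
            sink += System.nanoTime();
        }
        long nanoTotal = System.nanoTime() - start;

        System.out.println("currentTimeMillis(): " + (double) millisTotal / CALLS + " ns/call");
        System.out.println("nanoTime():          " + (double) nanoTotal / CALLS + " ns/call");
        System.out.println("(ignore: " + sink + ")"); // keep sink alive
    }
}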
These are the results:
System.currentTimeMillis(): average of 12.7836022 per function call
System.nanoTime(): average of 34.6395674 per function call
Why is the difference in running speed so big?
Benchmark system:
Java 1.7.0_25
Windows 8 64-bit
CPU: AMD FX-6100
From the documentation of System.nanoTime(): "This method provides nanosecond precision, but not necessarily nanosecond accuracy. No guarantees are made about how frequently values change." Depending on the system, it can take more than 100 CPU cycles to execute.
nanoTime() is a great function, but one thing it is not: accurate to the nanosecond. The accuracy of your measurement varies widely depending on your operating system, your hardware and your Java version. As a rule of thumb, you can expect microsecond resolution (and a lot better on some systems).
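A quick way to see what resolution you actually get on a given machine is to sample nanoTime() in a tight loop and record the smallest non-zero step between successive readings, along these lines (an illustrative sketch, not part of the original answer):

public class NanoTimeGranularity {
    public static void main(String[] args) {
        long smallestStep = Long.MAX_VALUE;
        long previous = System.nanoTime();
        for (int i = 0; i < 1000000; i++) {
            long now = System.nanoTime();
            long step = now - previous;
            if (step > 0 && step < smallestStep) {
                smallestStep = step; // smallest tick observed so far
            }
            previous = now;
        }
        // On many systems this prints a value well above 1 ns,
        // i.e. the clock does not actually advance every nanosecond.
        System.out.println("Smallest observed step: " + smallestStep + " ns");
    }
}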
System.currentTimeMillis() takes about 29 nanoseconds per call. System.nanoTime(), by contrast, returns the current value of the most precise available system timer, in nanoseconds: the value returned represents nanoseconds since some fixed but arbitrary time (perhaps in the future, so values may be negative) and provides nanosecond precision, but not necessarily nanosecond accuracy.
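Because the origin is arbitrary, only the difference between two nanoTime() readings is meaningful, and the Javadoc recommends comparing two readings by subtraction rather than directly, because of possible numerical overflow. A short illustration (doWork() is just a placeholder for the code being timed):

long t0 = System.nanoTime();
doWork();                     // placeholder for the operation being timed
long t1 = System.nanoTime();

long elapsedNanos = t1 - t0;  // the difference is what carries meaning

// Compare readings via subtraction, not directly,
// so the comparison stays correct even if the counter overflows:
if (t1 - t0 < 0) {
    // t1 was taken "before" t0
}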
From this Oracle blog:
System.currentTimeMillis() is implemented using the GetSystemTimeAsFileTime method, which essentially just reads the low resolution time-of-day value that Windows maintains. Reading this global variable is naturally very quick - around 6 cycles according to reported information.

System.nanoTime() is implemented using the QueryPerformanceCounter/QueryPerformanceFrequency API (if available, else it returns currentTimeMillis*10^6). QueryPerformanceCounter (QPC) is implemented in different ways depending on the hardware it's running on. Typically it will use either the programmable-interval-timer (PIT), or the ACPI power management timer (PMT), or the CPU-level timestamp-counter (TSC). Accessing the PIT/PMT requires execution of slow I/O port instructions and as a result the execution time for QPC is in the order of microseconds. In contrast reading the TSC is on the order of 100 clock cycles (to read the TSC from the chip and convert it to a time value based on the operating frequency).
Perhaps this answers the question: the two methods use a different number of clock cycles, which is why the latter one is slower.
Further on, in the conclusion section of that blog:
If you are interested in measuring/calculating elapsed time, then always use System.nanoTime(). On most systems it will give a resolution on the order of microseconds. Be aware though, this call can also take microseconds to execute on some platforms.
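In practice that means timing a block of code with nanoTime() and converting the nanosecond difference to coarser units only at the end, roughly like this (runTask() is a hypothetical method standing in for the work being measured):

long start = System.nanoTime();
runTask();                                       // hypothetical work being timed
long elapsedNanos = System.nanoTime() - start;

System.out.println("Elapsed: " + elapsedNanos + " ns ("
        + java.util.concurrent.TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms)");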
Most OSes (you didn't mention which one you are using) have an in-memory counter/clock which provides millisecond accuracy (or close to it). For nanosecond accuracy most have to read a hardware counter. Communicating with hardware is slower than reading a value that is already in memory.