
Comparing times with sub-second accuracy

time.time() * 1000 will give you millisecond accuracy, if the platform can provide it.


int(time.time() * 1000) will do what you want. time.time() generally returns a double-precision float counting seconds since the epoch, so multiplying by 1000 does no harm to the precision.
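A minimal sketch of that approach, comparing two millisecond timestamps (the sleep is just a stand-in workload):

import time

start_ms = int(time.time() * 1000)
time.sleep(0.005)  # stand-in workload
end_ms = int(time.time() * 1000)
print(end_ms - start_ms)  # roughly 5 on platforms whose clock has ms resolution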

A word about the misleading answer from @kqr: time.clock() does not give you time since the epoch. On Unix it gives you the CPU time the process has used; on Windows it gives the wall-clock time elapsed since the first call to the function. See the Python docs.

It's also true that the docs state that time.time() is not guaranteed to give you millisecond precision. However, that caveat is mainly there so you don't rely on such precision on embedded or prehistoric hardware, and I'm not aware of any example where you actually wouldn't get millisecond precision.


I see many people suggesting time.time(). While time.time() is an accurate way of measuring the actual time of day, it is not guaranteed to give you millisecond precision! From the documentation:

Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.

This is not the procedure you want when comparing two times! It can blow up in so many interesting ways without you being able to tell what happened. In fact, when comparing two times, you don't really need to know what time of day it is, only that the two values have the same starting point. For this, the time library gives you another procedure: time.clock(). The documentation says:

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

Use time.clock().
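A minimal sketch of that usage, timing an arbitrary workload. Note that time.clock() was deprecated in Python 3.3 and removed in 3.8; time.perf_counter() gives the same monotonic, high-resolution behavior on modern versions:

import time

start = time.clock()  # on Python >= 3.8, use time.perf_counter() instead
total = sum(i * i for i in range(100000))  # arbitrary workload to time
elapsed = time.clock() - start
print("elapsed: %.6f s" % elapsed)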


Or, if you just want to test how fast your code is running, you can make it convenient for yourself and use timeit.timeit(), which does all of the measuring for you and is the de facto standard way of measuring elapsed time in code execution.
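For example, a quick sketch (the statement and run count here are arbitrary):

import timeit

# Run the statement 10000 times and return the total elapsed seconds as a float.
elapsed = timeit.timeit("sum(range(100))", number=10000)
print(elapsed)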


Using datetime:

>>> import datetime
>>> delta = datetime.datetime.utcnow() - datetime.datetime(1970, 1, 1)
>>> delta
datetime.timedelta(15928, 52912, 55000)
>>> delta.total_seconds()
1376232112.055
>>> delta.days, delta.seconds, delta.microseconds
(15928, 52912, 55000)
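
Continuing that session, the delta converts directly into a millisecond timestamp (the value follows from the total_seconds() output above):

>>> round(delta.total_seconds() * 1000)
1376232112055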

Python 3.7 introduced time.time_ns() to finally solve this problem, since time.time(), as discussed above, is not reliable for this. Per the docs, it:

"returns time as an integer number of nanoseconds since the epoch."

https://www.python.org/dev/peps/pep-0564/
https://docs.python.org/3/library/time.html#time.time_ns
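
A minimal sketch of using it for millisecond-level comparisons (integer division keeps everything in exact integer arithmetic, so no float rounding is involved):

import time

start_ns = time.time_ns()
time.sleep(0.005)  # stand-in workload
elapsed_ms = (time.time_ns() - start_ns) // 1000000  # nanoseconds -> whole milliseconds
print(elapsed_ms)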