
Python's time.clock() vs. time.time() accuracy?

Tags:

python

time

Which is better to use for timing in Python? time.clock() or time.time()? Which one provides more accuracy?

For example:

    start = time.clock()
    # ... do something
    elapsed = time.clock() - start

vs.

    start = time.time()
    # ... do something
    elapsed = time.time() - start
asked Sep 17 '08 by Corey Goldberg



2 Answers

As of Python 3.3, time.clock() is deprecated (it was removed entirely in 3.8), and the docs suggest using time.process_time() or time.perf_counter() instead.
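A minimal sketch of the two modern replacements; the sleep here is just an illustrative workload to show how they differ:

```python
import time

# perf_counter(): highest-resolution wall-clock timer, includes time spent sleeping.
start = time.perf_counter()
time.sleep(0.1)
wall = time.perf_counter() - start

# process_time(): CPU time of this process only, excludes time spent sleeping.
start = time.process_time()
time.sleep(0.1)
cpu = time.process_time() - start

print(f"wall={wall:.3f}s cpu={cpu:.3f}s")  # wall is ~0.1s; cpu is close to 0
```

Use perf_counter() when you want "how long did this take in real time", and process_time() when you want "how much CPU did this consume".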

Previously in 2.7, according to the time module docs:

time.clock()

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

Additionally, there is the timeit module for benchmarking code snippets.
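A short sketch of timeit for this purpose; the snippet being measured here is arbitrary, chosen only for illustration:

```python
import timeit

# Run the snippet 100,000 times per trial, for 5 trials; repeat() returns
# one total elapsed time (in seconds) per trial.
times = timeit.repeat("sum(range(100))", number=100_000, repeat=5)

# The minimum is conventionally the most representative figure, since it is
# least affected by other processes competing for the machine.
print(f"best of 5: {min(times):.4f}s")
```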

answered Sep 21 '22 by Jason Navarrete


The short answer is: most of the time time.clock() will be better. However, if you're timing hardware (for example, an algorithm you run on the GPU), time.clock() will not count that waiting time, since it measures only the CPU time of the current process; in that case time.time() is the only option left.

Note: whatever method you use, the timing will depend on factors you cannot control (when the process is switched out, how often, etc.). This is worse with time.time() but also affects time.clock(), so you should never run a single timing test; always run a series of tests and look at the mean and variance of the times.
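The advice above can be sketched as follows; the workload inside trial() is a placeholder, and 10 repetitions is an arbitrary choice for illustration:

```python
import statistics
import time

def trial():
    """Time one run of a sample workload with a wall-clock timer."""
    start = time.perf_counter()
    sum(i * i for i in range(100_000))  # placeholder workload
    return time.perf_counter() - start

# Run a series of trials, then summarize rather than trusting any single run.
samples = [trial() for _ in range(10)]
print(f"mean={statistics.mean(samples):.6f}s "
      f"stdev={statistics.stdev(samples):.6f}s")
```

A large standard deviation relative to the mean is a sign the machine was busy and the numbers should not be trusted.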

answered Sep 23 '22 by PierreBdR