 

How to calculate an operation's time with microsecond precision

I want to measure the performance of a function with microsecond precision on the Windows platform.

Now, Windows itself only has millisecond granularity, so how can I achieve this?

I tried the following sample, but I am not getting correct results.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER ticksPerSecond = {0};
    LARGE_INTEGER tick_1 = {0};
    LARGE_INTEGER tick_2 = {0};
    double uSec = 1000000;

    // Get the frequency
    QueryPerformanceFrequency(&ticksPerSecond);

    // Calculate the ticks per microsecond
    double uFreq = ticksPerSecond.QuadPart / uSec;

    // Get the counter before the start of the op
    QueryPerformanceCounter(&tick_1);

    // The op itself
    Sleep(10);

    // Get the counter after the op finished
    QueryPerformanceCounter(&tick_2);

    // And now the op time in uSec
    double diff = (tick_2.QuadPart / uFreq) - (tick_1.QuadPart / uFreq);
    printf("%f\n", diff);
    return 0;
}
asked May 13 '10 by Sanjeet Daga


3 Answers

Run the operation in a loop a million times or so and divide the result by that number. That way you'll get the average execution time over that many executions. Timing one (or even a hundred) executions of a very fast operation is very unreliable, due to multitasking and whatnot.
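
For instance, a minimal sketch of this approach (the iteration count and the loop body here are placeholders; note that a compiler may optimize away a loop with no observable effect, hence the volatile sink):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;
    const int iterations = 1000000;
    volatile int sink = 0;   // keeps the loop from being optimized away

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    for (int i = 0; i < iterations; ++i)
        sink += i;           // stand-in for the operation under test

    QueryPerformanceCounter(&stop);

    // Elapsed microseconds for the whole loop, then the per-operation average
    double totalUs = (stop.QuadPart - start.QuadPart) * 1000000.0 / freq.QuadPart;
    printf("average: %f us per operation\n", totalUs / iterations);
    return 0;
}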

answered Nov 15 '22 by Matti Virkkunen


  • compile it
  • look at the assembler output
  • count the number of each instruction in your function
  • apply the cycles per instruction on your target processor
  • end up with a cycle count
  • multiply by the clock speed you are running at
  • apply arbitrary scaling factors to account for cache misses and branch mis-predictions lol

(man I am so going to get down-voted for this answer)
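
Tongue-in-cheek or not, the first two steps are easy to try: most compilers will emit an assembly listing for you. A sketch, assuming a hypothetical source file func.c and either MSVC or GCC:

// func.c -- hypothetical file holding the function you want to inspect
int add_one(int x)
{
    return x + 1;
}

// MSVC: cl /c /FA func.c   (writes the listing to func.asm)
// GCC:  gcc -S -O2 func.c  (writes the listing to func.s)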

answered Nov 15 '22 by vicatcu


No, you are probably getting an accurate result; QueryPerformanceCounter() works well for timing short intervals. What's wrong is your expectation of the accuracy of Sleep(). It has a resolution of 1 millisecond, but its accuracy is far worse: no better than about 15.625 milliseconds on most Windows machines.

To get it anywhere close to 1 millisecond, you'll have to call timeBeginPeriod(1) first. That will probably improve the match, ignoring the jitter you'll get from Windows being a multi-tasking operating system.
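
A minimal sketch of that fix, reusing the question's Sleep(10) example; timeBeginPeriod() lives in winmm.lib, and every call should be paired with a matching timeEndPeriod():

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")   // timeBeginPeriod()/timeEndPeriod() live here

int main(void)
{
    LARGE_INTEGER freq, tick_1, tick_2;
    QueryPerformanceFrequency(&freq);

    timeBeginPeriod(1);    // ask the scheduler for 1 ms resolution

    QueryPerformanceCounter(&tick_1);
    Sleep(10);
    QueryPerformanceCounter(&tick_2);

    timeEndPeriod(1);      // always undo the request when done

    double uSec = (tick_2.QuadPart - tick_1.QuadPart) * 1000000.0 / freq.QuadPart;
    printf("Sleep(10) actually took %.0f us\n", uSec);
    return 0;
}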

answered Nov 15 '22 by Hans Passant