I am trying to optimize our use of the gettimeofday() system call on Red Hat Linux. According to their documentation, the call can be sped up by running it in user space via the virtual dynamic shared object (vDSO). I am curious: how can I measure the speed of the call in the first place? I would like to make the change and then compare against my previous results.
Pseudocode:

call gettimeofday() and save the result in a
call gettimeofday() a million times
call gettimeofday() and save the result in b
(b - a) / 1,000,000 is the average cost of one call
Rationale: The two bounding calls to gettimeofday() contribute negligibly compared to the million calls between them. It may feel strange to time a function by calling that same function, but that's fine here: the bounding calls are just two more samples out of a million.