I am using Linux Security Module hooks to add some custom functionality to the recv() system call. I want to measure the overhead of this functionality compared to the pristine recv(). I have written a simple TCP server that I run with and without my module. This TCP server calls recv() N times and measures the time taken for each call with something like:
clock_gettime(before);
recv();
clock_gettime(after);
global_time += after - before;
At the end, I print the average time for a single recv() as global_time / N. Let's call this the "user_space_avg_recv" time.
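Concretely, the measurement loop looks roughly like the sketch below (socket setup is omitted, and bench_recv, sock, buf, and len are placeholder names):

#include <stdio.h>
#include <time.h>
#include <sys/socket.h>

/* Difference between two timespecs, in nanoseconds. */
static long long ts_diff_ns(struct timespec a, struct timespec b)
{
    return (long long)(a.tv_sec - b.tv_sec) * 1000000000LL
           + (a.tv_nsec - b.tv_nsec);
}

/* Time n recv() calls on an already-connected socket and print the
 * average per-call latency in nanoseconds. */
static void bench_recv(int sock, char *buf, size_t len, long n)
{
    long long global_time = 0;
    long i;

    for (i = 0; i < n; i++) {
        struct timespec before, after;

        clock_gettime(CLOCK_MONOTONIC, &before);
        recv(sock, buf, len, 0);
        clock_gettime(CLOCK_MONOTONIC, &after);

        global_time += ts_diff_ns(after, before);
    }
    printf("user_space_avg_recv = %lld ns\n", global_time / n);
}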
Inside my module, I want to place time-measurement functions to calculate the exact execution time of my hook. I tried three methods.
1. I used jiffies, as follows:
sj = jiffies;                    /* start time, in jiffies */
my_hook();
ej = jiffies;                    /* end time, in jiffies */
current->total_oh = ej - sj;     /* total_oh is a field I added to task_struct */
But I see that there is no difference between sj and ej values. Hence total_oh is unchanged.
2. I used current_kernel_time(), since I thought it returns the time in nanoseconds. However, once again, there was no difference between the before and after times.
3. I used get_cycles(). I print the total cycle count when the process exits. However, when I convert that total cycle count to milliseconds, it comes out much greater than the "user_space_avg_recv" value. This does not make sense, since a value measured inside the kernel should always be less than the time measured from user space. It could mean that I am either not measuring with the correct API or making a mistake when converting from cycles to milliseconds.
I am basically using the following formula to convert cycles to milliseconds:

avg overhead of my hook in milliseconds = ((cycles / 2.99) / 10^6) / N

2.99 because my clock frequency is 2.99 GHz, so cycles / 2.99 gives nanoseconds; dividing by 10^6 converts nanoseconds to milliseconds, and dividing by N gives the per-call average.
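Written out in C, the conversion I am doing amounts to the sketch below (CPU_GHZ is just my 2.99 GHz clock expressed as cycles per nanosecond; the function name is a placeholder):

#define CPU_GHZ 2.99  /* clock frequency in GHz == cycles per nanosecond */

/* Convert a total cycle count accumulated over n recv() calls into the
 * average per-call overhead in milliseconds. */
static double avg_overhead_ms(unsigned long long total_cycles, long n)
{
    double total_ns = total_cycles / CPU_GHZ;  /* cycles -> nanoseconds */
    double total_ms = total_ns / 1e6;          /* nanoseconds -> milliseconds */
    return total_ms / n;                       /* average per call */
}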
Some points:
My user-space program is pinned to a single core by setting its CPU affinity.
I am using kernel 2.6.22.14.
To stop the kernel from switching contexts while inside my hook, I use preempt_disable() and preempt_enable(), so the execution time of other kernel threads is not counted. Even so, since my hook does some I/O, my thread might give up control voluntarily, or an interrupt might occur and inflate the total cycle count.
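For reference, the measurement inside my hook looks roughly like the sketch below (my_hook() is my LSM hook body, and total_cycles is a field I added to task_struct in my patched kernel; neither exists in a vanilla tree):

#include <linux/preempt.h>
#include <linux/sched.h>
#include <linux/timex.h>   /* get_cycles(), cycles_t */

void my_hook(void);   /* my custom recv() processing, defined elsewhere */

/* Time one invocation of the hook in TSC cycles, with preemption disabled
 * so other tasks on this CPU are not charged to us. Interrupts can still
 * fire in this window and inflate the count. */
static void timed_hook(void)
{
    cycles_t start, end;

    preempt_disable();
    start = get_cycles();

    my_hook();

    end = get_cycles();
    preempt_enable();

    current->total_cycles += end - start;   /* field added by my patch */
}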
Question: How can I measure function execution times accurately inside the kernel?
The execution time is simply the end time minus the start time.
You can use the function tracer API to get a trace of all function calls and returns, with high-precision timestamps. This includes interrupt events and context switches. You can then analyze the resulting trace in userspace to get an accurate idea of how long your function takes to run.
If you can't use the function tracer API, you could call do_gettimeofday() to get a microsecond-resolution timestamp, or getnstimeofday() for nanosecond resolution. These are the same functions the userspace gettimeofday() call uses internally. Of course, for very fast functions this may not be sufficient accuracy; if you need anything finer than that, you'll probably need to dig into the timer code to see how it implements cycle conversions. Note also that just because these functions have high resolution does not mean they have that much accuracy, but they should be useful for benchmarking purposes.
Note that any form of tracing will result in additional latency: do_gettimeofday() requires a number of atomic compare-and-swap operations, and ftrace puts logging code on every single function prologue and epilogue. You should take this into consideration when interpreting results.
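For illustration only, wrapping the code you want to measure with getnstimeofday() might look something like the sketch below (not verified against 2.6.22; my_hook() stands in for your code):

#include <linux/time.h>
#include <linux/kernel.h>

void my_hook(void);   /* the code being measured (declared elsewhere) */

/* Time one call with nanosecond-resolution wall-clock timestamps and
 * log the elapsed time. */
static void timed_hook_ns(void)
{
    struct timespec before, after;
    long long ns;

    getnstimeofday(&before);
    my_hook();
    getnstimeofday(&after);

    ns = (long long)(after.tv_sec - before.tv_sec) * 1000000000LL
         + (after.tv_nsec - before.tv_nsec);
    printk(KERN_DEBUG "hook took %lld ns\n", ns);
}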
I'm not sure you will obtain the result you want, but we use the following code to get microsecond-resolution timestamps.
#include <sys/time.h>

/* Returns the current wall-clock time in seconds as a double, with
 * microsecond resolution (despite the name, the unit is seconds). */
double Microsecs(void)
{
    static struct timeval _t;
    static struct timezone tz;

    gettimeofday(&_t, &tz);
    return (double)_t.tv_sec + (double)_t.tv_usec / (1000 * 1000);
}
Then you call it before and after the call you want to measure and take the difference.
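For instance, timing a single read() might look like this (fd, buf, and len are placeholders, and <stdio.h> and <unistd.h> are assumed to be included):

/* Example: time one read() call using Microsecs(). */
static void time_read(int fd, char *buf, size_t len)
{
    double start = Microsecs();
    read(fd, buf, len);
    double elapsed = Microsecs() - start;   /* seconds, microsecond resolution */
    printf("read took %.6f s\n", elapsed);
}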
We've been using this method to evaluate I/O time, monitoring read/write/seek operations in order to optimize performance, and we're getting good results.
HTH.