I need to profile a program to see whether any performance changes are needed. I suspect they are, but measuring first is the way to go. This is not that program, but it illustrates the problem I'm having:
#include <stdio.h>

int main(int argc, char** argv)
{
    FILE* fp = fopen("trivial.c", "r");
    if (fp)
    {
        char line[80];
        while (fgets(line, 80, fp))
            printf("%s", line);  /* "%s" avoids treating the line itself as a format string */
        fclose(fp);
    }
    return 0;
}
Here's what I did with it:
% gcc trivial.c -pg -o trivial
% ./trivial
...
% gprof trivial gmon.out
Granted, this is a trivial program, but I would have thought it would make some kind of blip on the profiling radar. It didn't:
                                     called/total       parents
    index  %time    self descendents  called+self    name      index
                                     called/total       children

                    0.00        0.00       1/1          __start [1704]
    [105]    0.0    0.00        0.00       1        _main [105]

    -----------------------------------------------

      %   cumulative   self              self     total
     time   seconds   seconds    calls  ms/call  ms/call  name
      0.0       0.00     0.00        1     0.00     0.00  _main [105]

    Index by function name

    [105] _main
Can anyone guide me here? I would like the output to reflect that it called fgets and printf at least 14 times, and it did hit the disk, after all - there should be some measured time, surely.
When I run the same command on the real program, I get more functions listed, but even then it is not a complete list - just a sample.
Perhaps gprof is not the right tool to use. What is?
This is on OS X Leopard.
Edit: I ran the real program and got this:
% time real_program
real 4m24.107s
user 2m34.630s
sys 0m38.716s
I think you could try the various Valgrind tools, especially callgrind, which records call counts and the inclusive cost of each call in your program. Because callgrind counts every call rather than sampling, even a short run like yours will show up - part of your problem is that gprof relies on periodic sampling and on code compiled with -pg, so a program that finishes in a few milliseconds, calling into a libc that was not built with -pg, leaves almost nothing for it to record.
There are various nice visualisation tools for the Valgrind output, such as KCachegrind. I don't know about particular tools for OS X though.
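A minimal sketch of the callgrind workflow (the exact output file name includes the process ID, shown here as a placeholder):

```shell
# Run the program under callgrind; writes callgrind.out.<pid>
valgrind --tool=callgrind ./trivial

# Summarise call counts and costs per function
callgrind_annotate callgrind.out.<pid>
```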