Using gprof with sockets

I have a program I want to profile with gprof. The problem (seemingly) is that it uses sockets. So I get things like this:

::select(): Interrupted system call

I hit this problem a while back, gave up, and moved on. But I would really like to be able to profile my code, using gprof if possible. What can I do? Is there a gprof option I'm missing? A socket option? Is gprof totally useless in the presence of these types of system calls? If so, is there a viable alternative?

EDIT: Platform:

  • Linux 2.6 (x64)
  • GCC 4.4.1
  • gprof 2.19
asked Jun 02 '10 by Chris Tonkinson


2 Answers

The socket code needs to handle interrupted system calls regardless of the profiler, but under a profiler it becomes unavoidable: gprof's profiling timer periodically delivers SIGPROF, and that signal interrupts blocking calls like select() with EINTR. This means having code like:

if ( errno == EINTR ) { ...

after each system call.
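For instance, here is a minimal sketch of an EINTR-aware wrapper; the name select_retry and the single-descriptor setup are illustrative assumptions, not from the question's code:

    #include <cerrno>
    #include <cstdio>
    #include <sys/select.h>

    // Wait for fd to become readable, retrying when a signal (such as
    // gprof's SIGPROF) interrupts the call. Returns the select() result.
    int select_retry(int fd, struct timeval timeout)
    {
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);

            // On Linux, select() may modify the timeout, so pass a copy.
            // (Retrying with the full timeout can overshoot the intended
            // deadline under frequent signals; good enough for a sketch.)
            struct timeval tv = timeout;
            int rc = ::select(fd + 1, &readfds, NULL, NULL, &tv);
            if (rc >= 0)
                return rc;        // number of ready descriptors, or 0 on timeout
            if (errno == EINTR)
                continue;         // interrupted system call: just retry
            std::perror("::select()");
            return -1;            // a genuine error
        }
    }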

Take a look here, for example, for the background.

answered by Nikolai Fetissov


gprof (here's the paper) is reliable, but it was only ever intended to measure changes, and even for that it only measures CPU-bound issues. It was never advertised as useful for locating problems; that is an idea other people layered on top of it.

Consider this method.

Another good option, if you don't mind spending some money, is Zoom.

Added: let me give you an example. Suppose you have a call hierarchy where Main calls A some number of times, A calls B some number of times, B calls C some number of times, and C waits on I/O with a socket or file; that's basically all the program does. Now suppose each routine calls the next one down 25% more times than it really needs to. Since 1.25^3 ≈ 1.95, the entire program takes about twice as long to run as it needs to.

In the first place, since all the time is spent waiting for I/O, gprof will tell you nothing about how that time is spent, because it only looks at "running" (CPU) time.

Second, suppose (just for the sake of argument) it did count the I/O time. It could give you a call graph saying, basically, that each routine takes 100% of the time. What would that tell you? Nothing you didn't already know.

However, if you take a small number of stack samples, on every one of them you will see the lines of code where each routine calls the next. In other words, it isn't just giving you a rough percentage-of-time estimate; it is pointing you at specific lines of code that are costly. You can look at each line and ask whether there is a way to do it fewer times. Assuming you do, you get the factor-of-2 speedup.
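If you'd rather automate the sampling than pause in a debugger, here is a rough self-sampling sketch; this is my own illustration, not something gprof provides. It uses glibc's backtrace() on a wall-clock timer, so time blocked in select() gets sampled too (link with -rdynamic to get readable symbol names):

    #include <execinfo.h>
    #include <signal.h>
    #include <sys/time.h>
    #include <unistd.h>

    // Dump the current call stack. backtrace_symbols_fd() writes straight
    // to the fd without calling malloc(), unlike backtrace_symbols().
    static void sample_stack(int)
    {
        void *frames[64];
        int n = backtrace(frames, 64);
        backtrace_symbols_fd(frames, n, STDERR_FILENO);
        write(STDERR_FILENO, "----\n", 5);
    }

    static void start_sampling()
    {
        // backtrace() lazily loads libgcc on first use, which is not safe
        // inside a signal handler, so warm it up here.
        void *warmup[1];
        backtrace(warmup, 1);

        struct sigaction sa;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sa.sa_handler = sample_stack;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval it;
        it.it_interval.tv_sec = 0;
        it.it_interval.tv_usec = 250000;  // about 4 samples per second
        it.it_value = it.it_interval;
        setitimer(ITIMER_REAL, &it, NULL);
    }

Call start_sampling() early in main(), run the program for a while, and look for the call sites that show up in most of the "----"-separated samples. Note that SIGALRM will interrupt blocking calls with EINTR, exactly as discussed in the other answer, so that handling has to be in place anyway.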

People get big factors this way. In my experience, the number of call levels can easily be 30 or more. Every call seems necessary, until you ask if it can be avoided. Even small numbers of avoidable calls can have a huge effect over that many layers.

answered by Mike Dunlavey