 

Profiling a C++ project in terms of execution time

I need some help profiling existing code for execution time, with the intention of speeding it up.

I've been given some code that was worked on previously. It is written entirely in C++ using OO concepts. There is a GUI-based interface to it, and selecting a certain option runs a selected piece of code. (There are about 11 classes in the project.)

I want to be able to press a GUI option, let the code run, and have it generate a resource map like:

Functions of Class 1 = 20% of execution time
Functions of Class 2 = 60% of execution time
Functions of Class 3 = 10% of execution time
Functions of Class 4 = 10% of execution time

That way, I know which class is taking up the most time and then know which to work on and improve. However, I have no idea how to go about doing this. I only have basic C++ knowledge.

I did read this post: find c++ execution time. However, the program is not serial; one class calls another, and that calls another, so I don't see how measuring system clock ticks would work here.

I have read about programs like Valgrind, Zoom, Poor Man's Profiler, etc., but honestly have no idea how to integrate them with the code. Is there a simpler method?

I also read this method: How can I profile C++ code running in Linux?, but I don't see how I could get pin-pointed, class-based information from it (Class 1, Class 2, etc.).

Could someone please advise for a newbie?

asked Jun 13 '12 by c0d3rz

2 Answers

Valgrind (with its callgrind subtool) is pretty simple to use. You just need to make sure that sufficient debugging info is compiled/linked into your program so that callgrind can find the names of the various functions that are getting called. Then, instead of calling your program directly, pass it (and its arguments) as parameters to valgrind, like:

valgrind --tool=callgrind --trace-children=yes <myprogram> <myprogram_args>

(--trace-children is there in case your real executable is hiding behind some layer or layers of wrapper scripts)
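For the debugging-info part, compiling with the -g flag is usually enough. A hedged example (the source file names here are made up):

g++ -g -O2 -o myprogram main.cpp gui.cpp model.cpp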

Note that your program will run much more slowly (like 100x slower) because every single function entrypoint is being traced.

Various tools exist to explore callgrind's output, notably kcachegrind/qcachegrind.
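For a quick look without a GUI, the callgrind_annotate tool that ships with Valgrind prints a flat summary on the command line; the output file produced by the run above has the process id in its name:

callgrind_annotate callgrind.out.<pid>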

Alternatively, you could measure system clock ticks for some small number of high-level functions (so you see "time taken by function X and everything underneath it") and progress down through your code as you find hotspots.

Something like this (conceptually, needs to be properly organized into headers / sources):

#include <ctime>
#include <cstdio>
#include <map>
#include <string>

struct FunctionTimer {
  FunctionTimer(char const * name) : mName(name), mStartTime(clock()) { }
  ~FunctionTimer() { mFunctionTimes[mName] += clock() - mStartTime; }

  static void report()
  {
    // Iterate through mFunctionTimes, printing the names and accumulated ticks.
    for (std::map<std::string, clock_t>::const_iterator it = mFunctionTimes.begin();
         it != mFunctionTimes.end(); ++it)
      std::printf("%s: %ld ticks\n", it->first.c_str(), (long)it->second);
  }

  std::string mName;    // function name used as the map key
  clock_t mStartTime;   // tick count captured when the timer was constructed

  static std::map<std::string, clock_t> mFunctionTimes;
};

// Definition of the static map (put this in exactly one source file).
std::map<std::string, clock_t> FunctionTimer::mFunctionTimes;

...

void myfunc()
{
  FunctionTimer ft("myfunc");
  ... code of myfunc ...
}

...

int main(int argc, char* argv[])
{
  ... do stuff ...
  FunctionTimer::report();
}
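If typing the function name by hand feels error-prone, a small macro can pull it from __func__ instead. A sketch, where TIME_THIS_FUNCTION is just an illustrative name and __func__ needs C++11 or a compiler that offers it as an extension:

#define TIME_THIS_FUNCTION FunctionTimer functionTimerGuard(__func__)

void myOtherFunc()
{
  TIME_THIS_FUNCTION;
  ... code of myOtherFunc ...
}

To get the per-class breakdown you asked about, the name string could include the class (e.g. "Class1::myfunc"), and report() could sum entries by that prefix.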
answered Oct 27 '22 by Scott Howlett

An ugly solution is to start and stop timers around each function of interest, adding the elapsed time to some global variable after each call. Then, at the end of main, you just compare the variables to calculate the percentage of time spent in each.

This, however, could get really gross, especially if there are lots of functions. If you're familiar with an aspect-oriented flavor of C++, you could temporarily use that, because an aspect would let you more easily put that boilerplate code around all your functions.
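For the plain (non-aspect) version, here is a minimal sketch of the global-variable idea above, assuming clock() granularity is acceptable; all names are illustrative:

#include <ctime>
#include <cstdio>

// One global accumulator per class of interest.
clock_t gClass1Ticks = 0;
clock_t gClass2Ticks = 0;

void someClass1Work()            // stand-in for a real Class 1 method
{
  for (volatile long i = 0; i < 1000000; ++i) { }
}

int main()
{
  clock_t start = clock();
  someClass1Work();              // time one call and add it to the accumulator
  gClass1Ticks += clock() - start;

  clock_t total = gClass1Ticks + gClass2Ticks;
  if (total > 0)
    std::printf("Class 1: %.1f%% of measured time\n",
                100.0 * gClass1Ticks / total);
  return 0;
}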

answered Oct 27 '22 by Cannoliopsida