What do the -p and -g flags do in the compiler?

Tags: c, profiling

I have been profiling a C program, and to do so I compiled it with the -p and -g flags. I was wondering: what do these flags actually do, and what overhead do they add to the binary? Thanks.

asked Dec 07 '11 by Syntax_Error


2 Answers

Assuming you are using GCC, you can get this kind of information from the GCC manual

http://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html#Debugging-Options

-p

Generate extra code to write profile information suitable for the analysis program prof. You must use this option when compiling the source files you want data about, and you must also use it when linking.
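For instance, a build that follows this advice might look like the following (the file and program names are placeholders; with -p the instrumented program typically writes its profile data to mon.out, which the classic prof tool reads on systems that still provide it):

    # pass -p both when compiling each source file and when linking
    gcc -p -c main.c -o main.o
    gcc -p -c util.c -o util.o
    gcc -p main.o util.o -o prog

    ./prog       # running the program writes the profile data (mon.out)
    prof prog    # analyze the data, if your system ships prof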

-g

Produce debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF 2). GDB can work with this debugging information.

On most systems that use stabs format, -g enables use of extra debugging information that only GDB can use; this extra information makes debugging work better in GDB but will probably make other debuggers crash or refuse to read the program. If you want to control for certain whether to generate the extra information, use -gstabs+, -gstabs, -gxcoff+, -gxcoff, or -gvms (see below).

GCC allows you to use -g with -O. The shortcuts taken by optimized code may occasionally produce surprising results: some variables you declared may not exist at all; flow of control may briefly move where you did not expect it; some statements may not be executed because they compute constant results or their values were already at hand; some statements may execute in different places because they were moved out of loops.

Nevertheless it proves possible to debug optimized output. This makes it reasonable to use the optimizer for programs that might have bugs.
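As a quick illustration (the file name is a placeholder), the debug information is requested at compile time and then consumed by a debugger such as gdb:

    gcc -g -O0 -o prog prog.c    # full debug info, no optimization: easiest to step through
    gcc -g -O2 -o prog prog.c    # -g combined with -O also works, with the caveats above
    gdb ./prog                   # gdb can now refer to source lines and variable names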

answered Sep 24 '22 by hugomg

-p provides information for prof, and -pg provides information for gprof.
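For example (the program name is a placeholder), the usual gprof workflow is:

    gcc -pg -o prog prog.c    # compile and link with -pg to instrument for gprof
    ./prog                    # running the program writes gmon.out in the current directory
    gprof prog gmon.out       # produce the flat profile and call graph report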

Let's look at the latter. Here's an explanation of how gprof works, but let me condense it here.

When a routine B is compiled with -pg, some code is inserted at the routine's entry point that looks up which routine is calling it, say A. Then it increments a counter saying that A called B.

Then when the code is executed, two things are happening. The first is that those counters are being incremented. The second is that timer interrupts are occurring, and there is a counter for each routine, saying how many of those interrupts happened when the PC was in the routine.

The timer interrupts happen at a certain rate, like 100 times per second. Then if, for example, 676 interrupts occurred in a routine, you can tell that its "self time" was about 6.76 seconds, spread over all the calls to it.

What the call counts allow you to do is add them up to tell how many times a routine was called, so you can divide that into its total self time to estimate how much self time per call. Then from that you can start to estimate "cumulative time". That's the time spent in a routine, plus time spent in the routines that it calls, and so on down to the bottom of the call tree.
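To make the self-time versus cumulative-time distinction concrete, here is a minimal sketch (the file, program, and function names are invented for illustration). Built with gcc -pg -O0 -o profile_demo profile_demo.c, run once, and then analyzed with gprof profile_demo gmon.out, the flat profile attributes nearly all of the self time to leaf, while the call graph also charges that time to parent's cumulative time:

    /* profile_demo.c -- hypothetical example of self time vs. cumulative time */
    #include <stdio.h>

    static volatile double sink;   /* volatile so the work is not optimized away */

    static void leaf(void)         /* does the real work: accumulates "self time" */
    {
        /* with -pg, code inserted at this entry point records which routine called leaf */
        for (int i = 0; i < 50000000; i++)
            sink += i * 0.5;
    }

    static void parent(void)       /* little self time, large cumulative time */
    {
        for (int i = 0; i < 4; i++)
            leaf();
    }

    int main(void)
    {
        parent();
        printf("%f\n", sink);
        return 0;
    }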

This is all interesting technology, from 1982, but if your goal is to find ways to speed up your program, it has a lot of issues.

answered Sep 24 '22 by Mike Dunlavey