 

Is it safe to disable buffering with stdout and stderr?

Sometimes we put debug prints in our code this way:

printf("successfully reached at debug-point 1\n"); 

some code is here

printf("successfully reached at debug-point 2"); 

After the last printf a segmentation fault occurs.

Now in this situation only debug-point 1 is printed to stdout. The debug-point 2 message was written to the stdio buffer, but it was not flushed because it did not end with \n, so we wrongly conclude that the crash occurred right after debug-point 1.

To overcome this, suppose I disable buffering on the stdout and stderr streams like this:

setvbuf(stdout, NULL, _IONBF, 0);
setvbuf(stderr, NULL, _IONBF, 0);

Then, is it safe to do this?

Why are all streams line buffered by default?

Edit :

What is the usual size of the buffer allocated by default for a file stream? I think it's OS dependent. I would like to know about Linux.

Jeegar Patel asked Feb 11 '12



5 Answers

It is "safe" in one sense, and unsafe in another. It is unsafe to add debug printfs, and for the same reason unsafe to add code to modify the stdio buffering, in the sense that it is a maintenance nightmare. What you are doing is NOT a good debugging technique. If your program gets a segfault, you should simply examine the core dump to see what happened. If that is not adequate, run the program in a debugger and step through it to follow the action. This sounds difficult, but it's really very simple and is an important skill to have. Here's a sample:

$ gcc -o segfault -g segfault.c   # compile with -g to get debugging symbols
$ ulimit -c unlimited             # allow core dumps to be written
$ ./segfault                      # run the program
Segmentation fault (core dumped)
$ gdb -q segfault /cores/core.3632  # On linux, the core dump will exist in
                                    # whatever directory was current for the
                                    # process at the time it crashed.  Usually
                                    # this is the directory from which you ran
                                    # the program.
Reading symbols for shared libraries .. done
Reading symbols for shared libraries . done
Reading symbols for shared libraries .. done
#0  0x0000000100000f3c in main () at segfault.c:5
5               return *x;          <--- Oh, my, the segfault occurred at line 5
(gdb) print x                       <--- And it's because the program dereferenced
$1 = (int *) 0x0                     ... a NULL pointer.
William Pursell answered Nov 18 '22


A possible approach is to have a global bool dodebug flag and define a macro like, e.g.,

#ifdef NDEBUG
#define debugprintf(Fmt,...) do{} while(0)
#else
#define debugprintf(Fmt,...) do {if (dodebug) {                 \
   printf("%s:%d " Fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__); \
   fflush(stdout); }} while(0)
#endif

Then inside your code, have some

debugprintf("here i=%d", i);

Of course, you could use fprintf to stderr in the macro above instead... Notice the fflush and the newline appended to the format.

Disabling buffering should probably be avoided for performance reasons.

Basile Starynkevitch answered Nov 18 '22


Why are all streams line buffered by default?

They are buffered for performance reasons. The library tries hard to avoid making the system call because it takes long. And not all of them are buffered by default. For instance stderr is usually unbuffered and stdout is line-buffered only when it refers to a tty.

Then, is it safe to do this?

It is safe to disable buffering but I must say it's not the best debugging technique.

cnicutar answered Nov 18 '22


Uh, well. You're wrong. Precisely for this reason, stderr is not buffered by default.

EDIT: Also, as a general suggestion, try using debugger breakpoints instead of printfs. Makes life much easier.

aviraldg answered Nov 18 '22


If your program writes a lot of output, disabling buffering will likely make it somewhere between 10 and 1000 times slower. This is usually undesirable. If your aim is just consistency of output when debugging, try adding explicit fflush calls where you want output flushed rather than turning off buffering globally. And preferably don't write crashing code...

R.. GitHub STOP HELPING ICE answered Nov 18 '22