 

How does fprintf behave when used from multiple threads and multiple processes?

Tags:

c

linux

printf

Here are processes a and b, both of which are multithreaded.

  1. a forks b, and b immediately execs a new program;
  2. a dups and freopens stderr to the logfile (a is in fact Apache's httpd 2.22);
  3. b inherits the opened stderr from a (I am adapting Apache httpd; b is my program), and b uses fprintf(stderr, ...) for logging;
  4. so a and b share the same file for logging;
  5. there is no lock mechanism for a and b when writing the log.

I found that some log messages are interleaved, and a few log messages were lost.

Can the two writers to the same file implicitly lock each other out?

The more important question is: if fprintf is used only within a single multithreaded process, is fprintf thread safe? That is, is it guaranteed that one fprintf call will never interleave with another fprintf call made from a different thread? Many articles say so, but I cannot easily convince myself of it, so I am asking for help here.

A: the code that duplicates the fd looks like this:

......
rv = apr_file_dup2(stderr_log, s_main->error_log, stderr_p); /* dup stderr to the logfile */
apr_file_close(s_main->error_log); /* here, two fds point to the same file description, so close one of them */

then

B: Apache itself logs in this manner:

......
if (rv != APR_SUCCESS) {
    ap_log_error(APLOG_MARK, APLOG_CRIT, rv, s_main, ".........");
}

C: for convenience, I log this way:

fprintf(stderr, ".....\n");

I am quite sure Apache and my code use the same fd for file writing.

asked Sep 10 '25 by basketballnewbie
1 Answer

If you're using a single FILE object to perform output on an open file, then whole fprintf calls on that FILE will be atomic: a lock is held on the FILE for the duration of each fprintf call. Since a FILE is local to a single process's address space, this protection only works within one multithreaded process; it does not apply to multi-process setups where several different processes access separate FILE objects referring to the same underlying open file.

Even though you're using fprintf in both processes, each process has its own FILE that it can lock and unlock without the other seeing the change, so writes can end up interleaved. There are several ways to prevent this from happening:

  1. Allocate a synchronization object (e.g. a process-shared semaphore or mutex) in shared memory and make each process obtain the lock before writing to the file (so only one process can write at a time); OR

  2. Use filesystem-level advisory locking, e.g. fcntl locks or the (non-POSIX) BSD flock interface; OR

  3. Instead of writing directly to the log file, write to a pipe that another process will feed into the log file. Writes to a pipe are guaranteed by POSIX to be atomic as long as they are no larger than PIPE_BUF bytes. You cannot use fprintf in this case (since it might perform multiple underlying write operations), but you could use snprintf into a PIPE_BUF-sized buffer followed by a single write.

answered Sep 13 '25 by R.. GitHub STOP HELPING ICE