fork() and STDOUT/STDERR to the console from child processes

I'm writing a program that forks multiple child processes, and I'd like all of these child processes to be able to write lines to STDERR and STDOUT without the output getting garbled. I'm not doing anything fancy, just emitting lines that end with a newline (which, at least in my understanding, would be an atomic operation on Linux). perlfaq says:

Both the main process and the backgrounded one (the "child" process) share the same STDIN, STDOUT and STDERR filehandles. If both try to access them at once, strange things can happen. You may want to close or reopen these for the child. You can get around this with opening a pipe (see open) but on some systems this means that the child process cannot outlive the parent.

It says I should "close or reopen" these filehandles for the child. Closing is simple, but what does it mean by "reopen"? I've tried something like this from within my child processes and it doesn't work (the output still gets garbled):

open(SAVED_STDERR, '>&', \*STDERR) or die "Could not create copy of STDERR: $!";
close(STDERR);

# re-open STDERR
open(STDERR, '>&SAVED_STDERR') or die "Could not re-open STDERR: $!";

So, what am I doing wrong with this? What would the pipe example it alludes to look like? Is there a better way to coordinate output from multiple processes to the console?

asked Oct 06 '10 by mpeters

2 Answers

Writes to a filehandle are NOT atomic for STDOUT and STDERR. There are special cases for things like FIFOs (writes smaller than PIPE_BUF are atomic), but that's not your current situation.
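
If you want to see the non-atomicity for yourself, here's a minimal sketch (the child count and line length are illustrative); whether and where the output interleaves depends on your kernel and terminal, but lines this long usually get split mid-way:

use strict;
use warnings;

# Four children each print long newline-terminated lines to the shared
# STDOUT. Big writes can be split across several write(2) calls, so
# output from different children can interleave mid-line.
for my $id (1 .. 4) {
    defined(my $pid = fork) or die "fork failed: $!";
    next if $pid;                  # parent keeps forking
    $| = 1;                        # autoflush the child's STDOUT
    print "child $id: " . ('x' x 8192) . "\n" for 1 .. 10;
    exit;
}
wait() for 1 .. 4;                 # reap all four children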

When it says re-open STDOUT, what that means is "create a new STDOUT instance". This new instance isn't the same as the one inherited from the parent. It's how you can have multiple terminals open on your system and not have all the STDOUT go to the same place.
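
For instance, the child can reopen STDOUT onto a destination of its own, such as a per-child log file (a sketch; the filename scheme is just an illustration):

if (my $pid = fork) {
    waitpid($pid, 0);              # parent keeps the inherited STDOUT
} else {
    # Child: reopen STDOUT onto its own file, so its prints can no
    # longer mix with the parent's on the shared terminal.
    open(STDOUT, '>', "child_$$.log")
        or die "Could not reopen STDOUT: $!";
    print "this line lands in child_$$.log\n";
    exit;
}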

The pipe solution would connect the child to the parent via a pipe (like | in the shell); the parent then reads from the pipe and multiplexes the output itself, taking care not to interleave output from the pipe with output destined for its own STDOUT. There's an example and write-up of pipes here.

A snippet:

use IO::Handle;

# One pipe per direction: the child writes to PARENTWRITE what the
# parent reads from PARENTREAD; CHILDREAD/CHILDWRITE would carry the
# opposite direction and go unused in this snippet.
pipe(PARENTREAD, PARENTWRITE);
pipe(CHILDREAD, CHILDWRITE);

PARENTWRITE->autoflush(1);
CHILDWRITE->autoflush(1);

if ($child = fork) { # Parent code
   chomp($result = <PARENTREAD>);
   print "Got a value of $result from child\n";
   waitpid($child, 0);
} else {             # Child code
   print PARENTWRITE "FROM CHILD\n";
   exit;
}

Note how the child doesn't write to stdout but rather uses the pipe to send a message to the parent, which does the writing with its own stdout. Be sure to take a look at the full example, as I omitted things like closing unneeded file handles.
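
With several children, the same idea scales by giving each child its own pipe and letting the parent multiplex them, for example with IO::Select. This is a sketch of that pattern (handle counts and messages are illustrative), not a drop-in for any particular program:

use strict;
use warnings;
use IO::Handle;
use IO::Select;

my $sel = IO::Select->new;

for my $id (1 .. 3) {
    pipe(my $reader, my $writer) or die "pipe failed: $!";
    defined(my $pid = fork) or die "fork failed: $!";
    if ($pid) {                    # parent keeps the read end
        close $writer;
        $sel->add($reader);
    } else {                       # child writes lines into its pipe
        close $reader;
        $writer->autoflush(1);
        print $writer "child $id says hello\n";
        close $writer;
        exit;
    }
}

# The parent is the only process touching its STDOUT, so it can emit
# whole lines from the pipes without anything interleaving mid-line.
while ($sel->count) {
    for my $fh ($sel->can_read) {
        if (defined(my $line = <$fh>)) {
            print $line;
        } else {
            $sel->remove($fh);     # EOF: that child is done
            close $fh;
        }
    }
}
wait() for 1 .. 3;                 # reap the children

One caveat: mixing buffered line reads with select can strand data in Perl's buffer once a child writes several lines at once, so a production version would read with sysread and split lines itself.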

answered by Paul Rubel


While this doesn't help with your garbling, it took me a long time to find a way to launch a child process that the parent can write to, while the child's stderr and stdout are sent directly to the screen (this avoids the nasty blocking issues you can hit when trying to read from two different FDs without using something fancy like select).

Once I figured it out, the solution was trivial:

use IPC::Open3;

# open3(WRITER, CHILD_STDOUT, CHILD_STDERR, COMMAND): the ">&" forms
# dup the child's stdout and stderr straight onto the parent's handles.
my $pid = open3(*CHLD_IN, '>&STDOUT', '>&STDERR', 'some child program');

# write to child
print CHLD_IN "some message\n";
close(CHLD_IN);
waitpid($pid, 0);

Everything from "some child program" will be emitted to stdout/stderr, and you can simply pump data by writing to CHLD_IN and trust that it'll block if the child's buffer fills. To callers of the parent program, it all just looks like stderr/stdout.
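
As a quick sanity check of the plumbing, here's a sketch using cat as a stand-in child (cat simply copies its stdin to its stdout, so every line pumped in should reappear on the parent's STDOUT):

use strict;
use warnings;
use IPC::Open3;

# The child's stdout/stderr are duped onto the parent's, so the lines
# cat echoes show up directly on this program's STDOUT.
my $pid = open3(*CHLD_IN, '>&STDOUT', '>&STDERR', 'cat');

print CHLD_IN "line $_ through the child\n" for 1 .. 3;
close(CHLD_IN);                    # EOF tells cat to exit
waitpid($pid, 0);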

answered by EJ Campbell