I have finished an earlier multithreaded program that uses Perl threads, and it works on my system. The problem is that on some of the systems it needs to run on, thread support is not compiled into Perl and I cannot install additional packages. I therefore need something other than threads, and I am moving my code over to fork(). Starting the subtasks this way works on my Windows system.
A few problems:
How do I determine when a child process exits? With threads I created a new thread whenever the thread count dropped below a certain value, so I need to keep track of how many are running. For processes, how do I know when one exits so I can maintain a count of how many exist at any time, incrementing a counter when one is created and decrementing it when one exits?
Is file I/O safe in a child process when the handle was obtained with open() in the parent process? Each child process needs to append to a file; is this safe on Unix as well as Windows?
Is there any alternative to fork and threads? I tried Parallel::ForkManager, but it isn't installed on my system (use Parallel::ForkManager; gave an error), and I absolutely require that my Perl script work on all Unix/Windows systems without installing any additional modules.
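One way to cope with the missing module, sketched here on the assumption that a plain-fork fallback (like the code further down) is acceptable, is to probe for Parallel::ForkManager at runtime with eval/require instead of a compile-time use. The do_task() helper below is a placeholder, not a real function:

use strict;
use warnings;

# Probe for Parallel::ForkManager at runtime instead of a hard "use",
# so the script still compiles on systems where it is missing.
my $have_pfm = eval { require Parallel::ForkManager; 1 };

if ($have_pfm) {
    my $pm = Parallel::ForkManager->new(3);   # up to 3 workers
    for my $task (1 .. 9) {
        $pm->start and next;                  # parent: launch next task
        do_task($task);                       # child: do the work (placeholder)
        $pm->finish;                          # child exits here
    }
    $pm->wait_all_children;
} else {
    # fall back to the plain fork()/waitpid() approach shown below
}

If the eval fails, the script keeps running and can drop into the core-only fork()/waitpid() code in the answers below.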
Typical usage:
use POSIX ':sys_wait_h'; # for &WNOHANG
# how to create a new background process
$pid = fork();
if (!defined $pid) { die "fork() failed!" }
if ($pid == 0) {   # child
    # ... do stuff in background ...
    exit 0;        # don't forget to exit or die from the child process
}
# else this is the parent, $pid contains process id of child process
# ... do stuff in foreground ...
# how to tell if a process is finished
# also see perldoc perlipc
$pid = waitpid -1, 0; # blocking wait for any process
$pid = wait; # blocking wait for any process
$pid = waitpid $mypid, 0; # blocking wait for process $mypid
# after a blocking wait/waitpid
if ($pid == -1) {
    print "All child processes are finished.\n";
} else {
    print "Process $pid is finished.\n";
    print "The exit status of process $pid was $?\n";
}
$pid = waitpid -1, &WNOHANG;     # non-blocking wait for any process
$pid = waitpid $mypid, &WNOHANG; # non-blocking wait for process $mypid
# after a non-blocking waitpid
if ($pid == 0) {
    print "No child processes have finished since the last wait/waitpid call.\n";
} elsif ($pid == -1) {
    print "There are no child processes left.\n";
} else {
    print "Process $pid is finished.\n";
    print "The exit status of process $pid was $?\n";
}
# terminating a process - see perldoc -f kill or perldoc perlipc
# this can be flaky on Windows
kill 'INT', $pid; # send SIGINT to process $pid
Gory details are in perldoc -f fork, perldoc -f waitpid, perldoc -f wait, perldoc -f kill, and perldoc perlipc. The stuff in perlipc about setting up a handler for SIGCHLD events should be particularly helpful, though that isn't supported on Windows.
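Since the question also asks how to keep count of how many children are running, here is a minimal sketch of that bookkeeping using a SIGCHLD reaper. The $running counter and spawn_child() helper are names made up for illustration, and as noted above this signal-based approach isn't available on Windows:

use POSIX ':sys_wait_h';

my $running = 0;    # number of live child processes

# Reap any children that have exited and decrement the counter.
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        $running--;
    }
};

sub spawn_child {
    my $pid = fork();
    die "fork() failed: $!" unless defined $pid;
    if ($pid == 0) {      # child
        # ... do the work ...
        exit 0;
    }
    $running++;           # parent
    return $pid;
}

The parent can then check $running before deciding whether to spawn another worker, which mirrors the thread-count logic in the original program.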
I/O across forked processes is generally safe on Unix and Windows. File descriptors are shared, so for something like this
open X, ">", $file;
if (fork() == 0) {   # in child
    print X "Child\n";
    close X;
    exit 0;
}
# in parent
sleep 1;
print X "Parent\n";
close X;
both child and parent processes will successfully write to the same file (be aware of output buffering, though).
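Because the question specifically needs each child to append to a file, here is a hedged sketch of the same idea with the buffering caveat handled: open in append mode before forking and turn on autoflush so each print goes out as a whole line. The $logfile variable is a placeholder name:

use IO::Handle;

open(my $log, '>>', $logfile) or die "can't open $logfile: $!";
$log->autoflush(1);                    # flush each print immediately

if (fork() == 0) {                     # child
    print {$log} "child $$ was here\n";
    close $log;
    exit 0;
}
print {$log} "parent $$ was here\n";   # parent
waitpid -1, 0;                         # reap the child
close $log;

On most Unix systems small appends like these land as whole lines at the end of the file, so parent and child output won't clobber each other, though the relative order of the lines is not guaranteed.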
Take a look at waitpid. Here is some code that has nine tasks that need to be done (1 through 9). It will start up to three workers to do those tasks.
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ":sys_wait_h";
my $max_children = 3;
my %work = map { $_ => 1 } 1 .. 9;
my @work = keys %work;
my %pids;
while (%work) {
    # while there are still empty slots
    while (@work and keys %pids < $max_children) {
        # get some work for the child to do
        my $work = shift @work;

        die "could not fork" unless defined(my $pid = fork);

        # parent
        if ($pid) {
            $pids{$pid} = 1;
            next;
        }

        # child
        print "$$ doing work $work\n";
        sleep 1;
        print "$$ done doing work $work\n";
        exit $work;
    }

    my $pid = waitpid -1, WNOHANG;
    if ($pid > 0) {
        delete $pids{$pid};
        my $rc = $? >> 8;   # get the exit status
        print "saw $pid was done with $rc\n";
        delete $work{$rc};
        print "work left: ", join(", ", sort keys %work), "\n";
    }

    select undef, undef, undef, .25;   # sleep for a quarter second
}
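Note that this example smuggles the finished task id back to the parent through the child's exit status ($? >> 8), which only holds values 0 to 255; that works here because the tasks are numbered 1 through 9, but for larger ids or real results you would need a pipe, a file, or some other IPC channel.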