I'd like to redirect the stdout of process proc1 to two processes proc2 and proc3:
        proc2 -> stdout
       /
proc1
       \
        proc3 -> stdout
I tried
proc1 | (proc2 & proc3)
but it doesn't seem to work, i.e.
echo 123 | (tr 1 a & tr 1 b)
writes
b23
to stdout instead of
a23
b23
Yes, multiple processes can read from (or write to) a pipe. But data isn't duplicated for the processes: each byte written to a pipe is consumed by exactly one reader. Multiple processes can write to the same pipe, but any given piece of data is read by only one of them.
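A quick way to see this (assuming a throwaway FIFO path such as /tmp/demo.$$) is to attach two readers to one pipe; each line shows up under exactly one reader, never under both:

mkfifo /tmp/demo.$$
sed 's/^/reader1: /' </tmp/demo.$$ &    # first reader
sed 's/^/reader2: /' </tmp/demo.$$ &    # second reader
printf 'one\ntwo\nthree\n' >/tmp/demo.$$
wait
rm /tmp/demo.$$

In practice whichever reader happens to read first often gets all three lines in one go, but none of the data is delivered twice.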
You can chain processes using the pipe character '|'. A pipe combines two or more commands: the output of one command acts as the input to the next command, and that command's output may in turn act as input to the command after it, and so on, as in the one-liner below.
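For example, a simple chain (the commands here are just illustrative, and this is plain chaining, not the two-way split the question asks about):

$ echo 123 | tr 1 a | tr 2 b
ab3

Each stage reads the previous stage's output: "123" becomes "a23", then "ab3".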
stdin / stdout are logical names for open files that are passed on (or set up) by the process that starts a given process. In fact, with the standard fork-and-exec pattern, that setup may already happen in the new process (after fork) before exec is called.
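You can see the same mechanism from the shell: a redirection is set up on the child's file descriptors before the command itself runs. A small sketch, using a hypothetical temp file:

( exec >/tmp/out.$$; echo 123 | tr 1 a )   # the subshell re-points its stdout first
cat /tmp/out.$$                            # prints: a23
rm /tmp/out.$$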
Editor's note:
- >(…) is a process substitution, a nonstandard shell feature of some POSIX-compatible shells: bash, ksh, zsh.
- This answer accidentally sends the process substitution's output through the pipeline too: echo 123 | tee >(tr 1 a) | tr 1 b.
- Output from the process substitutions will be unpredictably interleaved, and, except in zsh, the pipeline may terminate before the commands inside >(…) do.
In Unix (or on a Mac), use the tee command:

$ echo 123 | tee >(tr 1 a) >(tr 1 b) >/dev/null
b23
a23
Usually you would use tee to redirect output to multiple files, but using >(...) you can redirect to another process. So, in general,

$ proc1 | tee >(proc2) ... >(procN-1) >(procN) >/dev/null

will do what you want.
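For instance, here is a concrete instance of that pattern (with hypothetical output file names), which also keeps the two outputs from interleaving on the terminal:

$ echo 123 | tee >(tr 1 a > a.out) >(tr 1 b > b.out) >/dev/null
$ cat a.out b.out
a23
b23

The caveat above still applies: the pipeline may return before the commands inside >(…) have finished writing their files.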
Under Windows, I don't think the built-in shell has an equivalent. Microsoft's Windows PowerShell has a tee command, though.
Like dF said, bash allows the >(…) construct, which runs a command in place of a filename. (There is also the <(…) construct, which substitutes the output of another command in place of a filename, but that is irrelevant here; I mention it just for completeness.)
If you don't have bash, or are running on a system with an older version of bash, you can do manually what bash does by making use of FIFO files.

The generic way to achieve what you want is:
subprocesses="a b c d" mypid=$$ for i in $subprocesses # this way we are compatible with all sh-derived shells do mkfifo /tmp/pipe.$mypid.$i done
for i in $subprocesses do tr 1 $i </tmp/pipe.$mypid.$i & # background! done
proc1 | tee $(for i in $subprocesses; do echo /tmp/pipe.$mypid.$i; done)
for i in $subprocesses; do rm /tmp/pipe.$mypid.$i; done
NOTE: for compatibility reasons, I would write the $(…) with backquotes, but I couldn't do so while writing this answer (the backquote is used by SO's markup). Normally, $(…) is old enough to work even in old versions of ksh, but if it doesn't, enclose the … part in backquotes.
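As a concrete sketch of the same idea with just two FIFOs (the /tmp paths are only an assumption, and tee's own copy is discarded so that only the tr outputs appear):

mypid=$$
mkfifo /tmp/pipe.$mypid.a /tmp/pipe.$mypid.b
tr 1 a </tmp/pipe.$mypid.a &    # first reader, in the background
tr 1 b </tmp/pipe.$mypid.b &    # second reader, in the background
echo 123 | tee /tmp/pipe.$mypid.a /tmp/pipe.$mypid.b >/dev/null
wait                            # let both readers finish before cleaning up
rm /tmp/pipe.$mypid.a /tmp/pipe.$mypid.b

This prints a23 and b23, in whichever order the two readers finish.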