I have a command I would like to run to generate a random string:
var=`< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c8`
When I run this command in an interactive bash session I get no errors at all. But when I put this command into a script and run it as a script, I get a "Broken pipe" error reported by tr. I've read several related topics but still have no answer: why do the script and the interactive session behave differently, and is there a way to control this with shell options or something else?
Edit I:
In regard to the comments given, I found that the reporting of broken pipe errors can be controlled via:
trap - SIGPIPE # restore the default disposition: the error is not shown
and
trap "" SIGPIPE # to display errors
Edit II:
Well, I provided incorrect information about the reproduction conditions. It finally turned out that the problem is caused by the Python wrapper that calls the script with os.system():
python -c "import os; os.system('sh -c \"< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c8\"')"
The line given above produces broken pipe errors regardless of the OS used.
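As a possible workaround (my own sketch, not from the original thread): restoring the default SIGPIPE disposition in the Python process before the os.system() call makes the pipeline behave as it does in an interactive shell, so no error is reported:
python -c "import os, signal; signal.signal(signal.SIGPIPE, signal.SIG_DFL); os.system('sh -c \"< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c8\"')"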
Edit III:
This topic has been discussed here: https://mail.python.org/pipermail/python-dev/2005-September/056341.html
If one of the parent processes ignores SIGPIPE, then the pipeline will inherit the ignore signal disposition, and that is what causes the problem you're experiencing.
This can be (safely) reproduced with:
( trap '' pipe; var=`< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c8` )
Normally, the head -c8 command will be done pretty soon, at which point its stdin is closed. Since its stdin is a pipe connected to the stdout of tr, it then no longer makes sense for tr to write to its stdout. Once it tries to, the system will kill it with SIGPIPE.
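This can be observed in bash with the PIPESTATUS array (a bash-specific illustration of mine): tr's reported status of 141 is 128 + 13, i.e. death by signal 13 (SIGPIPE), while head exits with 0:
< /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c8 > /dev/null; echo "${PIPESTATUS[@]}"   # typically prints: 141 0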
That is, unless tr ignores this signal or has inherited the ignore (SIG_IGN) disposition for it from its parent. In that case, a write to tr's broken stdout simply fails with a regular error, setting errno to EPIPE, at which point tr will most likely print that error to its stderr and exit.
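The contrast shows up in the same bash-specific illustration under the ignored disposition: with GNU coreutils, tr prints something like "tr: write error: Broken pipe" and exits with an ordinary failure status instead of 141:
( trap '' PIPE; < /dev/urandom tr -dc _A-Z-a-z-0-9 | head -c8 > /dev/null; echo "${PIPESTATUS[@]}" )   # typically prints: 1 0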
This answer provides a good summary of the problem of piping from Python to head, and shows some workarounds: https://stackoverflow.com/a/30091579/456550
The problem seems to be that head reads the specified (or default) number of lines from the input stream, prints them, and then quits. So an upstream program in a pipe that is still writing finds the output stream closed. In my opinion, this is a limitation in the design of head itself. You can instead use sed, which reads the whole stream: sed -n "1,10p" is equivalent to head -n10.
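A quick way to see the difference with a finite upstream writer (again a bash-specific illustration of mine using PIPESTATUS; exact numbers may vary):
seq 1000000 | head -n10 > /dev/null; echo "${PIPESTATUS[@]}"      # typically 141 0: seq is killed by SIGPIPE
seq 1000000 | sed -n "1,10p" > /dev/null; echo "${PIPESTATUS[@]}"  # 0 0: sed drains the whole stream
Note, however, that this only helps with a finite input; it would never terminate with an endless source like /dev/urandom.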