I need to run a shell script in Hudson. That script needs an answer from the user. To give an automatic answer, I used the following command line:
yes | ./MyScript.sh
This works well in an Ubuntu terminal. But when I use the same command in a Hudson job, the script runs and does all the needed work, but at the end I get these two error lines:
yes: standard output: Broken pipe
yes: write error
This causes my Hudson job to fail.
How should I change my command line so that it works in Hudson?
But how would you explain that I don't get this error while running the script locally, yet I do get it when running it remotely from a Hudson job?
When you run it in a terminal (locally), yes is killed by the SIGPIPE signal, which is generated when it tries to write to the pipe after MyScript.sh has already exited.
Whatever runs the command (remotely) in Hudson traps that signal (sets its handler to SIG_IGN; you can test this by running the trap command and searching for SIGPIPE in the output) and does not restore the signal for new child processes (yes and whatever runs MyScript.sh, e.g., sh in your case). This leads to a write error (EPIPE) instead of the signal. yes detects the write error and reports it.
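To see the difference for yourself, here is a small sketch (assuming bash, GNU coreutils yes, and Linux signal numbers; head -n 1 stands in for a MyScript.sh that exits early):

```shell
#!/usr/bin/env bash

# Terminal-style run: head exits after one line, the next write by yes
# raises SIGPIPE, and yes dies from the signal. In bash the exit status
# of a process killed by a signal is 128 + signal number; SIGPIPE is 13
# on Linux, so this normally prints 141.
yes | head -n 1 >/dev/null
echo "terminal-style run: yes exited with ${PIPESTATUS[0]}"

# Hudson-style run: the parent ignores SIGPIPE, and the ignored
# disposition is inherited by the child processes, so yes gets an EPIPE
# write error instead of the signal, prints the two messages from the
# question to stderr, and exits with a plain error status.
bash -c 'trap "" PIPE; yes | head -n 1 >/dev/null; echo "hudson-style run: yes exited with ${PIPESTATUS[0]}"'
```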
You can simply ignore the error message:
yes 2>/dev/null | ./MyScript.sh
You could also report the bug against the component that runs the pipeline. The bug is failing to restore SIGPIPE to the default handler after the child is forked; that is what programs expect when they are run in a terminal on POSIX systems. I don't know whether there is a standard way to do this for a Java-based program, though. The JVM probably raises an exception for every write error, so not dying on SIGPIPE is not a problem for a Java program.
It is common for daemons such as the Hudson process to ignore the SIGPIPE signal. You don't want your daemon to die just because the process it is communicating with dies, and you would check for write errors anyway.
Ordinary programs that are written to be run in a terminal do not check the status of every printf() for errors, but you do want them to die if programs further down the pipeline die. For example, in a source | sink pipeline, you usually want the source process to exit as soon as possible once sink exits.
The EPIPE write error is returned if the SIGPIPE signal is disabled (as appears to be the case in Hudson) or if a program does not die on receiving it (the yes program does not define any handler for SIGPIPE, so it should die on receiving the signal).
I don't want to ignore the error; I want the right command or fix to get rid of it.
The only ways the yes process stops are being killed or encountering a write error. If the SIGPIPE signal is set to be ignored (by the parent) and no other signal kills the process, then yes receives a write error when ./MyScript.sh exits. There are no other options if you use the yes program.
The SIGPIPE signal and the EPIPE error communicate exactly the same information: the pipe is broken. If SIGPIPE were enabled for the yes process, you wouldn't see the error. And even though you do see it, nothing new happens; it just means that ./MyScript.sh exited (successfully or unsuccessfully, it doesn't matter).
I had this error, and my problem with it is not that it outputs yes: standard output: Broken pipe, but rather that it returns an error code.
Because I run my script in bash strict mode, including -o pipefail, when yes "errors" it causes my script to fail.
I avoided this like so:
bash -c "yes || true" | my-script.sh
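A minimal sketch of the failure mode and the workaround (assuming bash; head -n 1 stands in for my-script.sh, which is not part of the original post):

```shell
#!/usr/bin/env bash
set -o pipefail

# With pipefail, the pipeline reports yes's status even though head
# succeeded; in a normal terminal on Linux this is 141 (killed by SIGPIPE).
yes 2>/dev/null | head -n 1 >/dev/null
echo "unguarded pipeline status: $?"

# Guarding yes with || true swallows its death/write error, so the
# pipeline's status comes from head alone and is 0.
{ yes 2>/dev/null || true; } | head -n 1 >/dev/null
echo "guarded pipeline status: $?"
```

The 2>/dev/null additionally silences the Broken pipe message; the bash -c "yes || true" form above achieves the same guarding in a single command string.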