I've grown fond of using a generator-like pattern between functions in my shell scripts. Something like this:
parse_commands /da/cmd/file | process_commands
However, the basic problem with this pattern is that if parse_commands encounters an error, the only way I have found to notify process_commands of the failure is to signal it explicitly in the stream (e.g. echo "FILE_NOT_FOUND"). That means every operation in parse_commands that can fail would have to be wrapped in its own error check.
Is there no way for process_commands to detect that the left side of the pipe exited with a non-zero exit code?
Use set -o pipefail
at the top of your bash script. By default a pipeline's exit status is that of its last command, so a failure on the left side is silently discarded; with pipefail, the pipeline instead returns the last non-zero exit status. Note that this does not stop the right side from running (both processes are still spawned), but it does let the calling script detect that the left side failed.
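A minimal sketch of this (parse_commands and process_commands stand in for the functions from the question):

set -o pipefail

parse_commands /da/cmd/file | process_commands
status=$?
if [ "$status" -ne 0 ]; then
    # Without pipefail, $? would only ever reflect process_commands;
    # with it, a failure in parse_commands surfaces here too.
    echo "pipeline failed with status $status" >&2
fi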
Does the second process keep running even after the first one has ended, or is the issue that you have no way of knowing that the first process failed?
If it's the latter, you can look at the PIPESTATUS
variable (which is actually a Bash array). That will give you the exit code of every command in the pipeline, including the first:
parse_commands /da/cmd/file | process_commands

# Copy PIPESTATUS immediately: the very next command overwrites it.
temp=("${PIPESTATUS[@]}")

if [ "${temp[0]}" -ne 0 ]; then
    echo 'parse_commands failed'
elif [ "${temp[1]}" -ne 0 ]; then
    echo 'parse_commands worked, but process_commands failed'
fi
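Note that PIPESTATUS is replaced after every foreground pipeline (even the [ ... ] test above clobbers it), which is why it is copied into temp right away. A quick illustration, using true and false as stand-ins:

true | false
echo "${PIPESTATUS[@]}"    # prints "0 1"
echo "${PIPESTATUS[@]}"    # prints just "0" -- the first echo already overwrote it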
Otherwise, you'll have to use co-processes.
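For instance, here is a rough sketch using bash's coproc builtin (bash 4+); the error handling is illustrative only, and for large outputs you would need to drain the co-process's output (${PROC[0]}) concurrently to avoid filling the pipe:

coproc PROC { process_commands; }

if parse_commands /da/cmd/file >&"${PROC[1]}"; then
    eval "exec ${PROC[1]}>&-"    # close the write end so process_commands sees EOF
    cat <&"${PROC[0]}"           # drain what process_commands printed
    wait "$PROC_PID"
else
    echo 'parse_commands failed' >&2
    kill "$PROC_PID" 2>/dev/null
fi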
Unlike the && operator, the pipe operator (|) spawns both processes simultaneously, so the first one can stream its output to the second without the intermediate data having to be buffered in full. That is what makes it possible to process large amounts of data with little memory or disk usage.
It also means the exit status of the first process is not available until it finishes, and by then the second process is already running.
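You can observe the concurrency directly. In this small demo, the right side of the pipe timestamps each line as it arrives, and "start" shows up roughly two seconds before "done":

{ echo start; sleep 2; echo done; } | while read -r line; do
    printf '%s received: %s\n' "$(date +%T)" "$line"
done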
You could try a workaround using a fifo:

mkfifo /tmp/a
process_commands < /tmp/a & pid=$!
parse_commands /da/cmd/file > /tmp/a ||
    { echo 'parse_commands failed' >&2; kill "$pid"; }
rm /tmp/a