Here is a minimal script demonstrating the issue: http://pastebin.com/5TXDpSh5
#!/bin/bash
set -e
set -o pipefail

function echoTraps() {
    echo "= on start:"
    trap -p
    trap -- 'echo func-EXIT' EXIT
    echo "= after set new:"
    trap -p
    # after the script finishes we can verify that /tmp/tmp.txt was not created
    trap -- 'echo SIG 1>/tmp/tmp.txt' SIGPIPE SIGHUP SIGINT SIGQUIT SIGTERM
}
trap -- 'echo main-EXIT1' EXIT
echo "===== subshell trap"
( echoTraps; )
echo "===== pipe trap"
echoTraps | cat
echo "===== done everything"
Actual output:
===== subshell trap
= on start:
= after set new:
trap -- 'echo func-EXIT' EXIT
func-EXIT
===== pipe trap
= on start:
= after set new:
trap -- 'echo func-EXIT' EXIT
===== done everything
main-EXIT1
Expected output:
===== subshell trap
= on start:
= after set new:
trap -- 'echo func-EXIT' EXIT
func-EXIT
===== pipe trap
= on start:
= after set new:
trap -- 'echo func-EXIT' EXIT
func-EXIT <---- here is the expected difference
===== done everything
main-EXIT1
NB: I tested on OS X 10.9.2 with bash 3.2.51; other versions of bash show the same difference between the actual and expected output shown above.
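For comparison, the same difference can be reproduced with a further stripped-down test (the function name f and the trap message are placeholders for this sketch); the comments reflect the actual output shown above:
#!/bin/bash
f() {
    trap 'echo f-EXIT' EXIT    # EXIT trap set inside the function
}
( f; )        # explicit subshell: "f-EXIT" is printed when the subshell exits
f | cat       # function as a pipeline element: "f-EXIT" is NOT printed (the behavior in question)
echo "done"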
The only way to find out whether this behavior is expected or not is to ask Chet Ramey (the GNU bash maintainer). Please send an email with your report to [email protected]
You can see that the current behavior seems to be correct, given that the subshell case is handled explicitly in: http://git.savannah.gnu.org/cgit/bash.git/tree/execute_cmd.c#n621
/* We want to run the exit trap for forced {} subshells, and we
   want to note this before execute_in_subshell modifies the
   COMMAND struct.  Need to keep in mind that execute_in_subshell
   runs the exit trap for () subshells itself. */
/* This handles { command; } & */
s = user_subshell == 0 && command->type == cm_group && pipe_in == NO_PIPE && pipe_out == NO_PIPE && asynchronous;

/* run exit trap for : | { ...; } and { ...; } | : */
/* run exit trap for : | ( ...; ) and ( ...; ) | : */
s += user_subshell == 0 && command->type == cm_group && (pipe_in != NO_PIPE || pipe_out != NO_PIPE) && asynchronous == 0;

last_command_exit_value = execute_in_subshell (command, asynchronous, pipe_in, pipe_out, fds_to_close);

if (s)
  subshell_exit (last_command_exit_value);
else
  sh_exit (last_command_exit_value);
As you can see, the explicit subshell case is handled specially (and so is command grouping). This behavior has evolved historically, as Adrian found out, due to multiple bug reports.
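In practical terms this suggests a workaround (a sketch only, based on the special cases quoted above and not verified on every bash version, in particular not on the old 3.2.51 from the question): wrap the call in an explicit subshell or group command inside the pipeline, so that one of the special-cased branches applies:
# Based on the code above: () subshells run their own EXIT trap inside
# execute_in_subshell, and a {} group in a pipeline goes through subshell_exit,
# so both forms should print func-EXIT, unlike the bare `echoTraps | cat`.
( echoTraps; ) | cat
{ echoTraps; } | cat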
This is the list of changes for this particular feature (triggering the EXIT trap in subshells); a condensed illustration of what they cover follows the list:
Commit: http://git.savannah.gnu.org/cgit/bash.git/commit/?id=a37d979e7b706ce9babf1306c6b370c327038eb9
+execute_cmd.c
+ - execute_command_internal: make sure to run the EXIT trap for group
+ commands anywhere in pipelines, not just at the end. From a point
+ raised by Andreas Schwab <[email protected]>
Report: https://lists.gnu.org/archive/html/bug-bash/2013-04/msg00126.html (Re: trap EXIT in piped subshell not triggered during wait)
Commit: http://git.savannah.gnu.org/cgit/bash.git/commit/?id=1a81420a36fafc5217e770e042fd39a1353a41f9
+execute_cmd.c
+ - execute_command_internal: make sure any subshell forked to run a
+ group command or user subshell at the end of a pipeline runs any
+ EXIT trap it sets. Fixes debian bash bug 698411
+ http://bugs.debian.org/cgi-big/bugreport.cgi?bug=698411
Report: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=698411 (EXIT trap and pipeline and subshell)
Commit: http://git.savannah.gnu.org/cgit/bash.git/commit/?id=fd58d46e0d058aa983eea532bfd7d4c597adef54
+execute_cmd.c
+ - execute_command_internal: make sure to call subshell_exit for
+ {} group commands executed asynchronously (&). Part of fix for
+ EXIT trap bug reported by Maarten Billemont <[email protected]>
Report: http://lists.gnu.org/archive/html/bug-bash/2012-07/msg00084.html (EXIT traps in interactive shells)
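Taken together, the commits above cover EXIT traps for group commands and user subshells in pipelines, and for asynchronous {} groups. A rough way to check whether a given bash build contains these fixes might look like this (sketch only; the expected output on any particular version is an assumption drawn from the commit messages, not something verified here):
#!/bin/bash
# Each line should print its message on a bash that includes the fixes above.
: | { trap 'echo group-at-end-of-pipeline' EXIT; }     # Andreas Schwab / Debian 698411 cases
: | ( trap 'echo subshell-at-end-of-pipeline' EXIT; )
{ trap 'echo async-group' EXIT; } &                    # Maarten Billemont case: { ...; } &
wait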
There is also a recent bug report in relation to the EXIT trap not executing in some expected contexts: http://lists.gnu.org/archive/html/bug-bash/2016-11/msg00054.html