The following Snakemake script:
rule all:
    input:
        'test.done'

rule pipe:
    output:
        'test.done'
    shell:
        """
        seq 1 10000 | head > test.done
        """
fails with the following error:
snakemake -s test.snake
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1 pipe
2
rule pipe:
output: test.done
jobid: 1
Error in job pipe while creating output file test.done.
RuleException:
CalledProcessError in line 9 of /Users/db291g/Tritume/test.snake:
Command '
seq 1 10000 | head > test.done
' returned non-zero exit status 141.
File "/Users/db291g/Tritume/test.snake", line 9, in __rule_pipe
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/concurrent/futures/thread.py", line 55, in run
Removing output files of failed job pipe since they might be corrupted:
test.done
Will exit after finishing currently running jobs.
Exiting because a job execution failed. Look above for error message
The explanation returned non-zero exit status 141 seems to say that Snakemake has caught the SIGPIPE sent to seq when head closed the pipe (141 = 128 + 13, where 13 is SIGPIPE). I guess strictly speaking Snakemake is doing the right thing in catching the failure, but I wonder whether it would be possible to ignore some types of errors like this one. I have a Snakemake script using the head command and I'm trying to find a workaround for this error.
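The failure can be reproduced outside Snakemake by enabling pipefail manually, as Snakemake does (whether seq actually receives SIGPIPE depends on the pipe buffer size, so a larger range makes it reliable):

set -o pipefail
seq 1 1000000 | head > test.done
echo $?    # prints 141, i.e. 128 + 13 (SIGPIPE), because head exits after ten lines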
Yes, Snakemake sets pipefail by default, because in most cases this is what people implicitly expect. You can always deactivate it for specific commands by prepending set +o pipefail; to the shell command.
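Applied to the failing rule above, that looks like this (a minimal sketch; set +o pipefail; disables pipefail only within this shell invocation, so head's SIGPIPE no longer fails the job):

rule pipe:
    output:
        'test.done'
    shell:
        """
        set +o pipefail; seq 1 10000 | head > test.done
        """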
A somewhat clunky solution is to append || true to the command. This makes the command always exit cleanly, which on its own is not acceptable. To check whether the command actually succeeded, you can query the array variable ${PIPESTATUS[@]} and ensure it contains the expected exit codes:
This script is ok:
seq 1 10000 | head | grep 1 > test.done || true
echo ${PIPESTATUS[@]}
141 0 0
This is not ok:
seq 1 10000 | head | FOOBAR > test.done || true
echo ${PIPESTATUS[@]}
0
Here only a single 0 is printed: FOOBAR fails, so || true actually executes true, and ${PIPESTATUS[@]} then reflects the exit status of true rather than the three codes of the pipeline.
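To make the rule fail on anything other than the expected SIGPIPE, you can capture ${PIPESTATUS[@]} immediately and test it yourself. A minimal sketch in plain bash (not from the original answer; it assumes the only tolerated non-zero code is 141, i.e. SIGPIPE):

seq 1 10000 | head | grep 1 > test.done || true
status=( "${PIPESTATUS[@]}" )   # capture right away: any later command overwrites PIPESTATUS
# The pipeline has three stages, so anything other than three codes means
# true ran, i.e. the last stage itself failed.
[ "${#status[@]}" -eq 3 ] || exit 1
for code in "${status[@]}"; do
    # tolerate success (0) and SIGPIPE (141); fail on anything else
    [ "$code" -eq 0 ] || [ "$code" -eq 141 ] || exit 1
done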