I'm running a series of bash scripts and I'd like to capture the output of each command in an output file.
I thought this was possible by combining my command with | tee -a outputfile.txt
but the output is captured only for script 1.
Scripts 2 to 4 are not appending their output to the file.
Looking around, I see that output can also be redirected this way - 1>&1
- but I am a bit confused, as sometimes I see the number 1 replaced by 2. I guess this has to do with the type of message. However, as I don't know what this kind of output redirection is called, I'm stuck finding information about it.
Any help? Thanks, Andrea
There are multiple ways to achieve it:
exec >outputfile.txt
command1
command2
command3
command4
This changes the standard output of the entire script to the log file.
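As a minimal sketch (the file name and the choice of fd 4 are illustrative), a script can even save its original standard output before the exec and restore it afterwards:

```shell
#!/bin/bash
# Save the original stdout on fd 4, then redirect stdout to the file.
exec 4>&1
exec > outputfile.txt
echo "this line goes to the file"
echo "so does this one"
# Restore stdout and close the saved descriptor.
exec 1>&4 4>&-
echo "back on the original stdout"
```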
My generally preferred way to do it is:
{
command1
command2
command3
command4
} > outputfile.txt
This does I/O redirection for all the commands within the scope of the braces. Be careful: you have to treat both {
and }
as if they were commands; they cannot appear just anywhere. This does not create a sub-shell, which is the main reason I favour it.
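A quick way to see that no sub-shell is involved (the variable name and file name are illustrative): an assignment made inside the braces is still visible after them:

```shell
#!/bin/bash
# Redirect a group of commands without forking a sub-shell.
{
    result="computed inside the group"
    echo "logging: $result"
} > outputfile.txt

# The assignment survives because the braces do not create a sub-shell.
echo "after the group: $result"
```

Replace the braces with parentheses and the final echo prints an empty value, because the assignment then happens in a sub-shell.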
You can replace the braces with parentheses:
(
command1
command2
command3
command4
) > outputfile.txt
You can be more cavalier about the placement of the parentheses than the braces, so:
(command1
command2
command3
command4) > outputfile.txt
would also work (but try that with the braces and the shell will fail to find a command named {command1
- unless you happen to have an executable file of that name around, and ...). This creates a sub-shell. Any variable assignments made within the parentheses will not be seen/accessible outside them. This can be a show-stopper sometimes (but not always). The incremental cost of a sub-shell is pretty much negligible; it exists, but you're likely to be hard-pressed to measure it.
There's also the long-hand way:
command1 >>outputfile.txt
command2 >>outputfile.txt
command3 >>outputfile.txt
command4 >>outputfile.txt
If you wish to demonstrate that you're a neophyte shell programmer, by all means use this technique. If you wish to be considered as a more advanced shell programmer, do not.
Note that all the commands above redirect just standard output to the named file, leaving standard error going to the original destination (usually the terminal). If you want to get standard error to go to the same file, simply add 2>&1
(meaning, send file descriptor 2, standard error, to the same place as file descriptor 1, standard output) after the redirection for standard output.
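For instance (the file name is illustrative), this sends both streams to one file; note that the 2>&1 comes after the redirection of standard output:

```shell
#!/bin/bash
# stdout goes to the file first; then stderr is pointed at the same place.
{
    echo "normal message"
    echo "error message" >&2
} > combined.txt 2>&1
```

Written the other way round, as 2>&1 > combined.txt, standard error would be joined to wherever standard output was pointing at that moment (usually the terminal) and only standard output would reach the file.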
Addressing questions raised in the comments.
By using the
2>&1 >> $_logfile
(as per my answer below) I got what I need, but now I do have also my echo ... in the output file. Is there a way to print them on screen as well as the file at the same time?
Ah, so you don't want everything to go to the file...that complicates things a bit. There are ways, of course; not necessarily straight-forward. I'd probably use exec 3>&1;
to set file descriptor 3 going to the same place as 1 (standard output — or use 3>&2
if I wanted the echoes to standard error) before the main redirection. Then I'd create a function echoecho() { echo "$*"; echo "$*" >&3; }
and I'd use echoecho Whatever
to do the echoing. When you're done with file descriptor 3 (if you're not about to exit, when the system will close it) you can close it with exec 3>&-
.
When you refer to
exec
that's supposed to be the command I'm executing in the individual script file I created and that I will execute in between the cycle right? (just have a look at my answer below to see how I have evolved the script). For the rest of the suggestion I completely lost you.
No; I'm referring to the Bash (shell) built-in command exec
. It can be used to do I/O redirection permanently (for the rest of the script), or to replace the current script with a new program, as in exec ls -l
— which is probably a bad example.
I guess I'm now even more confused than when I started :) Would it be possible to create a small example … so I can understand it better?
The disadvantage of comments is that they're hard to format and limited in size. Those limitations are also benefits, but there comes a time when the answer has to be extended. Said time has arrived.
For the sake of discussion, I'm going to restrict myself to 2 commands instead of 4 as in the question (but this doesn't lose any generality). Those commands will be cmd1
and cmd2
, and in fact those are two different names for the same script:
#!/bin/bash
for i in {01..10}
do
echo "$0: stdout $i - $*"
echo "$0: stderr $i - error message" >&2
done
As you can see, this script writes messages to both standard output and standard error. For example:
$ ./cmd1 trying to work
./cmd1: stdout 1 - trying to work
./cmd1: stderr 1 - error message
./cmd1: stdout 2 - trying to work
./cmd1: stderr 2 - error message
…
./cmd1: stdout 9 - trying to work
./cmd1: stderr 9 - error message
./cmd1: stdout 10 - trying to work
./cmd1: stderr 10 - error message
$
Now, from the answer posted by Andrea Moro we find:
#!/bin/bash
_logfile="output.txt"
# Delete output file if exist
if [ -f $_logfile ];
then
rm $_logfile
fi
for file in ./shell/*
do
$file 2>&1 >> $_logfile
done
I don't like the variable name starting with _
; there's no need for it that I can see. This redirects errors to where standard output is (currently) going, and then redirects standard output to the log file. So, if the sub-directory shell
contains cmd1
and cmd2
, the output is:
$ bash ex1.sh
./shell/cmd1: stderr 1 - error message
./shell/cmd1: stderr 2 - error message
…
./shell/cmd1: stderr 9 - error message
./shell/cmd1: stderr 10 - error message
./shell/cmd2: stderr 1 - error message
./shell/cmd2: stderr 2 - error message
…
./shell/cmd2: stderr 9 - error message
./shell/cmd2: stderr 10 - error message
$
To get both standard output and standard error to the file, you have to use one of:
2>>$_logfile >>$_logfile
>>$_logfile 2>&1
I/O redirections are generally processed from left to right, except that piping controls where standard output (and standard error if you use |&
) goes to before the I/O redirections are handled.
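As a sketch of that pipeline rule: in Bash, |& is shorthand for 2>&1 |, so both streams reach the command on the right of the pipe (the file name is illustrative):

```shell
#!/bin/bash
# Both stdout and stderr travel down the pipe into sort.
{ echo "b out"; echo "a err" >&2; } |& sort > piped.txt
```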
Adapting this script to generate information to standard output as well as logging to the log file, there are a variety of ways of working. I'm assuming the shebang line is #!/bin/bash
from here on.
logfile="output.txt"
rm -f $logfile
for file in ./cmd1 ./cmd2
do
$file trying to work >> $logfile 2>&1
done
This removes the log file if it exists (but less verbosely than before). Everything on standard output and standard error goes to the log file. We could also write:
logfile="output.txt"
{
for file in ./cmd1 ./cmd2
do
$file trying to work
done
} >$logfile 2>&1
Or the code could use parentheses in place of the braces with only minor differences in functionality that wouldn't affect this script materially. Or, indeed, in this case, we could use:
logfile="output.txt"
for file in ./cmd1 ./cmd2
do
$file trying to work
done >$logfile 2>&1
And it is not clear that the variable is necessary, but we'll leave it in place. Note that both of these use 'clobbering' I/O redirection: they create the log file just once, which in turn means there was no need to remove it first (though there might be reasons to do so, related to other users running the command beforehand and leaving a non-writable file behind; but then you should probably be using a date-stamped log file anyway, so that isn't a problem after all).
Clearly, if we want to echo something to the original standard output as well as to the log file, we have to do something different as both standard error and standard output are going to the log file.
One option is:
logfile="output.txt"
rm -f $logfile
for file in ./cmd1 ./cmd2
do
echo $file $(date +'%Y-%m-%d %H:%M:%S')
$file trying to work >> $logfile 2>&1
done
Another option is:
exec 3>&1
logfile="output.txt"
for file in ./cmd1 ./cmd2
do
echo $file $(date +'%Y-%m-%d %H:%M:%S') >&3
$file trying to work
done >$logfile 2>&1
exec 3>&-
Now file descriptor 3 goes to the same place as the original standard output. Inside the loop, both standard output and standard error go to the log file, but the echo … >&3
sends the standard output of echo
to file descriptor 3.
If you want the same echoed output to go to both the redirected standard output and the original standard output, then you can use:
exec 3>&1
echoecho()
{
echo "$*"
echo "$*" >&3
}
logfile="output.txt"
for file in ./cmd1 ./cmd2
do
echoecho $file $(date +'%Y-%m-%d %H:%M:%S')
$file trying to work
done >$logfile 2>&1
exec 3>&-
The output from this was:
$ bash ex3.sh
./cmd1 2014-01-07 14:57:13
./cmd2 2014-01-07 14:57:13
$ cat output.txt
./cmd1 2014-01-07 14:57:13
./cmd1: stdout 1 - trying to work
./cmd1: stderr 1 - error message
./cmd1: stdout 2 - trying to work
./cmd1: stderr 2 - error message
…
./cmd1: stdout 9 - trying to work
./cmd1: stderr 9 - error message
./cmd1: stdout 10 - trying to work
./cmd1: stderr 10 - error message
./cmd2 2014-01-07 14:57:13
./cmd2: stdout 1 - trying to work
./cmd2: stderr 1 - error message
./cmd2: stdout 2 - trying to work
./cmd2: stderr 2 - error message
…
./cmd2: stdout 9 - trying to work
./cmd2: stderr 9 - error message
./cmd2: stdout 10 - trying to work
./cmd2: stderr 10 - error message
$
This is roughly what I was saying in my comments, written out in full.
Let's say you have scripts in files sc1.sh, sc2.sh, and sc3.sh and you want to write the outputs to a file named log.txt. Then you can do the following:
./sc1.sh >> log.txt
./sc2.sh >> log.txt
./sc3.sh >> log.txt
or, if you prefer to automate the process by writing another bash script to loop through the other bash scripts:
#!/bin/bash
n=1
num_scripts=3
while test $n -le $num_scripts
do
./sc$n.sh >> log.txt
n=$((n+1))
done
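A glob-based variant avoids hard-coding the number of scripts. This self-contained sketch first creates two tiny demo scripts (the names and contents are illustrative) and then runs everything matching the pattern:

```shell
#!/bin/bash
# Create two small demo scripts to stand in for sc1.sh and sc2.sh.
printf '#!/bin/bash\necho "hello from sc1"\n' > sc1.sh
printf '#!/bin/bash\necho "hello from sc2"\n' > sc2.sh
chmod +x sc1.sh sc2.sh

# Run every matching script, appending stdout and stderr to one log.
for script in ./sc[0-9].sh
do
    "$script" >> log.txt 2>&1
done
```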