
Bash scripting, checking for errors, logging

Here's one for the bash-fu wizards. No, actually, I'm just kidding; you'll all probably know this except for me.

I'm trying to create a backup shell script. The idea is fairly simple: find files in a certain folder that are older than 7 days, tar/gzip them to another directory, and remove them. The problem is, I'm not sure I'll have enough permissions to create the tar/gzip file in the target directory. Is there any (proper) way to check whether the archive has been created successfully, and only then delete the files? Otherwise, skip that part and don't destroy the customers' data. I hear they are not very fond of that.

Here's what I have so far:

01: #!/bin/bash
02: 
03: ROOTDIR="/data/www"
04: 
05: TAR="${ROOTDIR}/log/svg_out_xml/export_out_ack_$(date +%Y-%m-%d).tar"
06: cd "${ROOTDIR}/exchange/export/out_ack/"
07: find . -mtime +7 -type f -print0 | xargs -0 tar -cf "${TAR}"
08: gzip "${TAR}"
09: find . -mtime +7 -type f -print0 | xargs -0 rm -f

Basically, I'd need to check if everything went fine on lines 7 and 8, and if so execute 9.

Additionally, I'd like to make a log file of these operations so I know everything went fine (this is a nightly cron job).

Asked Mar 02 '10 by dr Hannibal Lecter

3 Answers

For logging, you can arrange for all output written on standard output and/or standard error to go to a file. That way, you don't need to redirect the output of each command:

# Save standard output and standard error
exec 3>&1 4>&2
# Redirect standard output to a log file
exec 1>/tmp/stdout.log
# Redirect standard error to a log file
exec 2>/tmp/stderr.log

# Now the output of all commands goes to the log files
echo "This goes to /tmp/stdout.log"
echo "This goes to /tmp/stderr.log" 1>&2
...

# Print a message to the original standard output (e.g. terminal)
echo "This goes to the original stdout" 1>&3

# Restore original stdout/stderr
exec 1>&3 2>&4
# Close the unused descriptors
exec 3>&- 4>&-

# Now the output of all commands goes to the original standard output & error
...
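Applied to the backup script from the question, a minimal sketch might look like this (the log file path is just an example, adjust it to wherever you want the log to live):

#!/bin/bash

LOGFILE="/data/www/log/backup.log"   # example path, not from the original script

# From here on, everything the script writes to stdout/stderr lands in the log
exec 1>>"${LOGFILE}" 2>&1

echo "Backup run started: $(date)"
# ... the find/tar/gzip/rm commands go here ...
echo "Backup run finished: $(date)"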

To execute a command only if a previous one succeeds, you can chain them with conditionals:

# Execute command2 only if command1 succeeds, and command3 only if both succeed:
command1 && command2 && command3

# Execute command2 only if command1 fails
command1 || command2

so you can do things like

{ find . -mtime +7 -type f -print0 | xargs -0 tar -cf "${TAR}" &&
  gzip "${TAR}" &&
  find . -mtime +7 -type f -print0 | xargs -0 rm -f; } ||
    { echo "Something failed" 1>&2; exit 1; }

or provide details in the log output:

find . -mtime +7 -type f -print0 | xargs -0 tar -cf "${TAR}" ||
  { echo "find failed!!" 1>&2; exit 1; }
gzip "${TAR}" ||
  { echo "gzip failed!!" 1>&2; exit 1; }
find . -mtime +7 -type f -print0 | xargs -0 rm -f ||
  { echo "cleanup failed!!" 1>&2; exit 1; }
Answered Nov 14 '22 by Idelic

For logging, you can wrap sections of your script in curly braces and redirect their standard output to a log file:

{
    script_command_1
    script_command_2
    script_command_3
} >> /path/to/log_file
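
Applied to the script from the question, that could look something like the sketch below; the 2>&1 also sends error messages to the same file, and the log path is just an example:

{
    find . -mtime +7 -type f -print0 | xargs -0 tar -cf "${TAR}"
    gzip "${TAR}"
    find . -mtime +7 -type f -print0 | xargs -0 rm -f
} >> /data/www/log/backup.log 2>&1    # example log path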
Answered Nov 14 '22 by Dennis Williamson


The easy way out, albeit with no explicit error message: add -e to the shebang, i.e. #!/bin/sh -e, which will cause the shell to exit if a command fails.
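
A minimal sketch of the same idea, using set -e inside the script rather than the shebang (same effect: the script stops at the first command that fails, so the rm never runs if tar or gzip failed):

#!/bin/bash
set -e   # exit immediately if any command returns a non-zero status

ROOTDIR="/data/www"
TAR="${ROOTDIR}/log/svg_out_xml/export_out_ack_$(date +%Y-%m-%d).tar"

cd "${ROOTDIR}/exchange/export/out_ack/"
find . -mtime +7 -type f -print0 | xargs -0 tar -cf "${TAR}"
gzip "${TAR}"
# only reached if everything above succeeded
find . -mtime +7 -type f -print0 | xargs -0 rm -f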

Cron should mail you any error output, though, I think, so you'd still get some notification.

If you want a full-blown backup scheme, though, I'd suggest using something that has already been made. There are a bunch out there, and most work very well.

Answered Nov 14 '22 by falstro