The following works:
/usr/bin/mysqldump -u[username] -p[password] --all-databases --single-transaction > /home/[domain]/public_html/backups/full_backup_`date -I`.sql
It results in "full_backup_2012-11-04.sql"
The cron job runs every minute, but the filename only changes once a day. I want to include the hour and minute in the filename as well (in the end the cron job will probably run every hour or so).
So date -I works, but other typical options for date don't seem to. Is there any documentation that says -I is valid? If so, that documentation might also list other options that work.
Use date +%Y-%m-%d_%H-%M-%S
instead (or any other format).
EDIT:
To avoid long subject lines in the emails from cron, create a file /home/<your user>/mycron.sh
(the file name and location are just an example):
#!/bin/sh
/usr/bin/mysqldump -u[username] -p[password] --all-databases --single-transaction > /home/[domain]/public_html/backups/full_backup_`date +%Y-%m-%d_%H-%M-%S`.sql
Make sure you chmod +x /home/<your user>/mycron.sh.
This date format will output:
full_backup_2012-11-05_08-49-19.sql
Then use the name of the script in your cron, e.g.:
[cron parameters] /home/<your user>/mycron.sh
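For example, a crontab entry that runs the script at the top of every hour could look like this (only a sketch; the schedule and the <your user> path are placeholders to adapt to your setup):
# m h dom mon dow   command
0 * * * * /home/<your user>/mycron.sh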
date -Iminutes
could be the way to go; it uses the ISO 8601 format.
The Wikipedia page on ISO 8601 has some info.
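As a rough sketch, this drops straight into the original command; note that the exact output (in particular the timezone suffix) depends on your version of date:
date -Iminutes
# prints something like 2012-11-05T08:49-0500
/usr/bin/mysqldump -u[username] -p[password] --all-databases --single-transaction > /home/[domain]/public_html/backups/full_backup_`date -Iminutes`.sql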
While my original script works, something like this may be far simpler:
mysqldump [options] | gzip -c > /backups/data-$(date +%a-%H00).sql.gz
This specifically will keep 7 days' worth of backups. It works by using the day of the week plus the hour, so a file would look like data-Mon-00.sql.gz (Monday at midnight). When the week rolls around, the previous backups get overwritten.
If you set your cron to run every 6 hours you will get up to 28 backups: (24/6) * 7 = 28.
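A sketch of the matching crontab entry (the schedule and paths are just examples); note that % is special in a crontab line and has to be escaped as \%, or the whole command moved into a small script:
# every 6 hours, on the hour
0 */6 * * * mysqldump [options] | gzip -c > /backups/data-$(date +\%a-\%H00).sql.gz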
To get a consistent backup with mysqldump you have to lock the tables, so this should only be done on a replica or a low-volume server. mysqldump also produces a full snapshot every time it runs, so each backup takes up the space of a complete dump, which can quickly become unmanageable with a large database. You also may not want to trust a replica as your backup source, since the replica has to stay in sync as well.
A better option is xtrabackup by Percona. It's open source, so it's free. It requires InnoDB tables and is able to take a hot backup of your primary MySQL server without downtime or locks (you shouldn't be using MyISAM anyway). It uses a modified InnoDB engine together with InnoDB's crash-recovery feature to ensure the backups are consistent. It can even take incremental backups against a base snapshot, so you can make hundreds of backups that take up little more than the size of a single snapshot. It works with MySQL, MariaDB, and Percona Server (forks of MySQL), and with both InnoDB and XtraDB (the improved InnoDB used in MariaDB and Percona Server).
Personally I would stick with xtrabackup and not even bother with mysqldump. You have to do just as much work command-line wise, and you have the added benefit of small incremental backups. There are tools out there that automate the use of xtrabackup, even in a Galera cluster. Facebook uses it.
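A rough sketch of a base-plus-incremental run with xtrabackup (the exact flags depend on your Percona XtraBackup version, and the user and paths here are only placeholders; check the Percona documentation for your release):
# full (base) backup
xtrabackup --backup --user=[username] --password=[password] --target-dir=/backups/base
# later: an incremental backup containing only the changes since the base
xtrabackup --backup --user=[username] --password=[password] --target-dir=/backups/inc1 --incremental-basedir=/backups/base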
Original script:
The script below takes "backup.sql.gz", for example, and turns it into something like backup-13Nov2012-01_30.sql.gz, then rotates files using hard links.
#!/bin/bash
###########################################################################
# snapshot
#
# Rotates snapshots of backups using hard links
#
# Keeps track of:
# - 48 hours of snapshots (48*60/interval)
# - 60 days worth of midnight snapshots
# - 24 months of snapshots from the 1st
# Usage:
# snapshot /path/to/backup.sql.gz
# mysqldump ... | gzip -c | snapshot /path/to/backup.sql.gz
#
###########################################################################
if ! hash date 2> /dev/null; then
    echo "-snapshot: date command not found" 1>&2
    exit 1
fi
if ! hash ln 2> /dev/null; then
    echo "-snapshot: ln: command not found" 1>&2
    exit 1
fi
# Date Info
month=$(date +"%b")
day=$(date +"%d")
year=$(date +"%Y")
time=$(date +"%H_%M")
date=$(date +"%d%b%Y")
dateFirst=$(date --date="$month 01" +"%d%b%Y" 2> /dev/null)
# Test to see if we're using GNU date or BSD date
if [ "$dateFirst" == "" ]; then
dateFirst=$(date -v1d +"%d%b%Y")
back2date=$(date -v-2d +"%d%b%Y")
back2monthY=$(date -v-2m +"%b%Y")
back2year=$(date -v-2y +"%Y")
else
back2date=$(date --date="-2 day" +"%d%b%Y")
back2monthY=$(date --date="-2 month" +"%b%Y")
back2year=$(date --date="-2 year" +"%Y")
fi
if [ "$dateFirst" == "" ]; then
echo "-snapshot: Unknown version of date command." 1>&2
exit 1
fi
# Directories
filepath=$1
backup=$2
if [ "$filepath" == "" ]; then
echo "-snapshot: Expecting filename as first argument" 1>&2
exit 1
fi
if [ "$backup" == "" ]; then
backup=/backup
fi
if [ ! -d "$backup" ]; then
echo "-snapshot: Directory "$backup" doesn't exist" 1>&2
exit 1
fi
snapshots=$backup/snapshots
daily=$backup/daily
monthly=$backup/monthly
basename=${filepath##*/}    # strip the directory part of the path
ext=${basename#*.}          # everything after the first dot, e.g. "sql.gz"
basename=${basename%%.*}    # everything before the first dot, e.g. "backup"
filename=$basename-$date-$time.$ext
##############################
# Make new snapshot
##############################
if [ ! -d "$snapshots/$date" ]; then
mkdir -p "$snapshots/$date"
fi
if [ -t 0 ]; then
if [ ! -f "$filepath" ]; then
echo "-snapshot: '$filepath' doesn't exist" 1>&2
exit 1
fi
ln "$filepath" "$snapshots/$date/$filename"
else
cat > "$snapshots/$date/$filename"
fi
##############################
# Daily/monthly snapshots
##############################
if [ "$time" == "00_00" ]; then
if [ ! -d "$daily/$month$year" ]; then
mkdir -p "$daily/$month$year"
fi
ln "$snapshots/$date/$filename" "$daily/$month$year/$filename"
if [ "$day" == "01" ]; then
if [ ! -d "$monthly/$year" ]; then
mkdir -p "$monthly/$year"
fi
ln "$snapshots/$date/$filename" "$monthly/$year/$filename"
fi
fi
##############################
# Clean up old snapshots
##############################
if [ -d "$snapshots/$back2date" ]; then
rm -fr "$snapshots/$back2date"
fi
if [ -d "$daily/$back2monthY" ]; then
rm -fr "$daily/$back2monthY"
fi
if [ -d "$monthly/$back2year" ]; then
rm -fr "$monthly/$back2year"
fi
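A sketch of how this script might be driven from cron, assuming it is saved as /usr/local/bin/snapshot and made executable (the path, schedule, and the /backup root passed as the second argument are placeholders):
# every 30 minutes, pipe a compressed dump into the snapshot script
*/30 * * * * /usr/bin/mysqldump -u[username] -p[password] --all-databases --single-transaction | gzip -c | /usr/local/bin/snapshot backup.sql.gz /backup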