How to Automate Database Backups with Cron Jobs
A backup strategy that depends on someone remembering to run a command is not a strategy. This guide shows you how to build a fully automated backup pipeline for MySQL and PostgreSQL using cron, with rotation, offsite storage, monitoring, and restore verification.
Why Manual Backups Fail
Every team that does manual backups eventually hits the same wall: someone forgets. Maybe it is Friday at 5 PM and the person responsible leaves early. Maybe the documented process has a wrong path because the server was migrated six months ago. The database gets corrupted on Saturday, and the most recent backup is from two weeks ago.
Automated backups solve the human reliability problem. Once configured, they run every time, at the exact same time, with the exact same parameters. More importantly, they can be monitored, so you know immediately when a backup fails, rather than discovering it during a crisis.
Step 1: MySQL Backup Script
Start with a standalone bash script that handles the dump, compression, and basic error handling. Store database credentials in a protected file rather than on the command line:
#!/bin/bash
# /opt/scripts/backup-mysql.sh
set -euo pipefail
# Configuration
DB_NAME="production_db"
BACKUP_DIR="/var/backups/mysql"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"
LOG_FILE="/var/log/backup-mysql.log"
# Ensure backup directory exists
mkdir -p "$BACKUP_DIR"
echo "[$(date)] Starting backup of $DB_NAME" >> "$LOG_FILE"
# Dump and compress in one step
# Uses ~/.my.cnf for credentials (see below)
if mysqldump \
--single-transaction \
--routines \
--triggers \
--events \
--quick \
--lock-tables=false \
"$DB_NAME" | gzip > "$BACKUP_FILE"; then
SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
echo "[$(date)] Backup completed: $BACKUP_FILE ($SIZE)" >> "$LOG_FILE"
else
echo "[$(date)] ERROR: Backup failed for $DB_NAME" >> "$LOG_FILE"
exit 1
fi
# Remove backups older than retention period
find "$BACKUP_DIR" -name "${DB_NAME}_*.sql.gz" -mtime +$RETENTION_DAYS -delete
REMAINING=$(ls -1 "$BACKUP_DIR"/${DB_NAME}_*.sql.gz 2>/dev/null | wc -l)
echo "[$(date)] Rotation complete. $REMAINING backups on disk." >> "$LOG_FILE"
Store credentials in ~/.my.cnf with restricted permissions:
# ~/.my.cnf
[mysqldump]
user=backup_user
password=your_secure_password
# Lock down permissions
chmod 600 ~/.my.cnf
Important flags explained
--single-transaction takes a consistent snapshot of InnoDB tables without locking them. --quick streams rows directly to stdout instead of buffering them in memory, which is critical for large databases. --routines, --triggers, and --events include stored procedures, triggers, and scheduled events, which are easy to forget.
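Before trusting a dump, it is worth a quick integrity check. A minimal sketch (the verify_dump helper is illustrative, not part of the script above); it relies on mysqldump writing a "-- Dump completed" marker as its final line:

```shell
# verify_dump: sanity-check a gzipped mysqldump file.
# Returns 0 if the archive is intact and the dump ran to completion.
verify_dump() {
    local file="$1"
    # gzip -t detects a truncated or corrupted archive
    gzip -t "$file" 2>/dev/null || return 1
    # mysqldump appends "-- Dump completed" on success
    zcat "$file" | tail -n 5 | grep -q "Dump completed" || return 1
}

# Example: verify_dump "$BACKUP_FILE" || exit 1
```

Run it right after the dump step, before rotation, so a truncated file is caught the same night it is produced.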
Step 2: PostgreSQL Backup Script
PostgreSQL uses pg_dump with a similar pattern. For credentials, use a .pgpass file:
#!/bin/bash
# /opt/scripts/backup-postgres.sh
set -euo pipefail
DB_NAME="production_db"
DB_HOST="localhost"
DB_USER="backup_user"
BACKUP_DIR="/var/backups/postgres"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${DATE}.dump"
LOG_FILE="/var/log/backup-postgres.log"
mkdir -p "$BACKUP_DIR"
echo "[$(date)] Starting backup of $DB_NAME" >> "$LOG_FILE"
# Use custom format for parallel restore capability
if pg_dump \
-h "$DB_HOST" \
-U "$DB_USER" \
-Fc \
--no-owner \
--no-acl \
"$DB_NAME" > "$BACKUP_FILE"; then
SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
echo "[$(date)] Backup completed: $BACKUP_FILE ($SIZE)" >> "$LOG_FILE"
else
echo "[$(date)] ERROR: Backup failed for $DB_NAME" >> "$LOG_FILE"
exit 1
fi
# Rotate old backups
find "$BACKUP_DIR" -name "${DB_NAME}_*.dump" -mtime +$RETENTION_DAYS -delete
# ~/.pgpass (hostname:port:database:username:password)
localhost:5432:production_db:backup_user:your_secure_password
chmod 600 ~/.pgpass
The -Fc flag produces a custom-format dump. It is compressed by default and supports parallel restore with pg_restore -j 4, which can dramatically speed up recovery on large databases.
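A recovery from a custom-format dump might look like the following sketch (the target database name, host, and dump path are examples; tune -j to the server's CPU count):

```shell
# Create an empty target database, then restore into it with 4 parallel jobs.
# --no-owner matches the dump flags above, so restored objects are owned by
# the connecting user rather than the original owners.
createdb -h localhost -U backup_user restored_db
pg_restore \
    -h localhost \
    -U backup_user \
    -d restored_db \
    -j 4 \
    --no-owner \
    /var/backups/postgres/production_db_20240101_030000.dump
```

Parallel restore (-j) only works with the custom or directory formats, which is another reason to prefer -Fc over plain SQL dumps.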
Step 3: Schedule with Cron
Make the scripts executable and add them to crontab:
chmod +x /opt/scripts/backup-mysql.sh
chmod +x /opt/scripts/backup-postgres.sh
# crontab -e
# Daily backup at 3:00 AM (low-traffic window)
0 3 * * * /opt/scripts/backup-mysql.sh
# Hourly backup for critical databases
0 * * * * /opt/scripts/backup-postgres.sh
# Weekly full backup on Sunday at 2:00 AM
0 2 * * 0 /opt/scripts/backup-mysql.sh --full
Not sure about the syntax? Use the Cron Expression Generator to build and preview your schedule before adding it to your crontab.
Step 4: Backup Rotation Strategy
Keeping every backup forever is wasteful. Keeping only the latest one is dangerous. A grandfather-father-son (GFS) rotation balances storage cost with recovery flexibility:
| Tier | Frequency | Retention | Cron Expression |
|---|---|---|---|
| Hourly | Every hour | 48 hours | 0 * * * * |
| Daily | Once per day | 30 days | 0 3 * * * |
| Weekly | Every Sunday | 12 weeks | 0 2 * * 0 |
| Monthly | 1st of month | 12 months | 0 1 1 * * |
Implement this by tagging backups with their tier and running separate cleanup logic for each:
# In your backup script, determine the tier:
DAY_OF_WEEK=$(date +%u) # 1=Monday, 7=Sunday
DAY_OF_MONTH=$(date +%d)
if [ "$DAY_OF_MONTH" = "01" ]; then
TIER="monthly"
RETENTION=365
elif [ "$DAY_OF_WEEK" = "7" ]; then
TIER="weekly"
RETENTION=84
else
TIER="daily"
RETENTION=30
fi
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_${TIER}_${DATE}.sql.gz"
# Rotate per tier
find "$BACKUP_DIR" -name "${DB_NAME}_${TIER}_*.sql.gz" \
-mtime +$RETENTION -delete
Step 5: Offsite Storage (S3 / GCS)
Local backups protect against data corruption but not against disk failures, server compromise, or datacenter issues. Upload backups to object storage for true disaster recovery:
# Add to the end of your backup script:
S3_BUCKET="s3://company-backups/databases"
# Upload to S3
if aws s3 cp "$BACKUP_FILE" "$S3_BUCKET/$(basename "$BACKUP_FILE")" \
--storage-class STANDARD_IA; then
echo "[$(date)] Uploaded to S3: $S3_BUCKET/$(basename "$BACKUP_FILE")" >> "$LOG_FILE"
else
echo "[$(date)] ERROR: S3 upload failed" >> "$LOG_FILE"
# Alert on upload failure (critical!)
curl -s -X POST "https://hooks.slack.com/services/YOUR/WEBHOOK/URL" \
-H 'Content-type: application/json' \
-d '{"text":"ALERT: Database backup S3 upload failed for '"$DB_NAME"'"}'
exit 1
fi
For Google Cloud Storage, replace aws s3 cp with gsutil cp. The pattern is identical.
Use STANDARD_IA (Infrequent Access) or GLACIER storage classes to reduce costs. Daily backups of a 10 GB database cost roughly $1-2/month on S3 Standard-IA, versus $7-8/month on Standard.
Configure S3 lifecycle policies to automatically transition older backups to cheaper storage tiers and delete them after your retention period:
{
"Rules": [
{
"ID": "Transition to Glacier after 30 days",
"Filter": { "Prefix": "databases/" },
"Status": "Enabled",
"Transitions": [
{ "Days": 30, "StorageClass": "GLACIER" }
],
"Expiration": { "Days": 365 }
}
]
}
Step 6: Monitoring Backup Success
A backup that fails silently is worse than no backup at all, because it gives you false confidence. There are several ways to monitor backup health:
Dead Man's Switch
A dead man's switch alerts you when a backup does not run. Add a ping at the end of your backup script that only fires on success:
# At the very end of a successful backup:
curl -s "https://cronjobpro.com/api/heartbeat/your-job-id" > /dev/null
If the heartbeat is not received within the expected window, CronJobPro sends you an alert via email, Slack, or webhook.
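One way to wire this in without scattering curl calls through the script is an EXIT trap that only pings on a clean exit. A sketch, reusing the same placeholder heartbeat URL:

```shell
#!/bin/bash
set -euo pipefail

# Placeholder URL; substitute your job's real heartbeat endpoint.
HEARTBEAT_URL="https://cronjobpro.com/api/heartbeat/your-job-id"

on_exit() {
    # $? is the script's exit status; ping only when it is 0
    if [ "$?" -eq 0 ]; then
        curl -fsS "$HEARTBEAT_URL" > /dev/null || true
    fi
}
trap on_exit EXIT

# ... dump, compress, rotate, upload ...
# Any failure aborts via set -e, so the heartbeat is never sent.
```

Because the trap runs on every exit path, you cannot forget to ping after adding a new step, and any failure anywhere in the script suppresses the heartbeat automatically.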
HTTP-Triggered Backups
Instead of relying on the server's own crontab, expose your backup as an HTTP endpoint and let CronJobPro trigger it. This way, the scheduler monitors the response code, response time, and body content. If the endpoint returns a 500 or times out, the job retries automatically and sends an alert:
# /var/www/app/api/backup.php (simplified)
<?php
header('Content-Type: application/json');
// Verify the request comes from CronJobPro
$secret = $_SERVER['HTTP_X_API_KEY'] ?? '';
if ($secret !== getenv('BACKUP_API_KEY')) {
http_response_code(401);
echo json_encode(['error' => 'Unauthorized']);
exit;
}
$output = [];
$returnCode = 0;
exec('/opt/scripts/backup-mysql.sh 2>&1', $output, $returnCode);
if ($returnCode === 0) {
echo json_encode([
'status' => 'ok',
'message' => implode("\n", $output)
]);
} else {
http_response_code(500);
echo json_encode([
'status' => 'error',
'message' => implode("\n", $output)
]);
}
Backup Size Monitoring
A backup file that is unusually small usually means it failed partway through or captured an empty database. Add a size check:
# After creating the backup:
MIN_SIZE_KB=1000 # Minimum expected size in KB
ACTUAL_SIZE=$(du -k "$BACKUP_FILE" | cut -f1)
if [ "$ACTUAL_SIZE" -lt "$MIN_SIZE_KB" ]; then
echo "[$(date)] WARNING: Backup file is only ${ACTUAL_SIZE}KB (minimum: ${MIN_SIZE_KB}KB)" >> "$LOG_FILE"
# Send alert...
exit 1
fi
Step 7: Automated Restore Testing
A backup is only as good as your ability to restore it. Schedule a weekly restore test that loads the latest backup into a test database and runs basic integrity checks:
#!/bin/bash
# /opt/scripts/test-restore.sh
set -euo pipefail
BACKUP_DIR="/var/backups/mysql"
TEST_DB="restore_test"
LOG_FILE="/var/log/backup-restore-test.log"
# Find the most recent backup
LATEST=$(ls -t "$BACKUP_DIR"/production_db_daily_*.sql.gz 2>/dev/null | head -1)
if [ -z "$LATEST" ]; then
echo "[$(date)] ERROR: No backup files found" >> "$LOG_FILE"
exit 1
fi
echo "[$(date)] Testing restore of: $LATEST" >> "$LOG_FILE"
# Drop and recreate test database
mysql -e "DROP DATABASE IF EXISTS $TEST_DB; CREATE DATABASE $TEST_DB;"
# Restore
if gunzip -c "$LATEST" | mysql "$TEST_DB"; then
# Run integrity checks
TABLE_COUNT=$(mysql -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='$TEST_DB';")
ROW_SAMPLE=$(mysql -N -e "SELECT COUNT(*) FROM $TEST_DB.users;" 2>/dev/null || echo "0")
echo "[$(date)] Restore OK. Tables: $TABLE_COUNT, Users: $ROW_SAMPLE" >> "$LOG_FILE"
# Cleanup
mysql -e "DROP DATABASE $TEST_DB;"
else
echo "[$(date)] ERROR: Restore FAILED for $LATEST" >> "$LOG_FILE"
exit 1
fi
# Schedule weekly restore test (Sunday at 5:00 AM, after the weekly backup at 2:00 AM)
0 5 * * 0 /opt/scripts/test-restore.sh
Critical reminder
Never run restore tests against your production database. Always restore to a separate test database. And never use production credentials for the test database user -- create a dedicated user with limited permissions.
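Creating that dedicated MySQL user might look like the following sketch (the user name and password are placeholders). Database-level grants on restore_test include CREATE and DROP for that database only, which is all the test script needs:

```shell
# Run once as an administrative user. The grant is scoped to the
# throwaway test database, so this account cannot touch production data.
mysql -u root -p <<'SQL'
CREATE USER 'restore_tester'@'localhost' IDENTIFIED BY 'a_different_password';
GRANT ALL PRIVILEGES ON restore_test.* TO 'restore_tester'@'localhost';
SQL
```

Then point test-restore.sh at this user (via its own ~/.my.cnf) instead of reusing production credentials.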
The Complete Pipeline
Here is everything tied together as a crontab:
# Database Backup Pipeline
# Server timezone: UTC
# All times in UTC

# Hourly backup (retain 48h)
0 * * * * /opt/scripts/backup-mysql.sh hourly 2

# Daily backup at 3:00 AM UTC (retain 30 days)
0 3 * * * /opt/scripts/backup-mysql.sh daily 30

# Weekly full backup Sunday 2:00 AM UTC (retain 12 weeks)
0 2 * * 0 /opt/scripts/backup-mysql.sh weekly 84

# Monthly backup 1st of month at 1:00 AM UTC (retain 12 months)
0 1 1 * * /opt/scripts/backup-mysql.sh monthly 365

# Weekly restore test Sunday 5:00 AM UTC
0 5 * * 0 /opt/scripts/test-restore.sh

# Daily S3 sync at 4:00 AM UTC
0 4 * * * aws s3 sync /var/backups/mysql/ s3://company-backups/mysql/ --storage-class STANDARD_IA
Key Takeaways
- Use --single-transaction for mysqldump and -Fc for pg_dump to get consistent snapshots without locking tables.
- Store credentials in .my.cnf or .pgpass with 600 permissions, never in the crontab or script.
- Implement GFS rotation (hourly, daily, weekly, monthly) to balance storage cost and recovery flexibility.
- Upload backups to S3 or GCS for offsite disaster recovery. Use Infrequent Access or Glacier tiers to keep costs under $2/month.
- Monitor backup success with heartbeats, size checks, and automated restore tests. An unmonitored backup is a ticking time bomb.
Related Articles
Monitor your backup cron jobs with heartbeats and alerts.
Docker Cron Jobs Guide
Run backup scripts inside Docker containers.
Kubernetes CronJob Guide
Schedule backup Jobs in Kubernetes clusters.
Cron Job Not Running? How to Fix It
Troubleshoot backup cron jobs that stop working.
Monitor your backups with CronJobPro
Trigger backup scripts via HTTP, get automatic retries on failure, and receive instant alerts when something goes wrong. Free for up to 5 jobs.