
Automated MySQL/MariaDB backups to Cloudflare R2


You need database backups, and storing them on the same server as your database is useless if the server dies. Cloudflare R2 is a great target because it’s cheap, charges nothing for egress, and is S3-compatible.

Here’s a script that dumps your databases, zips them up, ships them to R2, and deletes the old ones.

Step 0: Check your MySQL root password

First, make sure you actually know your MySQL root password. If you don’t, reset it by logging in through the system root account (most default installs let root in over the unix socket without a password):

sudo mysql

Then set a new password:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewSecurePassword123!';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
exit;
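
Alternatively, if you’d rather not bake the root password into a script, create a dedicated read-only backup user and point the script at that instead. A sketch (the user name and password are placeholders, and the exact privilege list mysqldump needs varies a bit between MySQL and MariaDB versions):

CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'AnotherSecurePassword!';
GRANT SELECT, SHOW VIEW, TRIGGER, LOCK TABLES, EVENT, PROCESS ON *.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;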

Step 1: Install the AWS CLI

Since R2 uses the S3 API, we can just use the standard AWS CLI to upload files.

Install the basics:

Ubuntu/Debian:

sudo apt update
sudo apt install curl unzip default-mysql-client -y

CentOS/RHEL (use dnf in place of yum on newer releases):

sudo yum install curl unzip mysql -y

Then download and install the AWS CLI:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
rm -rf awscliv2.zip aws/
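
Verify the install:

aws --version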

Step 2: Configure credentials

Point the AWS CLI at your Cloudflare account:

aws configure

It will ask for four things:

  • AWS Access Key ID: Your R2 Access Key ID
  • AWS Secret Access Key: Your R2 Secret Access Key
  • Default region name: auto
  • Default output format: json
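
To sanity-check the credentials before going further, try listing your buckets. The endpoint is your Cloudflare account ID prepended to r2.cloudflarestorage.com (the same value the script uses later):

aws s3 ls --endpoint-url "https://your_cloudflare_account_id.r2.cloudflarestorage.com"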

Step 3: Create the backup script

Create the bash script (this opens the nano editor):

sudo nano /usr/local/bin/db-backup.sh

Paste this code. Swap out the variables in the Configuration section with your actual details.

#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# ==========================================
# Configuration
# ==========================================

DB_USER="your_database_user"        # needs enough privileges to dump every database
DB_PASS="your_database_password"
BACKUP_DIR="/tmp/mysql_backups"     # local staging area, cleaned up after upload

R2_ACCOUNT_ID="your_cloudflare_account_id"  # from the Cloudflare dashboard
R2_BUCKET="your-bucket-name"
R2_ENDPOINT="https://${R2_ACCOUNT_ID}.r2.cloudflarestorage.com"
MAX_BACKUPS=5                       # how many archives to keep in the bucket

DATE=$(date +"%Y-%m-%d_%H-%M-%S")
PREFIX="alldbs_archive_"
FILE_NAME="${PREFIX}${DATE}.tar.gz"
FILE_PATH="${BACKUP_DIR}/${FILE_NAME}"
STAGE_DIR="${BACKUP_DIR}/staging_${DATE}"

# ==========================================
# Execution
# ==========================================

mkdir -p "$STAGE_DIR"
echo "Fetching databases..."

DATABASES=$(mysql -u "$DB_USER" -p"$DB_PASS" -N -e "SHOW DATABASES;" | grep -Ev "^(information_schema|performance_schema|sys)$")

# Bail out early if the list is empty (bad credentials or an unreachable server)
if [ -z "$DATABASES" ]; then
    echo "Error: Could not list databases. Check DB_USER/DB_PASS." >&2
    exit 1
fi

for DB in $DATABASES; do
    echo " -> Dumping $DB..."
    # stderr is silenced to hide the password-on-the-command-line warning;
    # failures are still caught via the exit code
    if ! mysqldump -u "$DB_USER" -p"$DB_PASS" --single-transaction --routines --triggers "$DB" 2>/dev/null > "${STAGE_DIR}/${DB}.sql"; then
        echo "Error: Failed to dump $DB." >&2
        rm -rf "$STAGE_DIR"
        exit 1
    fi
done

echo "Archiving..."
if tar -czf "$FILE_PATH" -C "$STAGE_DIR" .; then
    echo "Archive created."
else
    echo "Error: Archiving failed." >&2
    rm -rf "$STAGE_DIR"
    exit 1
fi

echo "Uploading to R2..."
if aws s3 cp "$FILE_PATH" "s3://${R2_BUCKET}/" --endpoint-url "$R2_ENDPOINT"; then
    echo "Upload successful."
    rm -rf "$STAGE_DIR" "$FILE_PATH"
else
    echo "Error: Upload failed." >&2
    rm -rf "$STAGE_DIR"
    exit 2
fi

# ==========================================
# Retention Policy
# ==========================================
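# Keep only the newest MAX_BACKUPS archives. The timestamp embedded in each
# file name means a plain lexical sort lists the oldest backups first.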

BACKUP_LIST=$(aws s3 ls "s3://${R2_BUCKET}/" --endpoint-url "$R2_ENDPOINT" | awk '{print $4}' | grep "^${PREFIX}" | sort)

if [ -z "$BACKUP_LIST" ]; then
    BACKUP_COUNT=0
else
    BACKUP_COUNT=$(echo "$BACKUP_LIST" | wc -l)
fi

if [ "$BACKUP_COUNT" -gt "$MAX_BACKUPS" ]; then
    DELETE_COUNT=$((BACKUP_COUNT - MAX_BACKUPS))
    FILES_TO_DELETE=$(echo "$BACKUP_LIST" | head -n "$DELETE_COUNT")
    
    for FILE in $FILES_TO_DELETE; do
        aws s3 rm "s3://${R2_BUCKET}/${FILE}" --endpoint-url "$R2_ENDPOINT"
    done
fi

Save and exit (Ctrl+O, Enter, Ctrl+X).
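
A backup you can’t restore is no backup at all, so it’s worth rehearsing the reverse trip once. A minimal restore sketch (the archive name, database name, and paths are placeholders):

aws s3 cp "s3://your-bucket-name/alldbs_archive_2024-01-01_00-00-00.tar.gz" . --endpoint-url "https://your_cloudflare_account_id.r2.cloudflarestorage.com"
mkdir -p restore && tar -xzf alldbs_archive_2024-01-01_00-00-00.tar.gz -C restore
mysql -u your_database_user -p -e "CREATE DATABASE IF NOT EXISTS some_db;"
mysql -u your_database_user -p some_db < restore/some_db.sql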

Step 4: Make it executable

Allow the system to run it:

sudo chmod +x /usr/local/bin/db-backup.sh
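
The script contains your database password in plain text, so lock it down while you’re at it:

sudo chown root:root /usr/local/bin/db-backup.sh
sudo chmod 700 /usr/local/bin/db-backup.sh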

Run it manually once before scheduling anything, just to make sure it works:

sudo /usr/local/bin/db-backup.sh

Step 5: Schedule with cron

Tell cron to run the script every hour.

Open the root cron table (the job needs to write to /var/log, and root must have the AWS credentials from Step 2, so re-run aws configure with sudo if needed):

sudo crontab -e

Add this to the bottom:

0 * * * * /usr/local/bin/db-backup.sh >> /var/log/db-backup.log 2>&1
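
If hourly is more than you need, adjust the schedule; for example, once a day at 03:00:

0 3 * * * /usr/local/bin/db-backup.sh >> /var/log/db-backup.log 2>&1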

Save and exit (Ctrl+O, Enter, Ctrl+X for nano, or :wq for vim). Your server will now handle backups automatically.
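
After the first scheduled run, confirm an archive actually landed in the bucket and check the log for errors:

aws s3 ls "s3://your-bucket-name/" --endpoint-url "https://your_cloudflare_account_id.r2.cloudflarestorage.com"
tail -n 50 /var/log/db-backup.log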