How to implement MySQL backups: Full and Incremental backups using Binary Logs (bin-logging)

Trevor
8 min read · Sep 6, 2022

Manual backups are rarely needed when running a MySQL database on a virtual machine, because cloud providers (in my case: Google Cloud) offer a number of backup options, such as auto-scheduled snapshots of the Compute Engine disk. For simply backing up your database, this is enough!

What if you want to handle backups yourself? For instance, you may want regular database backups that you keep on your own hard disk. That is what I wanted, and this is how I did it.

Terms

A full backup contains all the data in the database, so the very first backup must be a full backup.
An incremental backup contains all the changes that have occurred since the last backup. Database changes are recorded in a binary log, which we copy in the correct sequence to recover all the changes.

Importance of backups

  1. Periodic backups let us rewind the clock by reloading a previous copy of the database. If something breaks or fails, this acts as a lifeline for the system.
  2. Data versioning. With different versions of the database available to go back to, critical changes that later prove to break the system can be undone by restoring an older version without the problem.

Let’s dive in 😉

We will cover the following:

  • Setting Up a virtual machine
  • Writing full and incremental backup scripts
  • Configuring backup Cron Job
  • Writing scripts to download the backups to your local machine
  • Restoring the backups

Prerequisites for this tutorial

This tutorial is a hands-on demonstration, so you are expected to have the following:

  • A Google Cloud Platform (GCP) account, or any other cloud provider where you can create an Ubuntu 20.04 LTS virtual machine.
    You can get a free trial account on GCP using this link (free trial)
  • Basic knowledge of Linux terminal commands
  • Basic understanding of MySQL commands

Setting Up a virtual machine (Server)

Log into your Google Cloud Platform (GCP) console and select the project you wish to use.
Using the left navigation bar, select the Compute Engine service, then the VM instances option, which takes you to a page like the screenshot below.

Preview of the Google Cloud console

Provide a name for your VM instance.
Use the Boot disk option to change the operating system of the VM to our preference, Ubuntu 20.04 LTS.
Under Firewall, check Allow HTTP traffic if you plan to serve web traffic from this VM; SSH access through the gcloud CLI is allowed by default on GCP.
Press the Create button and your VM will be up and running in a few minutes.
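
If you prefer the command line, the same VM can be created with the gcloud CLI. A minimal sketch; the instance name and zone below are placeholders, so substitute your own:

$ gcloud compute instances create mysql-backup-vm \
    --zone=us-central1-a \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud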

Google Cloud console showing the created instance

The above settings are enough to set up a basic VM for this tutorial. If you would like a detailed walkthrough of creating a VM, check out this blog on the ATA Learning site:
https://adamtheautomator.com/google-virtual-machine/

Install and Configure MySQL
Now that you have a server up and running, SSH into it so we can install the MySQL database system.
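If you created the VM on GCP, the gcloud CLI opens an SSH session in one command (instance name and zone are placeholders):

$ gcloud compute ssh mysql-backup-vm --zone=us-central1-a

Once connected, update the package index and install MySQL: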

$ sudo apt update
$ sudo apt install mysql-server

When the installation is complete, test if you’re able to log in to the MySQL console by typing:

$ sudo mysql

This will connect to the MySQL server as the administrative database user root. Your terminal output should look like the screenshot below:
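
In case the screenshot does not render: the prompt looks roughly like this (the connection id and server version will differ on your machine):

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is ...
Server version: 8.0.x-0ubuntu0.20.04.x (Ubuntu)

mysql>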

Create a test database

Now that you have MySQL running, create a test database to use in the following sections, using the command below:

mysql> CREATE DATABASE testdb;

Add test data to the database. Let's create a table called posts.

mysql> CREATE TABLE `posts` (
-> `id` INT(11) NOT NULL AUTO_INCREMENT,
-> `author_id` INT(11) NOT NULL,
-> `title` VARCHAR(255) NOT NULL COLLATE 'utf8_unicode_ci',
-> `description` VARCHAR(500) NOT NULL COLLATE 'utf8_unicode_ci',
-> `content` TEXT NOT NULL COLLATE 'utf8_unicode_ci',
-> `date` DATE NOT NULL,
-> PRIMARY KEY (`id`)
-> )
-> COLLATE='utf8_unicode_ci'
-> ENGINE=InnoDB;
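
The table is empty so far, so add a row or two of test data for the backups to capture. A minimal example:

mysql> INSERT INTO posts (author_id, title, description, content, date)
    -> VALUES (1, 'First post', 'A test row', 'Hello, binary logs!', CURDATE());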

Enable binary logging

As discussed earlier, database changes (the increments) are recorded in a binary log, so we need to enable binary logging. It is disabled by default; enabling it is just a matter of uncommenting a few lines.
Use your preferred editor (I will use nano) to open the mysqld.cnf file:

$ sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf

When you open the file, scroll to the bottom and uncomment the key: value pairs as shown below. To point out a few:

  • log_bin: defines the path and base name for the binary log files.
  • binlog_do_db: the database you want to enable logging for. To enable logging for all databases, leave this line commented.
mysqld.cnf file opened with nano
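
For reference, here is roughly what the uncommented block looks like. The paths and sizes below are the Ubuntu defaults; only the binlog_do_db value (our test database) is specific to this tutorial:

server-id        = 1
log_bin          = /var/log/mysql/mysql-bin.log
max_binlog_size  = 100M
binlog_do_db     = testdb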

Restart the MySQL systemd service to enable binary logging.

$ sudo systemctl restart mysql.service 
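
After the restart, you can confirm that binary logging is on and that a first log file exists:

$ sudo mysql -e "SHOW VARIABLES LIKE 'log_bin'; SHOW BINARY LOGS;"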

Writing Full and incremental backup scripts

As a reminder, each incremental copy contains the changes made since the last backup, but the very first backup must be a full backup.

Now that you have the database configured and loaded with test data, first create a full backup of testdb using MySQL's built-in mysqldump command with two options:
--delete-master-logs (deletes the old binary log files) and --flush-logs (starts writing to a new binary log file).

It is also a good idea to compress the output, especially if the database is very large; simply pipe the output to the gzip utility.

Since we hope to run this regularly (at least once a week) using a cron job, create a bash script with the mysqldump command as below. We will save the script as mysql_full_backup.sh.
First, create a directory where all the backups will be stored.

$ mkdir dbbackup

#!/bin/bash

mysqldump -u<user> -p<userpassword> --flush-logs --delete-master-logs testdb | gzip > dbbackup/$(date +%d-%m-%Y_%H-%M-%S)-full.sql.gz

Or, running the script as the root user (no credentials needed, since root authenticates over the local socket):

#!/bin/bash

mysqldump --flush-logs --delete-master-logs testdb | gzip > dbbackup/$(date +%d-%m-%Y_%H-%M-%S)-full.sql.gz

Now make the bash script executable using the command below:

$ sudo chmod +x mysql_full_backup.sh 

Now run the bash script to create the full backup and ensure the zipped file containing the backup has been created.

$ sudo ./mysql_full_backup.sh
zipped file containing the backup that has been created
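
If the screenshot is not visible, a quick ls confirms the dump landed where expected; the filename below is an example, yours will carry its own timestamp:

$ ls dbbackup
07-09-2022_13-15-58-full.sql.gz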

Create incremental backups from accumulated binary logs

Copy the script below and save it as mysql_inc_backup.sh.
Let me point out the key statements used in the script:

  • FLUSH BINARY LOGS: lets us safely copy all the accumulated binary log files. It closes the active binary log file (the one being written to right now) and starts writing to a new one, so the closed files can be copied.
  • PURGE BINARY LOGS: deletes the old binary logs so that the next run does not duplicate data that has already been backed up.

NB: I assume you will run the script as the root user. To run it as another user you have created, remember to add the username and password after each mysql command, e.g. sudo mysql -uadmin -padmin1234 -E …

#!/bin/bash

# path to the directory with binary log files
binlogs_path=/var/log/mysql/

# path to the backup storage directory
backup_folder=dbbackup/increment

# make sure the backup directory exists
mkdir -p "$backup_folder"

# start writing to a new binary log file
sudo mysql -E --execute='FLUSH BINARY LOGS;' mysql

# get the list of binary log files
binlogs=$(sudo mysql -E --execute='SHOW BINARY LOGS;' mysql | grep Log_name | sed -e 's/Log_name://g' -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')

# the binary logs to back up (all but the last one)
binlogs_without_Last=$(echo "${binlogs}" | head -n -1)

# the last, still-active binary log file (which we do not copy)
binlog_Last=$(echo "${binlogs}" | tail -n 1)

# form the full paths to the binary log files
binlogs_fullPath=$(echo "${binlogs_without_Last}" | xargs -I % echo ${binlogs_path}%)

# compress the binary logs into an archive
# ($binlogs_fullPath is unquoted on purpose, so each file becomes its own argument)
zip "$backup_folder/$(date +%d-%m-%Y_%H-%M-%S)-inc.zip" $binlogs_fullPath

# delete the binary log files that have just been saved
echo $binlog_Last | xargs -I % sudo mysql -E --execute='PURGE BINARY LOGS TO "%";' mysql

Now make the bash script executable using the command below:

$ sudo chmod +x mysql_inc_backup.sh

Now run the bash script to create the incremental backup and ensure the zip file containing the backup has been created.

$ sudo ./mysql_inc_backup.sh
Both full and incremental backup zip

Configuring backup Cron Job

Creating periodic backups would otherwise mean logging into the server daily and manually running the scripts above. Doing the same task over and over is quite a chore, and this is where a cron job becomes important. Cron jobs let users schedule and automate tasks on a Unix-like operating system, saving precious time for other essential work.
Run the command below to open root's cron table in a terminal text editor. Use sudo so that the scheduled commands are executed as the root user.

$ sudo crontab -e

Copy and paste the lines below into the editor, then save and exit; adjust the script paths to wherever you saved the scripts. The cron jobs will run the backup scripts weekly for full backups and daily for incremental backups.

# Do full backup weekly
@weekly bash /scripts/mysql_full_backup.sh

# Do incremental backup daily
@daily bash /scripts/mysql_inc_backup.sh
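
To confirm the entries were saved, list root's cron table:

$ sudo crontab -l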

Writing backup download scripts

Now that you have your backups, you may want to download them to your local computer. Use scp, a built-in Linux command, to copy the zipped backups into a backup directory (dbbackup) on your local machine. The remote path is quoted so your local shell does not expand the wildcard, and --recurse picks up the increment subdirectory.

$ mkdir dbbackup
$ gcloud compute scp --recurse "<VM_NAME>:dbbackup/*" dbbackup

Unzip the downloaded files to check their contents.
For the full backup, you should be able to see all the data as plain SQL.
The incremental backup contains binary log files, which you can convert to SQL to view the data. Remember, they hold only the changes made since the previous backup.
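
For example, assuming the archive names produced by the scripts above (your timestamps will differ):

$ gunzip -k 07-09-2022_13-15-58-full.sql.gz
$ unzip 07-09-2022_13-20-01-inc.zip -d inc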

Use the mysqlbinlog utility, which converts the binary log files to SQL statements, as in the command below.

mysqlbinlog --base64-output=DECODE-ROWS -v mysql-bin.000001 > inc_backup.sql;

NB: Running the above command on your local machine may throw errors (for example, when the local mysqlbinlog version does not match the server's). If so, do the conversion on the remote server and then download the converted SQL file.

Restoring the backups

To restore full backups, use the command below.

$ gunzip < 07-09-2022_13-15-58-full.sql.gz | sudo mysql -u root
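
A quick sanity check that the data came back, assuming the test table created earlier:

$ sudo mysql -e 'SELECT COUNT(*) FROM testdb.posts;'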

Restore Incremental backups

Unzip all the archives with binary log files into a separate directory and change into it.

$ unzip \*.zip -d logs 
$ cd logs

Replay the binary log files through mysql, converting them to SQL as in the previous section.
The order in which the binary log files are given is important: list them in ascending order.

$ mysqlbinlog mysql-bin.000015 | sudo mysql -uadmin -padmin1234

If you have many incremental backups, the command below replays them all at once; ls lists the log files in lexicographic order, which matches their ascending numeric order:

mysqlbinlog $(ls) | sudo mysql -uadmin -padmin1234

Conclusion

To summarize, here is what we covered:

  1. Setting up a virtual machine on Google Cloud, then installing and configuring a MySQL database
  2. Writing full and incremental backup scripts
  3. Configuring cron jobs for automatic backups
  4. Writing scripts to download the backups to your local machine
  5. Restoring the backups
