MySQL - Backup & Recovery

Carsten Müller
Backups are important for every application, especially if a lot of data is stored in your database. For a website with few updates, regular backups are less critical: you can restore last week's backup, and if there were only one or two changes since then, you can add them manually afterwards. But if you run a community site with user-generated content and a lot of input, backup & recovery becomes much more important, and also more complex. If the last backup is from last night, you have to consider all the updates that were made in the meantime. Because you don't know what the users have entered, it is impossible to add these changes manually afterwards. That is why you need a backup strategy that also covers all updates made between two full backups.
There are four basic methods for backing up and recovering a MySQL database; all other approaches are built on these four. They fall into two categories: physical and logical backups.
Physical backups
Storing the binary MySQL files
With this method the actual MySQL database files, in which all data is physically stored on the hard disk, are copied to a safe location. If a restore is needed, the files can be copied back to the server.
# backup: stop the server, copy the database directory to a safe location
service mysql stop;
cp -R /var/lib/mysql/database_name target
service mysql start;
# restore: stop the server, copy the saved files back
service mysql stop;
cp -R /path/to/backup/database_name /var/lib/mysql/database_name
service mysql start;
easy to do
multiple files; if one is broken, hopefully just this table is lost, not the whole database
takes a lot of disk space, as all the indexes etc. are copied too
the database has to be switched off for a certain time during the backup
restoring can become a little complex
you need special permissions on the operating system
LVM snapshot
Linux provides the Logical Volume Manager (LVM), a layer for managing the filesystem. LVM provides the ability to create snapshots of any logical volume. So you create a snapshot of the volume holding the database files, which can easily be used for recovery in the future. This is one of the best and easiest solutions: it is very fast, very easy, and the potential for errors is very low. You don't have to take your database or application offline, there will be no long locks on the tables, and you get a stable snapshot of the current status.
# create the snapshot
lvcreate -l100%FREE -s -n mysql-backup /data/databases
# create a mount point and mount the snapshot
mkdir -p /mnt/snapshot
mount /data/databases/mysql-backup /mnt/snapshot
# do the backup and copy it to a separate system
tar -cf /tmp/dbbackup_YYYYMMDD_His.tar /mnt/snapshot
scp /tmp/dbbackup_YYYYMMDD_His.tar ip:/path/to/backups/
# now remove the mount and the snapshot
umount /mnt/snapshot
lvremove /data/databases/mysql-backup
# copy the backup back to your server
scp ip:/path/to/backups/dbbackup_YYYYMMDD_His.tar /tmp/
# stop the database
service mysql stop;
# remove the old database files
rm -R /var/lib/mysql/database_name
# unpack the files into the MySQL data directory
# (--strip-components removes the mnt/snapshot prefix from the archive paths)
cd /var/lib/mysql
tar xvf /tmp/dbbackup_YYYYMMDD_His.tar --strip-components=2
# restart the database
service mysql start;
very fast
no stop of the database, no long locks on tables
LVM is needed
maybe root access on the operating system is needed
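One detail worth adding: the snapshot copies whatever is on disk at that moment, so to be sure all tables are flushed, a brief global read lock while the snapshot is created is a common pattern. The lock only lives as long as the client session, so lock, snapshot, and unlock must happen in one mysql session; the client's system command (available on Unix builds) makes that possible. A sketch - the volume names are placeholders, and MYSQL defaults to echo here so the script just prints the command instead of running it:

```shell
# Sketch: take the LVM snapshot while a global read lock is held.
# MYSQL defaults to "echo mysql" for a dry run; point it at the real
# binary on a production system. Volume group and names are placeholders.
MYSQL="${MYSQL:-echo mysql}"
SNAP_CMD=$($MYSQL -u root -e "FLUSH TABLES WITH READ LOCK; \
  system lvcreate -l100%FREE -s -n mysql-backup /data/databases; \
  UNLOCK TABLES;")
echo "$SNAP_CMD"
```

The lock is only held for the instant the snapshot is created; the actual copying happens afterwards from the snapshot, with the database fully writable again.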
Logical backups
A MySQL dump is a common backup strategy. This is a logical backup, which means the structure and the content of the database are exported into a file in a special format. The dump uses plain MySQL syntax and stores all the information needed to rebuild the database: normally a statement to create the database again, statements to rebuild the tables and their structure, and then statements to import the data into the tables. All of this is stored in one file, which can be copied to a safe location. When a restore is needed, the file can be imported and the database will be rebuilt from the information stored in it.
# create the dump (--databases makes mysqldump write the CREATE DATABASE
# statement, so --add-drop-database takes effect)
mysqldump -u Username -pPassword --add-drop-database --databases database_name > dump_database_name_YYYYMMDD_His.mysql
# drop the old database and import the backup dump
mysql -u Username -pPassword < dump_database_name_YYYYMMDD_His.mysql
Very easy
Can be done by users without special permissions on the operating system, especially root permissions
The indexes are not stored, so this backup does not use as much disk space as a physical file backup
You can look into the backup and search in it; data manipulation is also possible if something has to be changed
Slower than a physical backup
Slower in recovery, because everything has to be imported first and then the indexes have to be rebuilt
One file; if it is broken, the restore is not possible
--lock-all-tables: locks all tables while the dump is created, so the application cannot access them; this avoids data inconsistencies!
--single-transaction: the whole dump is executed as a transaction, so the application can still read and write the database while the dump is made
--master-data: writes the binlog position of the master into the dump, so a replication slave knows where to start replicating
--add-drop-database: adds a DROP DATABASE statement before each CREATE DATABASE statement
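Combined, these options give a dump that neither blocks the application nor loses the replication position. A minimal sketch - the database name and credentials are placeholders, and MYSQLDUMP defaults to echo so the command line can be inspected without a running server:

```shell
# Hypothetical example; DB, user and password handling are placeholders.
# MYSQLDUMP defaults to "echo mysqldump" so this can be dry-run without a
# server - the redirected file then just contains the command line.
MYSQLDUMP="${MYSQLDUMP:-echo mysqldump}"
DB=database_name
DUMPFILE="dump_${DB}_$(date +%Y%m%d_%H%M%S).mysql"
# --single-transaction: no locks for InnoDB tables during the dump
# --master-data=2:      binlog position written into the dump as a comment
# --databases:          emit CREATE DATABASE, so --add-drop-database applies
$MYSQLDUMP --single-transaction --master-data=2 --add-drop-database \
  --databases "$DB" -u Username > "$DUMPFILE"
```

Note that --single-transaction only gives a consistent snapshot for transactional engines like InnoDB; for MyISAM tables --lock-all-tables is the safe choice instead.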
There are special tools to create an innobackup. This is a special case for databases using the InnoDB storage engine. As InnoDB has become the default storage engine and MyISAM will be removed in the future, this is also a very common way to create a database backup. It is nearly the same as a normal MySQL dump, but it also considers the special features of InnoDB, like foreign key constraints. MySQL Enterprise Backup (MEB) can create innobackups, but it costs money.
There is also an open source tool from Percona named xtrabackup. There is a free version, but also an enterprise edition, which costs money. As Percona offers a lot of useful tools around MySQL, it may be a good choice for your MySQL toolkit anyway. Other Percona tools also help to improve your daily life with MySQL.
Master-Slave replication for backups and avoiding downtimes
A special hint at this point: if possible, use a master-slave replication and build the backups on the slave, so the main system and the performance of the application are not affected. This setup is also good for avoiding long downtimes: if one server crashes, you can switch your application to the other and it stays online. In the meantime you can repair the broken system and then restore the old setup. So if the master breaks, you can switch to the slave and it becomes the master; if the slave fails, just the read requests of your application have to be routed to the master.
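Taking the dump on the slave can be sketched like this - stopping only the SQL thread freezes the data at a consistent point while the IO thread keeps collecting binlogs from the master. Credentials and options are placeholders, and MYSQL/MYSQLDUMP default to echo so the sequence can be dry-run:

```shell
# Sketch: backup on the slave so the master keeps serving the application.
# MYSQL and MYSQLDUMP default to "echo ..." for a dry run; credentials
# and dump options are placeholders.
MYSQL="${MYSQL:-echo mysql}"
MYSQLDUMP="${MYSQLDUMP:-echo mysqldump}"
DUMPFILE="slave_dump_$(date +%Y%m%d_%H%M%S).mysql"
# freeze applying of changes; the IO thread still collects binlogs
$MYSQL -e "STOP SLAVE SQL_THREAD;"
$MYSQLDUMP --all-databases --single-transaction > "$DUMPFILE"
# resume; the slave catches up from the collected binlogs
$MYSQL -e "START SLAVE SQL_THREAD;"
```

When the SQL thread is restarted, the slave simply replays everything it collected during the dump, so no data is lost.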
FromDual offers the tool fromdual_bman, a wrapper for seven different backup methods (the ones mentioned above and combinations of them).
Backup location
A backup is always better than no backup. But if the backup is stored on the same logical disk the website runs on, you may get in trouble: if the disk crashes, your website is offline and you also lose your backup. In this case it is impossible to restore your website on another server and bring it back online. So always store your backups on another logical volume or on another server. If the data is very important, also consider storing your backups in multiple locations, maybe even in other data centers. In case of a fire or something similar, your backup can then be fetched from somewhere else and recovery can run in another data center.
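The "copy it somewhere else and verify it" idea can be sketched in a few lines. For the demo a local directory stands in for the remote target and a small text file stands in for the real tar archive; in real use the copy would go via scp or rsync to another machine:

```shell
# Demo: verify an offsite copy. A local directory stands in for the
# remote target, a text file stands in for the real tar archive.
set -eu
BACKUP=/tmp/dbbackup_demo.tar
TARGET=/tmp/offsite_demo   # placeholder for e.g. user@backuphost:/backups
echo "demo backup payload" > "$BACKUP"
mkdir -p "$TARGET"
cp "$BACKUP" "$TARGET/"
# compare checksums on both sides - silent corruption would show up here
SRC_SUM=$(sha256sum "$BACKUP" | cut -d' ' -f1)
DST_SUM=$(sha256sum "$TARGET/$(basename "$BACKUP")" | cut -d' ' -f1)
[ "$SRC_SUM" = "$DST_SUM" ] && echo "backup copy verified"
```

Checking the checksum on the target side is the important part: a backup that cannot be restored is no backup at all.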
Uuuups queries - accidents on the database
So-called uuuups queries are queries where a wrong statement was accidentally executed on a production system. This mostly happens because somebody runs the query manually by accident, for example after mixing up consoles.
Time is the key, so you have to act immediately!
Stop the database and your application immediately! Set your application to maintenance mode!
Two possibilities: 
1. No database replication
You can only reimport the last backup, whenever it was made. Hopefully it was not made long ago, maybe last night. This way you lose all changes between your last backup and the time of your uuuups query. Or, if you know which changes were made in this time, you can redo them manually or write a script that does the changes for you.
2. A running database replication
You can reimport the last backup. With a replication you automatically get the binary logs, where MySQL writes the database changes to be executed on the slave server. You can use these binary logs for a point-in-time recovery, which means you can recover everything up to the execution of the wrong query. After the import of the dump, you replay the binary logs, which contain either the statements (statement-based replication) or the changed rows (row-based replication). Because all changes are stored in these files, you get everything that happened between your last backup and the time the uuuups query was executed. Do not forget to prevent the uuuups query from being executed again - it is also in the binary logs! With the mysqlbinlog tool you can inspect the logs, find the position of the uuuups query, and skip it during the replay. This has to be done on both servers, the master and the slave. As soon as one of the servers (use the master first) is recovered, you can enable your application again using just this first server. After that you can recover the slave server and restart the replication. If you are an experienced user, you can also recover the master and the slave together, so both systems are back at nearly the same time. But do not mix up the systems, or you have to start from the beginning and your application stays offline even longer.
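The replay around the bad statement can be sketched with mysqlbinlog's --stop-position and --start-position options. The log file name and the positions here are made-up placeholders: in practice you find the uuuups query's position by inspecting the log with mysqlbinlog -v, and the starting point comes from the --master-data comment in the dump. MYSQLBINLOG and MYSQL default to echo so the sketch can be dry-run:

```shell
# Point-in-time recovery sketch; file name and positions are placeholders.
# MYSQLBINLOG and MYSQL default to "echo ..." for a dry run.
MYSQLBINLOG="${MYSQLBINLOG:-echo mysqlbinlog}"
MYSQL="${MYSQL:-echo mysql}"
# replay everything up to just before the uuuups query ...
PART1=$($MYSQLBINLOG --stop-position=368312 mysql-bin.000012)
# ... then skip the bad statement and replay everything after it
PART2=$($MYSQLBINLOG --start-position=368315 mysql-bin.000012)
# in real use the extracted statements are piped into the mysql client
printf '%s\n%s\n' "$PART1" "$PART2" | $MYSQL -u root
```

The binary log itself is never edited; mysqlbinlog only extracts the statements, and the positions decide which part of the log is replayed.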
We hope these hints help you in your daily life with MySQL. There are also other posts about MySQL; because database replication was mentioned a lot in this post, the post about MySQL setup may also be interesting for you.

About Carsten Müller

Carsten Müller has been working for Cocomore since 2005. As a Senior Manager Software Development he conceives and creates complex and comprehensive websites (and does not repair computers ;)). Before that he worked for SevenOne Intermedia, a subsidiary of the ProSiebenSat1 group.

Describing Carsten in three words: Nordic by nature.