Everybody talks about doing it, but very few developers follow through - you guessed it, data backups! Perhaps it's a matter of getting burnt at least once, or having proper company policies in place (aka having a boss breathe down your neck about it), but the practice remains surprisingly elusive. In any case, over the weekend I had to research how to migrate a sizable MySQL database, so below are a few tips from the trenches. In the end, the entire process is surprisingly simple.
MySQL tools & strategies
As popular as MySQL is, there aren't many third-party backup tools that will do a proper backup of your data. If you want to avoid data corruption, half-committed transactions, and many other problems, your best bet is one of the dedicated MySQL backup utilities: mysqldump, mysqlhotcopy, mysqlsnapshot, and if you have the bucks, ibbackup. To select the proper tool for the job, first answer two questions: online vs. offline backup, and data dump vs. raw.
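Since several of the tools below only work with specific storage engines, it also helps to know what table types you're dealing with. A quick check (the database name is just a placeholder):
> table-engines
# list every table along with its storage engine (MyISAM, InnoDB, ...)
mysql -u root -pPassword -e "SHOW TABLE STATUS\G" database-name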
Online vs. Offline backups
Online backups are often the preferred method, as database downtime may simply be unacceptable. Having said that, offline backups are usually faster and less error-prone, since we don't have to worry about running transactions, table locks, orphaned processes, and other consistency problems. If you can afford a brief period of downtime, or if you're lucky enough to have master-slave replication, offline is the way to go.
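For example, with master-slave replication in place you can take the slave down completely and copy its data directory while the master keeps serving traffic - the slave simply catches up once it is restarted. A minimal sketch, assuming a stock init script and the default data directory:
> slave-backup
# stop the slave's MySQL server, grab a raw copy of its data, bring it back up
mysqladmin -u root -pPassword shutdown
cp -rp /var/lib/mysql /backup/mysql-$(date +%Y%m%d)
/etc/init.d/mysql start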
Data dump vs. Raw backups
A data dump results in a sequence of SQL statements which can be run against any database to recreate the database structure and the data itself. mysqldump is the premier tool in this space, and it can be used on any table type, locally and even over the network. However, data dumps incur a lot of overhead from the extra SQL syntax, result in larger data files, are much more CPU intensive, and most importantly, require a full index rebuild when the data is restored!
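To see where the overhead comes from, the resulting file is nothing but plain SQL along these lines (table and rows are made up for illustration):
> dump-excerpt
-- typical fragment of a mysqldump output file
DROP TABLE IF EXISTS `users`;
CREATE TABLE `users` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM;
INSERT INTO `users` VALUES (1,'alice'),(2,'bob');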
Arguably the most efficient way to back up your database is through a raw snapshot of the MySQL files as they exist on disk. Because we skip all the conversion steps, the process is much more efficient than a dump. To perform a proper backup of a MyISAM table, you have to copy the data and the index files; for InnoDB you need to back up the entire tablespace and the associated transaction logs.
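In concrete terms, and assuming the default /var/lib/mysql data directory, a raw backup boils down to grabbing the right set of files (only safe while the server is stopped or the tables are locked):
> raw-files
# MyISAM: table definition, data, and index files
cp -p /var/lib/mysql/database-name/*.frm /var/lib/mysql/database-name/*.MYD /var/lib/mysql/database-name/*.MYI /backup/location/
# InnoDB: the shared tablespace plus the transaction logs (and the .frm files)
cp -p /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile* /backup/location/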
mysqldump / mysqlhotcopy / mysqlsnapshot / ibbackup
mysqldump - (online, dump) - the most commonly used tool in our toolkit. It will perform a full data dump from an online database by locking the tables and writing a hefty file to disk or a network location. It is ideally suited for small databases, as the process is not very efficient.
> mysqldump
# typical mysql dump backup and restore usage
mysqldump -u root -pPassword -x --all-databases > db_dump.sql
mysql -u root -pPassword < db_dump.sql
# dump into the 'backup' folder (written on the server host): a .sql schema file and a .txt data file per table
mysqldump -u root -pPassword -T backup --fields-terminated-by=',' database-name
# compress the dumped data on the fly
mysqldump -u root -pPassword --all-databases | bzip2 -c > db_dump.bz2
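Restoring the variants above is equally simple - a couple of sketches, using the same file and database names as above:
> mysqlrestore
# restore the bzip2-compressed dump
bunzip2 -c db_dump.bz2 | mysql -u root -pPassword
# reload one of the per-table data files produced by the -T dump
mysqlimport --fields-terminated-by=',' -u root -pPassword database-name backup/table-name.txt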
mysqlhotcopy - (online, raw) - will perform a full raw backup of any database consisting of ISAM or MyISAM tables. It operates by acquiring a read lock on all tables, copying them, and then releasing the lock.
> mysqlhotcopy
# perform an online backup into /backup/location
mysqlhotcopy -u root -p password database_name /backup/location
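There is no matching restore command - since the output is just raw table files, you copy them back into the data directory (default path assumed below) and make sure the mysql user owns them:
> mysqlhotcopy-restore
# with the server stopped, put the table files back and fix ownership
cp -rp /backup/location/database_name /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql/database_name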
mysqlsnapshot - (online, raw) - a great tool for taking a full snapshot of any MySQL database without bringing it offline. You can configure it to compress the data and/or produce a separate tar file for each database. The only downside: like mysqlhotcopy, it handles MyISAM tables only.
> mysqlsnapshot
# save a full database snapshot of an online database into /backup/location
mysqlsnapshot -u root -pPassword -s /backup/location
# restore a snapshot
tar -xvf /backup/location/db.tar
ibbackup - (online, raw) - will perform an online backup of InnoDB and MyISAM tables on any MySQL database. A great tool, but it comes with a price tag. Then again, if you're a heavy InnoDB user, it may well be worth the price.
> ibbackup
# perform online backup of MyISAM / InnoDB tables
ibbackup /etc/my.cnf /etc/ibbackup.cnf
# restore recent backup (as configured in ibbackup.cnf)
ibbackup --restore /etc/ibbackup.cnf
cp, scp, nc - (offline, raw) - if you can afford to take your database offline, raw backups are as simple as doing a copy, or a remote transfer to a different machine. It may sound crude, but it is arguably the safest way to get a snapshot of your data!
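A bare-bones offline run might look like this (init script, paths, and hostnames are assumptions):
> offline-raw
# stop the server, ship the raw data directory to another machine, restart
/etc/init.d/mysql stop
tar -czf - /var/lib/mysql | ssh user@hostB 'cat > /backup/mysql-raw.tar.gz'
/etc/init.d/mysql start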
Network backups: Netcat & mysqldump
For full safety you should back up your data to multiple drives and, even better, to multiple computers. Thankfully, this process is remarkably simple with the help of netcat, or even mysqldump itself:
> mysqlnetwork
# Replicate / populate a remote database in a single step (redirect data dump)
mysqldump --opt --compress --user=username database | mysql --user=username2 --password=p2 --host=hostB -C database
# data backup with netcat
# backup recipient - listen on port 6000, write received data to backup.bz2
nc -l 6000 > backup.bz2
# backup initiator - dump the database, compress, and send to hostB:6000
mysqldump --opt -u user1 -p database | bzip2 -c | nc -w1 hostB 6000
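# on the recipient, decompressing and loading the dump is one more step
# (the target database must already exist; names are placeholders)
bunzip2 -c backup.bz2 | mysql -u root -pPassword database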
A little overwhelming at first, but once you wrap your head around online vs. offline, and raw vs. dump, it's not all that bad. And let me tell you, once automated, you also tend to sleep far better at night!
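For what it's worth, automating the nightly run is a single cron entry - something along these lines, with paths and credentials adjusted to taste (note that % must be escaped inside a crontab):
> cron
# 3am dump of all databases, compressed and dated
0 3 * * * mysqldump -u root -pPassword --all-databases | bzip2 -c > /backup/db_dump-$(date +\%F).bz2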
Reference: http://www.igvita.com/2007/10/10/hands-on-mysql-backup-migration/