Last week we explained various options for clustering and providing HA-MySQL. In the failover category, we mentioned DRBD as the premier way to accomplish a rock-solid redundant MySQL setup. Here is how you can implement DRBD with Heartbeat for MySQL.
Before we start, let’s quickly cover the architectural configuration of your two servers, and talk about performance. First, you are going to use two “commodity” servers, which is not to imply slow or generic. You want to dedicate your fastest servers, because this type of configuration comes at a cost; it is doing much more than you may think a primary/secondary failover setup would. Therefore, I/O capability is probably the most important aspect. As Florian Haas of LINBIT (the creators of DRBD) points out, you can run two separate instances at once to avoid under-utilizing your secondary server. Each node is then the primary for its main instance, and the secondary for the other node's instance.
Second, conceptually, you will configure MySQL to live on a DRBD replicated device. Heartbeat will monitor MySQL availability, and in the event a failover is necessary, the secondary server will mount the file system, steal the virtual IP, and start up MySQL.
Finally, performance: Do not take shortcuts. Yes, you need to ensure DRBD has a dedicated network interface to use. Also, spend the time optimizing as many aspects of your I/O subsystem as possible. With DRBD, every little bit helps, and cutting corners in the initial setup phase often means that you have to live with your choices (or schedule a downtime).
A deployment my team recently completed used two Dell M600 blades with dual quad-core Xeons, 16GB of RAM, dual 146GB SAS drives and, of course, dual GigE network ports. That hardware is fast, but if great care is not taken, this type of configuration can quickly slow down even servers of this class.
Down to Business
The steps we must take are:
- Create partitions, configure DRBD replication, and create a file system
- Make MySQL use the DRBD volume for its DB store location
- Configure Heartbeat to monitor MySQL, an IP, and the DRBD volume
Step 1: Configure DRBD
First we need to create a partition. You can do this with LVM to allow future resizing of the DRBD volume, but know that resizing cannot be done live. This gets a little confusing at times, so here is the summary: you will create a partition and give it to DRBD, which will create its own device. Then, you will create a file system on top of the /dev/drbd0 device. So create the first one that we will give to DRBD; ours ended up being /dev/vg00/drbd0. You must repeat the same steps on the secondary node as well.
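If you go the LVM route, creating the backing logical volume might look like this (a minimal sketch: the vg00 volume group comes from the example path above, while the 100G size is purely a placeholder):

# run on both nodes; size the LV for your data set plus headroom
lvcreate --name drbd0 --size 100G vg00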
Second, we get to configure DRBD. The sample configuration below is the basic set needed to get it working. You will probably want to adjust the sync rate to allow DRBD to use more bandwidth, as well as various timeout settings and buffer tweakables. Maybe we will write a followup article about DRBD tuning, but our scope at this point is to get it working.
global { usage-count yes; }

common {
    protocol C;
    disk { on-io-error detach; }
    syncer { rate 10M; }
}

resource mysql {
    startup {
        wfc-timeout 0;
        degr-wfc-timeout 120;
    }
    on host1.fqdn {
        device /dev/drbd0;
        disk /dev/vg00/drbd0;
        address 1.1.1.1:8000;
        meta-disk internal;
    }
    on host2.fqdn {
        device /dev/drbd0;
        disk /dev/vg00/drbd0;
        address 1.1.1.2:8000;
        meta-disk internal;
    }
}

You can now tell DRBD to create its device with:

drbdadm create-md mysql

Run this command on both the primary and secondary nodes, and observe /proc/drbd. Both servers should report the state as Secondary/Secondary at this time.

To actually enable replication, you must promote one server to Primary status with:

drbdadm -- --overwrite-data-of-peer primary mysql

You can watch /proc/drbd, and when all data is sync’d up, the output will look like this on the Primary node:

0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
ns:319107812 nr:2680 dw:319109368 dr:6489590 al:3631542
bm:122 lo:0 pe:0 ua:0 ap:0 oos:0
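If you would rather not re-run cat by hand during the initial sync, a trivial sketch:

# refresh the status every two seconds; done when ds: reads UpToDate/UpToDate
watch -n2 cat /proc/drbd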
Congratulations, you now have a replicated volume! Finally, create the file system that we will mount. Using /dev/drbd0, go ahead and create an EXT3 file system. Only do this on the primary node, as the changes will replicate to the other server.
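Creating it might look like this (run on the primary only, and against the DRBD device, not the backing disk):

# primary node only -- the resulting writes replicate to the secondary
mkfs.ext3 /dev/drbd0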
Step 2: Give it to MySQL
MySQL needs to store its data on this DRBD-replicated volume, so ensure it is mounted now. If you wish to use /disk/mysql as the mount point, for example, you would edit my.cnf thusly:

datadir=/disk/mysql
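If the volume is not mounted yet, doing so might look like this (a sketch; /disk/mysql is just the example mount point from above):

# create the mount point on BOTH nodes -- the secondary needs it for failover
mkdir -p /disk/mysql
# mount only on the current primary; heartbeat will manage this later
mount /dev/drbd0 /disk/mysql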
You will want to stop MySQL and rsync the contents of its existing datadir to the new location, as sketched below. After making sure DRBD is caught up (/proc/drbd reports UpToDate/UpToDate), restart MySQL and ensure everything is happy.
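A minimal sketch of that migration, assuming a stock datadir of /var/lib/mysql and an init script named mysql (both vary by distribution):

# stop MySQL, copy the data over, fix ownership, then bring it back up
/etc/init.d/mysql stop
rsync -av /var/lib/mysql/ /disk/mysql/
chown -R mysql:mysql /disk/mysql
/etc/init.d/mysql start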
Step 3: Heartbeat Failover
Heartbeat can be amazingly complex, or amazingly simple. To just get it working, this configuration is fairly simple, but do know that you will want to spend some time with the documentation if this is your first Linux-HA experience. The haresources file for this configuration needs:
node1.fqdn 1.1.1.3 drbddisk::mysql Filesystem::/dev/drbd0::/disk/mysql::ext3 mysqld
This line lists the primary node, the virtual IP to use, and the other resources that are managed. Order matters, so ensure you list the IP before mysqld (it’s needed to start MySQL), and the file system before mysqld for the same reason. Once this is done, you can start the heartbeat service and everything should be working. Check the logs on both machines to ensure sanity.

Note: we did not mention configuring the IP address at all. Do not be tempted to put it in the normal places, because we want heartbeat alone to manage bringing up and down the network interface. The same applies to the file system: do not put it in /etc/fstab.
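One note for first-timers: haresources is read alongside Heartbeat's main ha.cf (and an authkeys file), which we have not covered here. Purely as a hedged sketch, assuming eth1 is the dedicated link between the nodes, a minimal ha.cf might look like:

# minimal ha.cf sketch; eth1 as the dedicated heartbeat link is an assumption
# (an authkeys file is also required; see the Linux-HA documentation)
keepalive 2
deadtime 30
initdead 120
udpport 694
bcast eth1
auto_failback off
node host1.fqdn
node host2.fqdn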
The quickest (and safest) way to test a failover is to simply stop the heartbeat service on the primary node.
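That test might look like the following sketch (the virtual IP and mount point are the examples from above; the remote SELECT assumes an account allowed to connect over the network):

# on the primary: trigger a controlled failover
/etc/init.d/heartbeat stop
# on the secondary: confirm the takeover
cat /proc/drbd                    # should now report st:Primary/...
df -h /disk/mysql                 # file system should be mounted here
mysql -h 1.1.1.3 -e 'SELECT 1'    # MySQL answers on the virtual IP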
There are many other ways to configure DRBD, including a Primary/Primary setup if you wish to run GFS and mount the file system on two nodes at the same time. This configuration, however, gets you an extremely robust MySQL setup that is not dependent on any single piece of hardware.