
How to configure MySQL High Availability with DRBD and Heartbeat on CentOS 5.3

SkyHi @ Friday, January 22, 2010

In this tutorial, we’re going to go through the entire process of installing, configuring and testing DRBD, Heartbeat and MySQL running in a 2 node cluster environment. This will be a general configuration for learning.
Best practices and security will take a back seat while we learn how all the different pieces work together.
Let’s get started.

Requirements:
In this setup we need two servers. Before starting this step-by-step process, you should be very comfortable with installing and configuring MySQL and other software packages.

I am using the CentOS 5.3 distribution for this tutorial. You will need two servers; I am using VMware Server, which works well for testing this type of setup.

Server 1:
Host Name: CentOS5a
IP Address eth0: 192.168.4.220/24
IP Address eth1: 192.168.1.1/24

Server 2:
Host Name: CentOS5b
IP Address eth0: 192.168.4.222/24
IP Address eth1: 192.168.1.2/24
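
One thing to check before going further: DRBD matches the on <hostname> sections of drbd.conf (written below) against each node's uname -n output, so the hostnames must be set consistently on both machines. It is also convenient to let the nodes resolve each other by name. Illustrative /etc/hosts entries for both nodes, using the eth0 addresses above:

192.168.4.220   centos5a
192.168.4.222   centos5b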

Step-1: Installation

Install the required packages on both nodes:

[root@centos5a ~]# yum install drbd
[root@centos5a ~]# yum install kmod-drbd
[root@centos5a ~]# yum install heartbeat
[root@centos5a ~]# yum install mysql
[root@centos5a ~]# yum install mysql-server
[root@centos5a ~]# yum install mysql-devel

This will install DRBD 8.2, Heartbeat 2.1.3, the DRBD kernel module 8.0.13, and MySQL 5.0.45, which will work just fine for learning DRBD and Heartbeat. (Note that on CentOS 5 the mysql package contains only the client; mysql-server provides the mysqld daemon itself, which is why it was added above.) Now that we have all our software installed, we can start the configuration.
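
Before moving on, you can confirm on each node exactly what landed; the package names here match the yum commands above:

[root@centos5a ~]# rpm -q drbd kmod-drbd heartbeat mysql mysql-server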

Step-2: Configuring DRBD
We're ready to start configuring DRBD on our two-node system. Centos5a will become our primary node, making Centos5b our secondary node. These are our steps for configuring DRBD:
1. Create partitions on both nodes.
2. Create drbd.conf.
o Configure the global DRBD options.
3. Configure the resource, which consists of:
o Disk partitions on Centos5a and Centos5b.
o Network connection between the nodes.
o Error handling.
o Synchronization.

Step-3: On each node, use fdisk to create a type 83 (Linux) partition:

#fdisk /dev/sdb
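
If you haven't driven fdisk interactively before, the dialog looks roughly like this (illustrative; it assumes /dev/sdb is an empty disk and creates one primary partition spanning the whole disk):

n        # new partition
p        # primary
1        # partition number 1
<Enter>  # accept default first cylinder
<Enter>  # accept default last cylinder (whole disk)
t        # change the partition type
83       # Linux
w        # write the table and exit

Repeat this on both nodes so each has an identical /dev/sdb1.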

Step-4: Editing the configuration file.
Now edit the drbd.conf file. It lives at /etc/drbd.conf; make a backup copy before you change it.

#vi /etc/drbd.conf

# Here is a minimal but working configuration file.
global {
    minor-count 1;
}

resource mysql {
    protocol C; # There are A, B and C protocols. Stick with C.
    # incon-degr-cmd "echo 'DRBD Degraded!' | wall; sleep 60; halt -f";
    # If a cluster starts up in degraded mode, it will echo a message to all
    # users, wait 60 seconds, then halt the system.

    on centos5a {
        device /dev/drbd0;          # The name of our drbd device.
        disk /dev/sdb1;             # Partition we wish drbd to use.
        address 192.168.4.220:7788; # Centos5a IP address and port number.
        meta-disk internal;         # Stores meta-data in the lower portion of sdb1.
    }

    on centos5b {
        device /dev/drbd0;          # Our drbd device; must match centos5a.
        disk /dev/sdb1;             # Partition drbd should use.
        address 192.168.4.222:7788; # Centos5b IP address and port number.
        meta-disk internal;         # Stores meta-data in the lower portion of sdb1.
    }

    disk {
        on-io-error detach; # What to do when the lower-level device errors.
    }

    net {
        max-buffers 2048; # Data-block buffers used before writing to disk.
        ko-count 4;       # Peer is dead if this count is exceeded.
        # on-disconnect reconnect; # Peer disconnected, try to reconnect.
    }

    syncer {
        rate 10M;       # Synchronization rate in megabytes/second; good for a 100Mbit network.
        # group 1;      # Used for grouping resources, parallel sync.
        al-extents 257; # Must be prime; number of active sets.
    }

    startup {
        wfc-timeout 0;        # drbd init script will wait infinitely on resources.
        degr-wfc-timeout 120; # 2 minutes.
    }
} # End of resource mysql
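
The same drbd.conf must be present on both nodes. Copy it over (this assumes the /etc/hosts entries from earlier) and sanity-check it; drbdadm dump parses the file and prints the resource back, so it doubles as a syntax check:

[root@centos5a ~]# scp /etc/drbd.conf centos5b:/etc/drbd.conf
[root@centos5a ~]# drbdadm dump mysql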
Step-5: Bringing up DRBD

All the software, drbd.conf, and devices are now in place. Make sure only Centos5a is running, log in as root, then initialize the DRBD meta-data for the mysql resource:
[root@centos5a ~]# drbdadm create-md mysql
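
A full reboot works, but on CentOS 5 you can usually skip it: the init script shipped with the drbd package loads the kernel module and brings the configured resources up:

[root@centos5a ~]# service drbd start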

After that, reboot the Centos5a server (or just start the service as above) and log in as root. Issue the following command:
#cat /proc/drbd
Output will be something like this.

[Screenshot: output of cat /proc/drbd on Centos5a, showing the node in Secondary state]
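
Since the original screenshots are not reproduced here: on this DRBD version the output of cat /proc/drbd looks roughly like the following (illustrative only; your version string and counters will differ):

version: 8.0.13 (api:86/proto:86)
 0: cs:WFConnection st:Secondary/Unknown ds:Inconsistent/DUnknown C r---
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0

The st: field shows the local/peer roles and ds: the disk states; here the local node is Secondary with an Inconsistent disk, and the peer is unknown because Centos5b isn't up yet.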

Note that Centos5a is in Secondary state; we will fix this shortly by promoting it to Primary.

Step-6: Configuring the second server.

Now start up the second node, Centos5b, and issue the following command:

[root@centos5b ~]# drbdadm create-md mysql

Let the process complete, then issue another command:

#cat /proc/drbd

Output will be something like this

[Screenshot: output of cat /proc/drbd, showing both nodes in Secondary/Secondary state]

You can see that both servers are in Secondary state. Now we will promote Centos5a to Primary.

Step-7: Promoting the first node to Primary

Log in to the first node, Centos5a, as root and issue the following command.

[root@Centos5a ~]# drbdadm -- --overwrite-data-of-peer primary mysql

Now verify that the first node really was promoted to Primary by running the following command again:

#cat /proc/drbd

Output will be something like this

[Screenshot: output of cat /proc/drbd, showing Centos5a as Primary and Centos5b as Secondary]

You've now created a two-node cluster. It's very basic; failover is not automatic. We will take care of that with Heartbeat later. First, we need to test DRBD.

Step-8: Testing DRBD

To have a working system, we need to create a filesystem on Centos5a. We do that just as we normally would; the only difference is that we use the /dev/drbd0 device instead of /dev/sdb1:

[root@Centos5a ~]# mkfs.ext3 -L mysql /dev/drbd0

For contrast, here is what happens if you try the same thing on the secondary node:

[root@Centos5b ~]# mkfs.ext3 /dev/drbd0

mke2fs 1.35 (28-Feb-2004)

mkfs.ext3: Wrong medium type while trying to determine filesystem size

That fails because you're on Centos5b, which is Secondary, so /dev/drbd0 is read-only there. Switch back to Centos5a.
Once the filesystem is created, we'll do some simple tests. On Centos5a, mount /dev/drbd0 on /mnt/mysql, change to that directory, touch a few test files, and create a directory.
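A minimal sequence might look like this (it assumes the mount point /mnt/mysql already exists; if not, create it with mkdir -p /mnt/mysql on both nodes):

[root@Centos5a ~]# mount /dev/drbd0 /mnt/mysql
[root@Centos5a ~]# cd /mnt/mysql
[root@Centos5a mysql]# touch test1 test2
[root@Centos5a mysql]# mkdir testdir
[root@Centos5a mysql]# cd /    # step out so the filesystem isn't busy when we unmount

To check whether those files have been replicated, we unmount /mnt/mysql, demote Centos5a to secondary, promote Centos5b to primary, remount /mnt/mysql there, and look for the files. These steps are: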

[root@Centos5a ~]# umount /mnt/mysql

[root@Centos5a ~]# drbdadm secondary mysql

Switch to Centos5b, then:
[root@Centos5b ~]# drbdadm primary mysql

[root@Centos5b ~]# mount /dev/drbd0 /mnt/mysql
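List the mount point to confirm the test files made the trip:

[root@Centos5b ~]# ls -l /mnt/mysql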
Check /mnt/mysql and see what's in there. You should see the files and directories you created on Centos5a! You'll probably notice we didn't make a filesystem on Centos5b for /dev/drbd0. That's because /dev/drbd0 is replicated: when we created the filesystem on Centos5a, it was also created on Centos5b. In fact, anything we write to /dev/drbd0 on Centos5a is automatically replicated to /dev/drbd0 on Centos5b.

 

Next, we'll configure MySQL to use our DRBD device, and practice manually failing MySQL over between the nodes before automating it with Heartbeat. You want to make sure you understand how the entire system works before automating it. That way, if our test files had not shown up on Centos5b, we would know the problem was in DRBD. If we tried to test the entire system as one large piece, it would be much harder to figure out which piece of the puzzle was causing the problem. For practice, return Centos5a to the primary role and double-check your files.
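
As a preview of the MySQL step, the usual approach is to move the data directory onto the replicated device. A minimal sketch, assuming mysql-server is installed, /dev/drbd0 is mounted on /mnt/mysql on the current primary, and the stock /var/lib/mysql datadir:

[root@Centos5a ~]# service mysqld stop
[root@Centos5a ~]# cp -a /var/lib/mysql /mnt/mysql/data
[root@Centos5a ~]# chown -R mysql:mysql /mnt/mysql/data

# Point MySQL at the replicated datadir in /etc/my.cnf:
# [mysqld]
# datadir=/mnt/mysql/data

[root@Centos5a ~]# service mysqld start

A manual failover is then: stop mysqld, unmount /mnt/mysql, demote the node, promote the other node, mount /dev/drbd0 there, and start mysqld on it.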


Reference: http://almamunbd.wordpress.com/2009/05/28/how-to-configure-mysql-high-availability-with-drbd-and-heartbeat/