Steps to migrate a running machine using LVM on a single drive to mirrored drives with Linux RAID 1 (mirror) and LVM. The machine stays online while the data is migrated using LVM, too!
This document is based on a how-to article for Debian Etch (see the reference at the end for the original). This version was tested using CentOS 5.3. It may work for other versions of CentOS and other Linux distributions using LVM on a single drive.
In my example the main drive is /dev/sda, where /dev/sda1 is an ext3 /boot partition and /dev/sda2 is our LVM PV. Our new hard drive is /dev/sdb and is the same model and size as /dev/sda. Your LVM naming may differ; I have outlined steps to get the PV, VG, and LV names of your specific setup.
Remember to BACKUP YOUR DATA! I make no guarantees this will work, and you should back up all data from BOTH drives used in this tutorial to an external source that is disconnected from the machine before starting.
1.
Insert a second drive
This step will vary between systems. In short, you need to put the new drive that will be part of the RAID mirror into the machine. The drive should be identical in size to the current one and, if possible, the same model.
Throughout this document the new drive will be referenced as /dev/sdb.
2.
Print current partition layout for reference
We will need the current partition layout of /dev/sda (our original drive with the LVM partition). Our example has only 2 partitions: /dev/sda1 (the boot partition) and /dev/sda2 (the LVM partition).
Print the current layout with the following command:
# fdisk -l /dev/sda
3.
Partition the second drive
We now need to partition the second drive identically to the first. Use the starting and ending cylinders from the first drive's partition layout to ensure the second drive matches (an sfdisk shortcut is sketched after the sub-steps below).
Partition the new drive with the following command:
# fdisk /dev/sdb
a. Press 'n' to create a new partition
b. Press 'p' for primary partition
c. Press '1' to create the first primary partition (this step might be completed automatically as there are no partitions yet)
d. Press '1' to start it at the first cylinder
e. Type in the number of the last cylinder for the original /dev/sda1 partition, in our example it is '13'
f. Type in 't' to set the partition type
g. Type in the partition number
h. Type in 'fd' for Linux RAID type
i. Perform sub-steps a-h again for the second primary partition (partition number '2', starting right after the first partition and ending at the last cylinder, '1044' in our example)
j. Type in 'w' to write the changes
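As an alternative to entering the cylinder numbers by hand, the partition table can usually be copied in one shot with sfdisk. This shortcut is not from the original article, so double-check the result with fdisk -l afterwards:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
You will still need to change both partition types on /dev/sdb to 'fd' (sub-steps f-h above).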
4.
Compare the partition layout between the two drives
We should now have /dev/sdb partitioned with 2 Linux RAID partitions the same size as the /dev/sda partitions (which are likely not Linux RAID type; this is OK). Verify the partition sizes line up:
# fdisk -l
Our example system now outputs:
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1044 8281507+ 8e Linux LVM
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 13 104391 fd Linux raid autodetect
/dev/sdb2 14 1044 8281507+ fd Linux raid autodetect
5.
Create our new RAID mirror devices
Zero the superblocks in case the new drive happened to be part of a Linux software RAID before:
# mdadm --zero-superblock /dev/sdb1
# mdadm --zero-superblock /dev/sdb2
Create the new RAID devices:
# mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 missing /dev/sdb1
# mdadm --create /dev/md1 --verbose --level=1 --raid-devices=2 missing /dev/sdb2
We use the word 'missing' in place of the first drive because we will add that drive to the array only after we confirm the machine can boot and all the data is on the array (which will currently have only 1 drive, the newly added /dev/sdb).
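To confirm the two degraded arrays were created, have a look at /proc/mdstat. On our example system the output looks roughly like the following (device order and exact block counts will vary):
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1]
      8281408 blocks [2/1] [_U]
md0 : active raid1 sdb1[1]
      104320 blocks [2/1] [_U]
unused devices: <none>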
6.
Build the /etc/mdadm.conf
There may already be an mdadm.conf, in which case you may only need to add the arrays with the mdadm --examine line. In our example we did not have an /etc/mdadm.conf and needed to build one by hand.
The following commands will build a simple /etc/mdadm.conf that sends notifications to root (our example machine also has root aliased to an external email address).
# echo "DEVICE partitions" >> /etc/mdadm.conf
# echo "MAILADDR root" >> /etc/mdadm.conf
# mdadm --examine --scan >> /etc/mdadm.conf
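The resulting /etc/mdadm.conf should look roughly like the example below. The UUIDs shown are placeholders; mdadm --examine --scan fills in the real values for your arrays:
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx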
7.
Format the new RAID boot partition
We can now format the RAID device that will be used in place of our boot partition on /dev/sda1. In our example system the original partition is ext3 mounted as /boot.
Format it with the following command:
# mkfs.ext3 /dev/md0
8.
Mount and build the new boot partition
We need to copy over the existing boot partition to the new RAID device:
# mkdir /mnt/boot
# mount /dev/md0 /mnt/boot
# cp -dpRx /boot/* /mnt/boot/
# umount /mnt/boot
9.
Mount new boot partition in place of old
We will now unmount the current /boot partition and mount our new /dev/md0 device in place of it to prepare for making a new initrd image.
# umount /boot
# mount /dev/md0 /boot
10.
Build new initrd image
Now build a new initrd image that contains the dm-mirror and other RAID modules, to be safe (without these, boot will fail because the new /boot on the RAID device will not be recognized).
# mv /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -v /boot/initrd-$(uname -r).img $(uname -r)
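If you would rather not rely on mkinitrd detecting the RAID modules on its own, the RHEL/CentOS mkinitrd accepts a --with switch to force a module into the image (use -f to overwrite the image you just built). This is optional and not part of the original procedure:
# mkinitrd -f -v --with=raid1 /boot/initrd-$(uname -r).img $(uname -r)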
11.
Install grub on both drives
Install grub to the MBR of both physical drives. (In the next step we will edit /etc/fstab to reflect our new boot partition, /dev/md0.)
# grub-install /dev/sda
# grub-install /dev/sdb
If grub-install complains that it cannot find /dev/sdb or that it is not in the BIOS drive list, you may need to use the --recheck switch:
# grub-install --recheck /dev/sdb
# grub-install --recheck /dev/sda
12.
Edit /etc/fstab to reflect new /boot location
Edit the /etc/fstab file to replace LABEL=/boot with /dev/md0. Your new /boot partition line should look something like:
/dev/md0 /boot ext3 defaults 1 2
13.
Time for the first reboot!
Now, reboot the system to get it up and running on the new boot partition and off of /dev/sda1. This will confirm that we can safely change the /dev/sda1 partition ID to Linux RAID type and then add the partition to the md0 mirror.
# reboot
14.
Take a breather... Verify /boot is mounted using the RAID device
You are about halfway done. Hopefully everything is up and running after your first reboot.
Check the mount command to verify that /boot is using the new /dev/md0 device:
# mount
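On our example system the relevant line of the mount output looks like this:
/dev/md0 on /boot type ext3 (rw)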
If it is, then you are all set to continue... after a short break, of course.
15.
Change first drive /dev/sda1 to Linux RAID type
Modify the old /boot device (/dev/sda1) to be Linux RAID type. This will prepare it so it can be added to our RAID device /dev/md0 (which our new /boot is using).
# fdisk /dev/sda
a. Type in 't' to set the partition type
b. Type in the partition number (1)
c. Type in 'fd' for Linux RAID type
d. Type in 'w' to write the changes and exit fdisk
16.
Add /dev/sda1 to /dev/md0 mirror
Remove the old label and add the /dev/sda1 partition to the /dev/md0 mirror:
# e2label /dev/sda1 ""
# mdadm --add /dev/md0 /dev/sda1
17.
Wait for the /dev/md0 mirror to rebuild
Watch the /proc/mdstat output to make sure the syncing of the newly completed /dev/md0 device finishes before continuing.
# watch cat /proc/mdstat
Once it is complete you may continue. Since the /boot partition is usually only a couple hundred megabytes this usually completes very quickly.
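While the sync is running you will see a 'recovery' progress line for md0; once it completes, both members are listed and the status reads [2/2] [UU], roughly like this (block counts are illustrative):
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]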
18.
Create new LVM physical volume
Create a new LVM PV on the second RAID device. We will extend the current LVM volume group onto this new PV.
# pvcreate -v /dev/md1
19.
Extend volume group to new PV
Use command `vgs` to show the name of the LVM VG. In our example it is VolGroup00.
Run the following command replacing VolGroup00 with the actual name of your volume group. You may have to do this step for each VG if you have multiple:
# vgextend -v VolGroup00 /dev/md1
20.
Move logical volume to new PV
Use `pvdisplay` to verify the VG is now using both drives:
# pvdisplay
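A more compact check is pvs; after the vgextend you should see both physical volumes listed under VolGroup00, roughly like this (sizes here are illustrative):
# pvs
  PV         VG         Fmt  Attr PSize PFree
  /dev/md1   VolGroup00 lvm2 a-   7.88G 7.88G
  /dev/sda2  VolGroup00 lvm2 a-   7.88G     0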
Now we can move the data to the new PV.
Use `lvs` to get the LV names (you will need to move all of them). The following command moves everything on /dev/sda2 in one pass (see the note after this step if you would rather move one LV at a time):
# pvmove -v /dev/sda2 /dev/md1
Wait a while.... It will output % done every 15 seconds.
Depending on the size of your original drive this may take an hour or more. It is now copying everything from the first drive to the second drive while keeping the machine online.
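Two notes that may help here, both standard pvmove behavior rather than anything specific to this article: you can move a single logical volume at a time with the -n switch (LogVol00 below is the default CentOS name; replace it with yours), and if a pvmove is interrupted, running pvmove with no arguments restarts the unfinished move.
# pvmove -v -n LogVol00 /dev/sda2 /dev/md1
# pvmove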
21.
Remove the VG from the old PV
Once all the data is finished moving, we want to 'reduce' the volume group to only use the new physical volume, and then remove the old physical volume so it can no longer be used for LVM.
# vgreduce -v VolGroup00 /dev/sda2
# pvremove /dev/sda2
22.
Build new initrd image (again)
Build the new initrd image again. This was needed on our example system to keep it from throwing a kernel panic at boot. We are being safe by doing this step again.
# mv /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -v /boot/initrd-$(uname -r).img $(uname -r)
23.
Install grub on both drives (again)
Double check that grub is installed correctly.
Somewhere between the Step 13 reboot and the Step 24 reboot, grub became corrupted on our example system (it froze with only 'GRUB' on screen and no other indicator), so I had to reinstall grub on each drive to be safe. The fix was to install grub the traditional way from the grub shell, using root (hd0,0) and setup (hd0), and then the same for hd1 (a sketch of that session follows the commands below).
# grub-install --recheck /dev/sdb
# grub-install --recheck /dev/sda
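For reference, the 'traditional way' mentioned above is done from the grub shell. The sketch below assumes /boot is the first partition on each disk, so grub's zero-based naming makes it (hd0,0) and (hd1,0):
# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit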
24.
Reboot again!
Reboot the machine again:
# reboot
Did everything come back up OK? Great, we are almost done!
25.
Change first drive /dev/sda2 to Linux RAID type
Change the old LVM partition to Linux RAID type:
# fdisk /dev/sda
a. Type in 't' to set the partition type
b. Type in the partition number (2)
c. Type in 'fd' for Linux RAID type
d. Type in 'w' to write the changes and exit fdisk
Verify the partition type is now Linux RAID:
# fdisk -l
26.
Add /dev/sda2 to /dev/md1 mirror
We are now ready to add the old /dev/sda2 LVM partition to the /dev/md1 RAID device:
# mdadm --add /dev/md1 /dev/sda2
27.
Wait for the /dev/md1 mirror to rebuild
Watch the /proc/mdstat output to make sure the syncing of the newly completed /dev/md1 device finishes before rebooting.
# watch cat /proc/mdstat
28.
Last reboot!
Reboot the machine once more to verify that everything is working correctly:
# reboot
Is the machine back up and running? No data loss?
Wonderful! You have just completed migrating your single drive machine to a two drive Linux RAID mirror.
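A few quick commands to convince yourself everything landed where it should; these are standard checks rather than part of the original procedure:
# cat /proc/mdstat       (both md0 and md1 should show [2/2] [UU])
# pvdisplay              (the only PV should now be /dev/md1)
# df -h /boot            (/boot should be mounted from /dev/md0)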
Conclusion
While it is always best to create RAID setups at the time of install, there may be a desire to add RAID after the fact when the original install did not require it, the hardware was not there (only 1 drive available), or the know-how wasn't there at the time.
In a pinch, this may help to bring a production system some drive redundancy. I hope you found this tutorial helpful!
Reference: http://community.spiceworks.com/how_to/show/340