
Thursday, April 29, 2010

Mount LVM partitions from an external hard drive

SkyHi @ Thursday, April 29, 2010

Mount LVM partitions from an external hard drive

Q Is it possible, and if so how, to mount LVM partitions from an external hard drive? I'm thinking of my old Fedora system drive from which I would like to retrieve a single file without having to boot from it.

A As long as you have the LVM tools installed on the distro you are booting, you can mount LVM partitions from any disk (I even did it from a USB key once). Run

vgscan
vgchange -a y

as root and all the partitions should have devices created in the form /dev/volumegroup/logicalvolume, which you can then mount in the usual way:

mount /dev/volumegroup/logicalvolume /mnt/somewhere
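
If you are not sure what the volume group and logical volume are called, `lvscan` will list them after the scan. When you have finished with the files, it is good practice to unmount and deactivate the volume group before unplugging the external drive, so the LVM metadata is left clean. A minimal sketch, assuming the same placeholder names as above:

umount /mnt/somewhere
vgchange -a n volumegroup

After `vgchange -a n` the /dev/volumegroup/* device nodes go away and the drive can be detached safely.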


REFERENCES
http://www.tuxradar.com/answers/296

Mounting a Linux LVM volume on an external hard drive

SkyHi @ Thursday, April 29, 2010
You do not mount a partition of type "Linux LVM" the same way you mount a partition using a standard Linux file system (e.g. ext2, ext3).

# fdisk -l /dev/hda

Disk /dev/hda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14       19457   156183930   8e  Linux LVM

# mount /dev/hda2 /tmp/mnt
mount: /dev/hda2 already mounted or /tmp/mnt busy



First, let's determine the volume group containing the physical volume /dev/hda2.





# pvs
  PV         VG         Fmt  Attr PSize   PFree
  /dev/hda2  VolGroup01 lvm2 a-   148.94G 32.00M
  /dev/hdb2  VolGroup00 lvm2 a-   114.94G 96.00M


Next, let's list the logical volumes in VolGroup01.





# lvdisplay /dev/VolGroup01
  --- Logical volume ---
  LV Name                /dev/VolGroup01/LogVol00
  VG Name                VolGroup01
  LV UUID                zOQogm-G8I7-a4WC-T7KI-AhWe-Ex3Y-JVzFcR
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                146.97 GB
  Current LE             4703
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/VolGroup01/LogVol01
  VG Name                VolGroup01
  LV UUID                araUBI-4eer-uh5L-Dvnr-3bI6-4gYg-APgYy2
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                1.94 GB
  Current LE             62
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:3


The logical volume I would like to "mount" (in purely the computing-related sense) is /dev/VolGroup01/LogVol00. The other logical volume is a swap partition.





# mount /dev/VolGroup01/LogVol00 /tmp/mnt
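
Note that the /dev/VolGroup01/LogVol00 device node only exists once the volume group has been activated (the vgscan / vgchange step covered in the post above). If the device node is not there yet, a quick sketch of activating just this volume group first, using the VolGroup01 name from the output above:

# vgchange -a y VolGroup01
# mount /dev/VolGroup01/LogVol00 /tmp/mnt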

REFERENCES
http://www.brandonhutchinson.com/Mounting_a_Linux_LVM_volume.html


I've not much experience with LVM, but I did have a similar problem to yours, so here's how I solved it (Ubuntu 7.04 server).



1. Install lvm2:



sudo apt-get install lvm2

sudo cp -r /lib/lvm-200/ /lib/lvm-0



2. Take note of the LV Name for the volume you want to mount from the
output of the following command (/dev/VolGroup00/LogVol00 in my case):



sudo lvdisplay



3. Run the following commands to mount the logical volume:



sudo modprobe dm-mod

sudo vgchange -ay

sudo mkdir /mnt/old_hd

sudo mount /dev/VolGroup00/LogVol00 /mnt/old_hd



Worked for me!

REFERENCES
http://ubuntuforums.org/showthread.php?t=428292

Monday, February 1, 2010

Setup additional LVM in CentOS 5.2

SkyHi @ Monday, February 01, 2010

This post will cover the setup of an additional LVM volume in CentOS 5.2 running on XenServer Express Edition. I will not cover the installation of LVM itself because I assume the LVM tools are already installed. Let's begin.

1. Print out the partitions of all hard disks using the command below: -

[root@ctos5264a ~]# fdisk -l

Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvda1 * 1 13 104391 83 Linux
/dev/xvda2 14 3916 31350847+ 8e Linux LVM

Disk /dev/xvdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdb doesn't contain a valid partition table

Disk /dev/xvdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdc doesn't contain a valid partition table

Disk /dev/xvde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvde doesn't contain a valid partition table

2. Next, create one new partition on /dev/xvdb with the Linux LVM partition type, as below: -

[root@ctos5264a ~]# fdisk /dev/xvdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel. Building a new DOS disklabel. Changes will remain in memory only, until you decide to write them. After that, of course, the previous content won't be recoverable.

The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

2a. In fdisk, press "n" to create a new partition. Next, press "p" to select a primary partition, then press "1" for the first partition. Then type "1" to select the first cylinder of the partition, followed by "1305" to select the last cylinder. The partition will be created in a second.

Command (m for help): n <-- hit ENTER
Command action
e extended
p primary partition (1-4)
p <-- ENTER
Partition number (1-4): 1 <-- ENTER
First cylinder (1-1305, default 1): 1 <-- ENTER
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305): 1305 <-- ENTER

2b. You are still in fdisk. Press "p" to print the partition table.

Command (m for help): p <-- ENTER

Disk /dev/xvdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvdb1 1 1305 10482381 83 Linux

2c. To change the partition type, press "t" followed by "L" to list out the codes.

Command (m for help): t <-- ENTER
Selected partition 1
Hex code (type L to list codes): L <-- ENTER

0 Empty 1e Hidden W95 FAT1 80 Old Minix be Solaris boot
1 FAT12 24 NEC DOS 81 Minix / old Lin bf Solaris
2 XENIX root 39 Plan 9 82 Linux swap / So c1 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 83 Linux c4 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 84 OS/2 hidden C: c6 DRDOS/sec (FAT-
5 Extended 41 PPC PReP Boot 85 Linux extended c7 Syrinx
6 FAT16 42 SFS 86 NTFS volume set da Non-FS data
7 HPFS/NTFS 4d QNX4.x 87 NTFS volume set db CP/M / CTOS / .
8 AIX 4e QNX4.x 2nd part 88 Linux plaintext de Dell Utility
9 AIX bootable 4f QNX4.x 3rd part 8e Linux LVM df BootIt
a OS/2 Boot Manag 50 OnTrack DM 93 Amoeba e1 DOS access
b W95 FAT32 51 OnTrack DM6 Aux 94 Amoeba BBT e3 DOS R/O
c W95 FAT32 (LBA) 52 CP/M 9f BSD/OS e4 SpeedStor
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a0 IBM Thinkpad hi eb BeOS fs
f W95 Ext'd (LBA) 54 OnTrackDM6 a5 FreeBSD ee EFI GPT
10 OPUS 55 EZ-Drive a6 OpenBSD ef EFI (FAT-12/16/
11 Hidden FAT12 56 Golden Bow a7 NeXTSTEP f0 Linux/PA-RISC b
12 Compaq diagnost 5c Priam Edisk a8 Darwin UFS f1 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor a9 NetBSD f4 SpeedStor
16 Hidden FAT16 63 GNU HURD or Sys ab Darwin boot f2 DOS secondary
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fd Linux raid auto
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fe LANstep
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid ff BBT
1c Hidden W95 FAT3 75 PC/IX

2d. Enter "8e" to select Linux LVM as the partition type and lastly press "w" to write the partition table.

Hex code (type L to list codes): 8e <-- ENTER
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w <-- ENTER
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

3. Print out the partition table of /dev/xvdb using the command below: -

[root@ctos5264a ~]# fdisk -l /dev/xvdb

Disk /dev/xvdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvdb1 1 1305 10482381 8e Linux LVM

4. You can proceed to create a new partition on /dev/xvdc and /dev/xvde using the steps above.
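
If you would rather not repeat the interactive fdisk session for each remaining disk, sfdisk can create the same single, full-size partition of type 8e non-interactively. This is only a sketch of a shortcut, not part of the original walkthrough, so double-check the device names before running it:

[root@ctos5264a ~]# echo ',,8e' | sfdisk /dev/xvdc
[root@ctos5264a ~]# echo ',,8e' | sfdisk /dev/xvde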

5. After you have created the new partitions, let's print them out and verify using the command below: -

[root@ctos5264a ~]# fdisk -l

Disk /dev/xvda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvda1 * 1 13 104391 83 Linux
/dev/xvda2 14 3916 31350847+ 8e Linux LVM

Disk /dev/xvdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvdb1 1 1305 10482381 8e Linux LVM

Disk /dev/xvdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvdc1 1 1305 10482381 8e Linux LVM

Disk /dev/xvde: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/xvde1 1 1305 10482381 8e Linux LVM

6. Now let's initialize the new partitions as LVM physical volumes using the command below: -


[root@ctos5264a ~]# pvcreate /dev/xvdb1 /dev/xvdc1 /dev/xvde1
Physical volume "/dev/xvdb1" successfully created
Physical volume "/dev/xvdc1" successfully created
Physical volume "/dev/xvde1" successfully created

7. You can verify the above command by printing the current state of your physical volumes using the command below: -


[root@ctos5264a ~]# pvdisplay
--- Physical volume ---
PV Name /dev/xvda2
VG Name VolGroup00
PV Size 29.90 GB / not usable 24.06 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 956
Free PE 0
Allocated PE 956
PV UUID ruUiO0-p9bU-HmZW-rmr6-3PTd-o0C3-4IhxBM

"/dev/xvdb1" is a new physical volume of "10.00 GB"
--- NEW Physical volume ---
PV Name /dev/xvdb1
VG Name
PV Size 10.00 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID SKcT8y-ufAJ-q7mq-LveO-YktC-tAdj-I76V6Y

"/dev/xvdc1" is a new physical volume of "10.00 GB"
--- NEW Physical volume ---
PV Name /dev/xvdc1
VG Name
PV Size 10.00 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID oS2omJ-KI9S-pKe0-DXfJ-5OSh-YU8b-CQGls3

"/dev/xvde1" is a new physical volume of "10.00 GB"
--- NEW Physical volume ---
PV Name /dev/xvde1
VG Name
PV Size 10.00 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID VWAZFc-HUFz-65SZ-2oOE-AXrG-i0vD-U4Gj8K

8. Next, create the volume group "VolGroup01" and add /dev/xvdb1 /dev/xvdc1 /dev/xvde1 to "VolGroup01" as below: -


[root@ctos5264a ~]# vgcreate VolGroup01 /dev/xvdb1 /dev/xvdc1 /dev/xvde1
Volume group "VolGroup01" successfully created

9. You can verify the volume group created by running the command below: -

[root@ctos5264a ~]# vgdisplay

--- Volume group ---
VG Name VolGroup01
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 3
Act PV 3
VG Size 29.99 GB
PE Size 4.00 MB
Total PE 7677
Alloc PE / Size 0 / 0
Free PE / Size 7677 / 29.99 GB
VG UUID OuaiF9-eSCk-CcQ4-3aXF-8FoD-zzUK-afIOWb

--- Volume group ---
VG Name VolGroup00
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 29.88 GB
PE Size 32.00 MB
Total PE 956
Alloc PE / Size 956 / 29.88 GB
Free PE / Size 0 / 0
VG UUID eNFBwl-uedf-oODB-YFDG-Np7X-Lrt1-949pdv

10. You can also run the command below to verify the volume group created: -


[root@ctos5264a ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VolGroup01" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2

11. Next, create the logical volume "share1" with a size of 20 GB in the volume group "VolGroup01" using the command below: -


[root@ctos5264a ~]# lvcreate --name share1 --size 20G VolGroup01
Logical volume "share1" created
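
Because VolGroup01 is built from three 10 GB physical volumes, a 20 GB logical volume has to span more than one of them; the "Segments 3" value in the lvdisplay output in the next step reflects exactly that. If you want to see which physical volumes back the new logical volume, one way to check (a sketch; the output columns vary slightly between lvm2 versions) is:

[root@ctos5264a ~]# lvs -o +devices VolGroup01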

12. Now, print out the overview of the logical volumes using the command below: -


[root@ctos5264a ~]# lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup01/share1
VG Name VolGroup01
LV UUID MehBKd-sCqR-1jrl-yriX-QDU3-BP4I-qVc9LX
LV Write Access read/write
LV Status available
# open 0
LV Size 20.00 GB
Current LE 5120
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID DzwZu2-wmoH-2Bmu-eOoK-IlYZ-9OlC-R5TWGb
LV Write Access read/write
LV Status available
# open 1
LV Size 28.84 GB
Current LE 923
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID Zr53bA-djGf-Qdda-VsrX-7ewX-1RyY-6CapPj
LV Write Access read/write
LV Status available
# open 1
LV Size 1.03 GB
Current LE 33
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

13. You can also verify the logical volumes using the command below: -


[root@ctos5264a ~]# lvscan
ACTIVE '/dev/VolGroup01/share1' [20.00 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol00' [28.84 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.03 GB] inherit

14. Next, format the logical volume with ext3 filesystem using the command below: -


[root@ctos5264a ~]# mkfs.ext3 /dev/VolGroup01/share1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2621440 inodes, 5242880 blocks
262144 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000


Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done


This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

15. Create a mount directory using the command below: -

[root@ctos5264a ~]# mkdir /mnt/share1

16. Finally, let's mount the logical volume using the command below: -

[root@ctos5264a ~]# mount /dev/VolGroup01/share1 /mnt/share1
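
If you want the logical volume mounted automatically at boot, you can also add an entry to /etc/fstab. A minimal sketch of the extra line, using the ext3 filesystem and mount point created above:

/dev/VolGroup01/share1  /mnt/share1  ext3  defaults  1 2

You can test the entry with `mount -a` before the next reboot.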


REFERENCE

http://wingloon.com/2009/01/16/setup-additional-lvm-in-centos-52/comment-page-1/



New HDD, enlarging Red Hat/Centos ext3/lvm partition

SkyHi @ Monday, February 01, 2010

Suddenly, I’ve run out of space on one of my servers at home. Solution: add a new hard disk and extend the existing partition onto it. Simple, right? Right…

Firstly, fit the new hard disk into the machine. Then fdisk it like this:

# fdisk /dev/sdb

Create a new 'sdb1' partition using type 8e, which is Linux LVM.

Next, we need to create a Physical Volume within the newly created sdb1 partition.

# pvcreate /dev/sdb1

After that we will extend the existing volume group 'VolGroup00' onto the newly created physical volume.

# vgextend VolGroup00 /dev/sdb1

Once done, the next step is to extend the logical volume within the volume group to use the free space made available when you extended the volume group previously.

# lvextend -L 40G /dev/VolGroup00/LogVol00

And finally, we’ll enlarge the ext3 partition to make use of the newly available free space in the logical volume.

# ext2online /dev/VolGroup00/LogVol00

I hope you guys read the warning below before proceeding.

# man ext2online
WARNING
Note that resizing a mounted filesystem is inherently dangerous and may corrupt filesystems, although no errors resulting in data loss have ever been reported to the author. In theory online resizing should work fine with arbitrarily large filesystems, but it has not yet been tested by the author on a filesystem larger than 11GB. Use with caution. Backups are always a good idea, because your disk may fail at any time, you delete files by accident, or your computer is struck by a meteor.

It is a good idea to run 'e2fsck -f /dev/VolGroup00/LogVol00' before doing anything.


REFERENCE

http://www.maulvi.net/2007/12/16/new-hdd-enlarging-red-hatcentos-ext3lvm-partition/



Adding a physical disk to LVM in Redhat/CentOS

SkyHi @ Monday, February 01, 2010

Posted here for googlers and for my own future reference. Documentation pulled together from about 4 different sites. Could possibly be sub-titled: “Holy crap, the disk in my VMware installation is too small – it’s split up into 2GB files and using vmware to resize it seems like voodoo”

Problem:

My computer only has 20GB of disk space. I just have 1 partition. I want to add another disk (40GB). I don’t want to add another partition (and I really don’t want to reinstall the whole system), I want to increase the size of the root partition to 60GB. i.e. I want the root partition to span across two physical disks.

Solution:

  1. Add new physical disk. Boot.
  2. # pvscan

    This will show you the current physical volumes.

  3. # fdisk /dev/sdb

    Add the disk to your machine as a primary partition. Partition type: “8e (LVM)”. Obviously /dev/sdb may be different on your system.

  4. # pvcreate /dev/sdb1

    This creates a new physical LVM volume on our new disk.

  5. # vgextend VolGroup00 /dev/sdb1

    Add our new physical volume to the volume group: VolGroup00. Again, this group name may be different for you, but this is what Red Hat & CentOS assign by default when you install your system.

  6. # pvscan

    You should see the new physical volume assigned to VolGroup00.

  7. # lvextend -L+40G /dev/VolGroup00/LogVol00

    This increases the size of the logical volume our root partition resides in. Change the -L flag as appropriate.

We’ve just added 40GB to the logical volume used by the root partition. Sweet as. Now we need to resize the file system to utilize the additional space.

  1. Reboot into rescue mode using your CentOS CDROM.

    From memory this involves typing linux rescue as your boot option.

  2. When prompted, skip the mounting of system partitions.
  3. # lvm vgchange -a y

    This command makes your LVM volumes accessible.

  4. # e2fsck -f /dev/VolGroup00/LogVol00

    Run a file system check; the -f flag seems necessary. No idea what we do if it returns an error?

  5. # resize2fs /dev/VolGroup00/LogVol00

    Without any parameters resize2fs will just increase the file system to the max space available.

Reboot and your root partition is now 40GB larger, spanning multiple disks. Yay.
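
As an aside, the resize2fs shipped with CentOS 5 can usually grow a mounted ext3 filesystem online, so the rescue-CD detour is not always strictly necessary if a little risk is acceptable. A hedged sketch, assuming the same default VolGroup00/LogVol00 names (take a backup first, as always):

# lvextend -L+40G /dev/VolGroup00/LogVol00
# resize2fs /dev/VolGroup00/LogVol00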

REFERENCE
http://lucaschan.com/weblog/2007/06/29/adding-a-physical-disk-to-lvm-in-redhatcentos/

Tuesday, September 29, 2009

LVM Single Drive to LVM RAID 1 Mirror Migration

SkyHi @ Tuesday, September 29, 2009
Steps to migrate a running machine using LVM on a single drive to mirrored drives on Linux RAID 1 (mirror) and LVM. Keep the machine online while data is migrated across the LVM too!

This document was written based on a How-to article for Debian Etch (see references for original article). This version was tested using CentOS 5.3. It may work for other versions of CentOS and Linux using LVM on a single drive.

In my example the main drive is /dev/sda, where /dev/sda1 is an ext3 /boot partition and /dev/sda2 is our LVM PV. Our new hard drive is /dev/sdb and is the same model and size as /dev/sda. LVM naming may be different and I have outlined steps to get the PV, VG, and LV names of your specific system setup.

Remember to BACKUP YOUR DATA! I make no guarantees this will work and you should have all data backed up off of BOTH drives used in this tutorial to an external source that is disconnected from the machine when starting this tutorial.




1.
Insert a second drive

This step will vary between systems. In short, you need to put the new drive into the machine that will be part of the RAID mirror. The drive should be identical in size as the current one and if possible, the same model.

Throughout this document the new drive will be referenced as /dev/sdb.

2.
Print current partition layout for reference

We will need the current partition layout of /dev/sda (our original drive with LVM partition). Our example has only 2 partitions. /dev/sda1 (the boot partition) and /dev/sda2 (the LVM partition).

Print the current layout with the following command
# fdisk -l /dev/sda

3.
Partition the second drive

We now need to partition the second drive identically to the first drive. Use the starting cylinder and last cylinder of the first drive's partition layout to ensure the second drive is the same.

Partition the new drive with the following command:
# fdisk /dev/sdb

a. Press 'n' to create a new partition
b. Press 'p' for primary partition
c. Press '1' to create the first primary partition (this step might be automatically completed as there are no partitions yet)
d. Press '1' to start it at the first cylinder
e. Type in the number of the last cylinder for the original /dev/sda1 partition, in our example it is '13'
f. Type in 't' to set the partition type
g. Type in the partition number
h. Type in 'fd' for Linux RAID type
i. Perform sub-steps a-h for the second primary partition
j. Type in 'w' to write the changes
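
As an alternative to the keystrokes above, sfdisk can clone the partition layout of /dev/sda onto /dev/sdb in one go; you would then only need to change the partition types to fd with fdisk. This is only a sketch of a common shortcut, not part of the original procedure, and it overwrites the partition table on the target, so double-check the device names:

# sfdisk -d /dev/sda | sfdisk /dev/sdb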

4.
Compare the partition layout between the two drives

We should now have /dev/sdb partitioned with 2 Linux RAID partitions the same size as the /dev/sda partitions (which are likely not Linux RAID type, this is OK). Verify the partition sizes line up:

# fdisk -l

Our example system now outputs:

Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 1044 8281507+ 8e Linux LVM

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 13 104391 fd Linux raid autodetect
/dev/sdb2 14 1044 8281507+ fd Linux raid autodetect

5.
Create our new RAID mirror devices

Zero the superblock in case the new drive happened to be part of a linux software RAID before:

# mdadm --zero-superblock /dev/sdb1
# mdadm --zero-superblock /dev/sdb2

Create the new RAID devices:

# mdadm --create /dev/md0 --verbose --level=1 --raid-devices=2 missing /dev/sdb1
# mdadm --create /dev/md1 --verbose --level=1 --raid-devices=2 missing /dev/sdb2

We use the word 'missing' in place of the first drive, as we will add that drive to the array after we confirm the machine can boot and all the data is on the array (which will currently only have 1 drive, the newly added /dev/sdb).

6.
Build the /etc/mdadm.conf

There may already be a mdadm.conf in which case you may only need to add the arrays with the mdadm --examine line. In our example we did not have any /etc/mdadm.conf and needed to build one by hand.

The following commands will build a simple /etc/mdadm.conf that sends notifications to root (our example machine also has root aliased to an external email address).

# echo "DEVICE partitions" >> /etc/mdadm.conf
# echo "MAILADDR root" >> /etc/mdadm.conf
# mdadm --examine --scan >> /etc/mdadm.conf
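
After these three commands, /etc/mdadm.conf should look roughly like the sketch below. The UUIDs are placeholders here; yours will be whatever `mdadm --examine --scan` actually reported:

DEVICE partitions
MAILADDR root
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<uuid-of-md0>
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=<uuid-of-md1>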

7.
Format the new RAID boot partition

We can now format the RAID partition that will be used in place of our boot partition on /dev/sda1. In our example system the original partition is ext3 mounted as /boot.

Setup using command:
# mkfs.ext3 /dev/md0

8.
Mount and build the new boot partition

We need to copy over the existing boot partition to the new RAID device:

# mkdir /mnt/boot
# mount /dev/md0 /mnt/boot
# cp -dpRx /boot/* /mnt/boot/
# umount /mnt/boot

9.
Mount new boot partition in place of old

We will now unmount the current /boot partition and mount our new /dev/md0 device in place of it to prepare for making a new initrd image.

# umount /boot
# mount /dev/md0 /boot

10.
Build new initrd image

Now build a new initrd image that contains the dm-mirror and other RAID modules, to be safe (without these the boot will fail because it will not recognize the new /boot).

# mv /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -v /boot/initrd-$(uname -r).img $(uname -r)

11.
Install grub on both drives

Install grub on both physical drives' MBR and edit /etc/fstab to reflect our new boot partition /dev/md0

# grub-install /dev/sda
# grub-install /dev/sdb

If grub-install complains that it is not able to find /dev/sdb or that it is not part of the BIOS drives, then you may need to use the --recheck switch:

# grub-install --recheck /dev/sdb
# grub-install --recheck /dev/sda

12.
Edit /etc/fstab to reflect new /boot location

Edit the /etc/fstab file to replace LABEL=/boot with /dev/md0. Your new /boot partition line should look something like:

/dev/md0 /boot ext3 defaults 1 2

13.
Time for the first reboot!

Now, reboot the system to get it up and running on the new boot partition and off of /dev/sda1. This will confirm that we can safely change the /dev/sda1 partition ID to Linux RAID type and then add the partition to the md0 mirror.

# reboot

14.
Take a breather... Verify /boot is mounted using the RAID device

You are about half way done. Hopefully everything is up and running after your first reboot.

Check the mount command to verify that /boot is using the new /dev/md0 device:

# mount

If it is, then you are set to continue... after a short break of course.

15.
Change first drive /dev/sda1 to Linux RAID type

Modify the old /boot device (/dev/sda1) to be Linux RAID type. This will prepare it so it can be added to our RAID device /dev/md0 (which our new /boot is using).

# fdisk /dev/sda

a. Type in 't' to set the partition type
b. Type in the partition number (1)
c. Type in 'fd' for Linux RAID type
d. Type in 'w' to write the changes and exit fdisk

16.
Add /dev/sda1 to /dev/md0 mirror

Add the /dev/sda1 partition to the /dev/md0 mirror and remove the old label

# e2label /dev/sda1 ""
# mdadm --add /dev/md0 /dev/sda1

17.
Wait for the /dev/md0 mirror to rebuild

Watch the /proc/mdstat output to make sure the syncing of the newly completed /dev/md0 device finishes before continuing.

# watch cat /proc/mdstat

Once it is complete you may continue. Since the /boot partition is usually only a couple hundred megabytes this usually completes very quickly.
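
If you prefer a one-shot check over watching /proc/mdstat, `mdadm --detail` reports the array state directly; a small sketch:

# mdadm --detail /dev/md0

Look for "State : clean" with both member devices listed as active sync before moving on.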

18.
Create new LVM physical volume

Create a new LVM PV on the second RAID device. We will extend the current LVM volume group onto this new PV.

# pvcreate -v /dev/md1

19.
Extend volume group to new PV

Use command `vgs` to show the name of the LVM VG. In our example it is VolGroup00.

Run the following command replacing VolGroup00 with the actual name of your volume group. You may have to do this step for each VG if you have multiple:

# vgextend -v VolGroup00 /dev/md1

20.
Move logical volume to new PV

Use `pvdisplay` to verify the VG is now using both drives:

# pvdisplay

Now we can move the LV to the new PV:

Use `lvs` to get the LV names if you want to see what will be moved; the pvmove command below moves every extent on the old physical volume (all logical volumes on it) over to the new one:

# pvmove -v /dev/sda2 /dev/md1

Wait a while.... It will output % done every 15 seconds.

Depending on the size of your original drive this may take an hour or more. It is now copying everything from the first drive to the second drive while keeping the machine online.

21.
Remove the VG from the old PV

Once all the data is finished moving we want to 'reduce' the volume group to only use the new physical volume, and then remove the old physical volume so it can no longer be used for LVM.

# vgreduce -v VolGroup00 /dev/sda2
# pvremove /dev/sda2

22.
Build new initrd image (again)

Build the new initrd image again. This was needed on our example system to keep it from throwing a kernel panic at boot. We are being safe by doing this step again.

# mv /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -v /boot/initrd-$(uname -r).img $(uname -r)

23.
Install grub on both drives (again)

Double check that grub is installed correctly.

Somewhere between the Step 10 reboot and Step 24 reboot grub became corrupted (froze with only 'GRUB' on screen and no indicator) so I had to reinstall grub again on each drive to be safe. The fix was to install grub the traditional way using the grub commands root (hd0,1) and setup (hd0) and then the same for hd1.

# grub-install --recheck /dev/sdb
# grub-install --recheck /dev/sda
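
For reference, reinstalling grub "the traditional way" mentioned above looks roughly like the following in the grub shell. Note that grub numbers partitions from zero, so with /boot on the first partition of each disk the root device is (hd0,0) and (hd1,0); adjust to match your own layout:

# grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit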

24.
Reboot again!

Reboot the machine again:

# reboot

Did everything come back up OK? Great, we are almost done!

25.
Change first drive /dev/sda2 to Linux RAID type

Change the old LVM partition to Linux RAID type:

# fdisk /dev/sda

a. Type in 't' to set the partition type
b. Type in the partition number (2)
c. Type in 'fd' for Linux RAID type
d. Type in 'w' to write the changes and exit fdisk

Verify the partition type is now Linux RAID:

# fdisk -l

26.
Add /dev/sda2 to /dev/md1 mirror

We are now ready to add the old /dev/sda2 LVM partition to the /dev/md1 RAID device:

# mdadm --add /dev/md1 /dev/sda2

27.
Wait for the /dev/md1 mirror to rebuild

Watch the /proc/mdstat output to make sure the syncing of the newly completed /dev/md1 device finishes before rebooting.

# watch cat /proc/mdstat

28.
Last reboot!

Reboot the machine once more to verify that everything is working correctly:

# reboot

Is the machine back up and running? No data loss?

Wonderful! You have just completed migrating your single drive machine to a two drive Linux RAID mirror.

Conclusion

While it is always best to create RAID setups at the time of install, there may be a desire to add RAID after the fact when it was not required at the original install, the hardware was not available (only 1 drive), or the installer didn't have the know-how.

In a pinch, this may help to bring a production system some drive redundancy. I hope you found this tutorial helpful!


Reference: http://community.spiceworks.com/how_to/show/340