There comes a time when you set up a VM and don’t make the hard disk big enough for what you are going to do. I hit that problem just the other day: I needed to increase the size of my home directory on an Ubuntu Lucid LTS VM before I ran out of space.
The method I used is to add a second disk to the VM and mount it to free up space. In my case I decided to create a new hard disk and mount it at /home.
Note: be very careful doing the following – it goes without saying that following these instructions is at your own risk.
Adding the hard-disk to the virtual machine
Add the disk to your VM through the route specific to your virtualisation software. In my case I’m using VirtualBox, but there will be a similar method for VMware, Parallels etc. Note: you’ll need to stop your VM to do this.
Here are the steps in VirtualBox:
- Open the Virtual Media Manager using Ctrl-D or File => Virtual Media Manager
- Under the hard disks tab click “New”
- Follow the wizard
- Go back to the VM settings, select Storage and click “Add Hard Disk” (to the far right of IDE Controller)
- The first disk in the list is selected. Use the drop-down on the right to select the new disk you created
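If you prefer the command line, the same two steps can be scripted with VBoxManage. A minimal sketch – the VM name (“lucid”), disk path, size and controller name below are placeholders for your own setup, and the snippet is a no-op on machines without VirtualBox installed:

```shell
# Create a 20 GB dynamically-allocated disk and attach it to the VM's IDE
# controller. VM name, file name and controller name are assumptions --
# check yours with 'VBoxManage list vms' and 'VBoxManage showvminfo <vm>'.
if command -v VBoxManage >/dev/null 2>&1; then
  VBoxManage createhd --filename ~/newdisk.vdi --size 20480
  VBoxManage storageattach "lucid" --storagectl "IDE Controller" \
    --port 1 --device 0 --type hdd --medium ~/newdisk.vdi
  status="disk attached"
else
  status="VBoxManage not found; run this on the VirtualBox host"
fi
echo "$status"
```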
Partitioning and formatting the new drive
All of the following commands are carried out as root
In my case I’ve run out of space in my home directory, so I’m going to move my home data to the new drive. To do that, as root, I’ve renamed /home to /home.bck:
mv /home{,.bck}
If you’re wondering what the curly braces are, this is just bash shorthand for mv /home /home.bck – the shell expands that syntax into the full version before running the command. If this is confusing then just run:
mv /home /home.bck
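A handy way to see what the shell will actually run is to stick an echo in front of the command first:

```shell
# Brace expansion happens in the shell before mv ever runs; echoing the
# command shows exactly what would be executed (bash syntax).
bash -c 'echo mv /home{,.bck}'
# prints: mv /home /home.bck
```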
Once rebooted you should see that you have an unpartitioned hard drive ready to go. To check, run this from the terminal:
# fdisk -l
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical / optimal IO): 512 bytes / 512 bytes
Disk identifier: 0x000443b0
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         993     7976241   83  Linux
/dev/sda2             994        1044      409657+   5  Extended
/dev/sda5             994        1044      409626   82  Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical / optimal IO): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
To fix this we first need to partition the new drive.
# fdisk /dev/sdb
Here’s the options we need to partition the new disk:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-2610, default 2610):
Using default value 2610
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
Now let’s see what we’ve got:
# fdisk -l
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical / optimal IO): 512 bytes / 512 bytes
Disk identifier: 0x000443b0
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1         993     7976241   83  Linux
/dev/sda2             994        1044      409657+   5  Extended
/dev/sda5             994        1044      409626   82  Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical / optimal IO): 512 bytes / 512 bytes
Disk identifier: 0x8df11d60
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        2610    20964793+  83  Linux
Good, so we now have a partition at /dev/sdb1 that we can format with our chosen filesystem.
To do this run:
# mkfs -t ext4 /dev/sdb1
mke2fs 1.41.10 (10-Feb-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5241198 blocks
262059 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
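As with the partitioning, the formatting step can be rehearsed against a scratch image file first (assuming e2fsprogs is installed; the file name is made up for the demo):

```shell
# mkfs.ext4 needs -F because the target isn't a block device, and -q
# silences the statistics shown above. tune2fs -l then reads back the
# superblock to confirm the filesystem was actually created.
truncate -s 64M fs.img
mkfs.ext4 -q -F fs.img
tune2fs -l fs.img | head -n 5
```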
Great, now all we need to do is mount our fresh filesystem. In my case I’m going to mount it at /home
as I’ve already moved the original /home directory out of the way.
mount /dev/sdb1 /home
Great, now we can move all of our existing data across to the new home directory.
cp -a /home.bck/* /home
The -a flag is the archive flag: it preserves timestamps, ownership and permissions, and makes the copy recursive so it will descend into directories and copy all of your data. (Note that * won’t match hidden files sitting directly under /home.bck; dotfiles inside each user’s home directory are still copied.)
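A quick way to convince yourself that -a really preserves metadata is to copy a back-dated file and compare modification times (GNU touch/stat assumed; the src/dst names are made up for the demo):

```shell
# Create a file with an old timestamp, copy it with -a, and show that the
# modification time survives the copy.
mkdir -p src dst
touch -d '2010-03-30' src/file
cp -a src/file dst/file
stat -c '%n %Y' src/file dst/file
```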
Lastly we need to make this whole process survive a reboot. To do that we need to edit /etc/fstab
so that the new disk is mounted at boot.
The newer way to do this is to refer to the disk by its UUID, e.g.:
# /dev/sdb1
UUID=5663ed29-5992-43b1-a13b-6cd8ac98e434 /home ext4 relatime,errors=remount-ro 0 2
If you want to do that, you can use vol_id -u /dev/sdb1 or blkid (Karmic and newer) to find out the UUID of a particular drive.
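blkid works on image files as well as real devices, so you can see where a UUID comes from without a spare disk – a throwaway demo, assuming mkfs.ext4 and blkid are available:

```shell
# mkfs.ext4 generates a random UUID when it creates the filesystem;
# blkid reads it back out of the superblock.
truncate -s 64M uuid-demo.img
mkfs.ext4 -q -F uuid-demo.img
blkid -s UUID -o value uuid-demo.img
```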
That said, this should work just fine:
/dev/sdb1 /home ext4 relatime,errors=remount-ro 0 2
Now reboot and check all is well (running mount -a first is a quick way to catch typos in the new fstab entry before you reboot). Once you are satisfied everything is fine, simply remove the old data, which in my case is as simple as rm -fr /home.bck.
REFERENCES
http://muffinresearch.co.uk/archives/2010/03/30/adding-more-disk-space-to-a-linux-virtual-machine/