Wednesday, October 24, 2012

Partitioning and Formatting Second Hard Drive in Linux - (ext3 and ext4)

SkyHi @ Wednesday, October 24, 2012

Recently, I bought an external hard disk formatted with NTFS. Not that there is anything really wrong with NTFS, but I prefer using ext4.
First, I deleted the existing partition and created a new Linux partition using fdisk:
# fdisk /dev/sdb
Assuming /dev/sdb is the external hard disk, use d to delete the existing partition and n to create a new one. 83 is the partition type ID for a native Linux partition.
Then, I used mkfs.ext4 to format the partition with ext4:
# mkfs.ext4 /dev/sdb1
Note that mkfs.ext4 expects a partition as its argument.
Finally, I used tune2fs to adjust a couple of optional parameters:
# tune2fs -m 0 /dev/sdb1
# tune2fs -L bakap01 /dev/sdb1
The -m option adjusts the percentage of blocks reserved for privileged processes, which is 5% of the filesystem by default. Since I'm using the external hard disk solely for storage, I set it to 0 so those 5% are available for data as well. The -L option sets the filesystem label.
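If you want to try these mkfs.ext4 and tune2fs options without touching a real disk, you can experiment on a small file-backed image first. This is just a sketch: /tmp/test.img and the bakap01 label are illustrative, and mkfs.ext4/tune2fs come from e2fsprogs.

```shell
# Create a 64 MiB scratch image (no root needed for a plain file)
dd if=/dev/zero of=/tmp/test.img bs=1M count=64

# -F is needed because the target is a regular file, not a block device
mkfs.ext4 -F /tmp/test.img

# Same options as above: no reserved blocks, label "bakap01"
tune2fs -m 0 /tmp/test.img
tune2fs -L bakap01 /tmp/test.img

# Verify the result
tune2fs -l /tmp/test.img | grep -E 'Reserved block count|volume name'
```

Everything here except the final formatting of a real partition behaves the same on /dev/sdb1, so it is a cheap way to rehearse the commands.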


This article presents the commands used to partition and format a second hard drive in Linux using the ext3 file system. For this example, I installed a second hard drive in a Red Hat Linux system, where the drive is recognized as /dev/hdb. I want to make only one partition on this drive, which will be /dev/hdb1.


First, you will need to run the fdisk command in order to partition the disk. For this example, I only want to create one ext3 partition. Here is an example session:
[root@racnode1 ~]# fdisk /dev/hdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 4865.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
Partition number (1-4): 1
First cylinder (1-4865, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-4865, default 4865): 4865

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 83

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create ext3 File System

The next step is to create an ext3 file system on the new partition. Provided with the distribution is a script named /sbin/mkfs.ext3. Here is an example session using the mkfs.ext3 script:
[root@racnode1 ~]# mkfs.ext3 -b 4096 /dev/hdb1
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
4889248 inodes, 9769520 blocks
488476 blocks (5.00%) reserved for the super user
First data block=0
299 block groups
32768 blocks per group, 32768 fragments per group
16352 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
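As the message says, those periodic checks can be overridden with tune2fs. For example, to disable both the mount-count and the time-interval checks (a sketch; whether you actually want this depends on how much you trust the hardware):

```shell
# -c 0 disables the maximum-mount-count check,
# -i 0 disables the time-interval check
tune2fs -c 0 -i 0 /dev/hdb1
```

`tune2fs -l /dev/hdb1` will then report a maximum mount count of -1 and a check interval of 0.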

Mounting the File System

Now that the new hard drive is partitioned and formatted, the last step is to mount it. For this example, I will mount the new drive on the directory /db.
You will first need to create the /db directory before mounting the disk! (e.g. mkdir /db)
Edit the /etc/fstab file and add an entry for the new drive. For my example, I added the /dev/hdb1 entry as follows:
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
none                    /proc                   proc    defaults        0 0
none                    /dev/shm                tmpfs   defaults        0 0
/dev/hdb1               /db                     ext3    defaults        1 1
/dev/hda2               swap                    swap    defaults        0 0
/dev/cdrom              /mnt/cdrom              iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0                /mnt/floppy             auto    noauto,owner,kudzu 0 0
After making the entry in the /etc/fstab file, it is now just a matter of mounting the disk:
[root@racnode1 ~]# mount /db

[root@racnode1 ~]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda3             37191660  11016692  24285724  32% /
/dev/hda1               101089     12130     83740  13% /boot
none                    515524         0    515524   0% /dev/shm
/dev/hdb1             38464340     32828  36477608   1% /db
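Device names like /dev/hdb1 can change if drives are re-ordered, so the fstab entry can also reference the filesystem by label or UUID, as the LABEL= lines above already do for / and /boot. A sketch using blkid (part of util-linux) to find the identifiers; the label and UUID values are whatever your filesystem actually has:

```shell
# Print the label and UUID of the new filesystem
blkid /dev/hdb1

# The fstab entry can then use either form instead of the device name:
#   LABEL=<label-from-blkid>  /db  ext3  defaults  1 2
#   UUID=<uuid-from-blkid>    /db  ext3  defaults  1 2
```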

For reference, the equivalent ext4 steps from my shell history:
  dmesg
  df -h
  fdisk -l
  fdisk /dev/sdb    (n -> p -> accept defaults)
  mkfs.ext4 /dev/sdb1
  mount -t ext4 /dev/sdb1 /opt


Monday, October 22, 2012

Hard Disk: BadCRC errors from dma_intr on bootup

SkyHi @ Monday, October 22, 2012

If DMA is enabled on a controller that is not well supported, these errors can appear. (I had this on a VIA KT266A with kernel 2.2; upgrading to kernel 2.4 fixed it beautifully.)
If you are sure the IDE controller is supported, the drive is on its way out. You can run fsck with the badblock option turned on to mark these blocks as bad. As a rule, once these errors start, we throw the disk away (this is a high-availability production environment).
If you don't mind that the disk may crash in the near future, make a backup and continue using it; it might work for a long time to come.
If the disk is under warranty, take it back; it is not worth risking data loss if the drive can be replaced for free.
This is how you hunt for and fix bad blocks:
# e2fsck -c /dev/hda1
Make sure you have a backup; a badblocks scan run with certain switches can destroy data.
# man badblocks && man e2fsck (and read them carefully)
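A safer first step is a plain read-only scan with badblocks itself, which reads every block but modifies nothing. A sketch (the -s and -v flags just add progress and verbose output):

```shell
# Read-only scan: prints the numbers of any bad blocks found,
# changes nothing on the disk
badblocks -sv /dev/hda1
```

A healthy partition produces no output beyond the progress display; only once you have a backup should you move on to e2fsck -c or the destructive write-mode tests described in the man pages.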
To turn off DMA per drive:
# hdparm -d0 /dev/hd[a-d]
To list DMA settings:
# hdparm -d /dev/hd[a-d]
To turn DMA on:
# hdparm -d1 /dev/hd[a-d]
Where hd[a-d] means hda, hdb, hdc, or hdd.