Thursday, October 20, 2011

How to forcibly close a socket in TIME_WAIT?

SkyHi @ Thursday, October 20, 2011

Let me elaborate. Transmission Control Protocol (TCP) is designed to be a bidirectional, ordered, and reliable data transmission protocol between two end points (programs). In this context, the term reliable means that packets will be retransmitted if they get lost in the middle. TCP guarantees reliability by sending Acknowledgment (ACK) packets for a single packet, or a range of packets, received from the peer.
The same goes for control signals such as the termination request/response. RFC 793 defines the TIME-WAIT state as follows:
TIME-WAIT - represents waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request.
See the TCP state diagram (figure omitted) for the transitions described below.
TCP is a bidirectional communication protocol, so once the connection is established, there is no difference between the client and the server. Either one can call it quits, and both peers need to agree on closing to fully close an established TCP connection.
Let's call the side that calls it quits first the active closer, and the other peer the passive closer. When the active closer sends FIN, its state goes to FIN-WAIT-1. When it receives an ACK for the sent FIN, the state goes to FIN-WAIT-2. Once it also receives a FIN from the passive closer, the active closer sends an ACK for that FIN and the state goes to TIME-WAIT. In case the passive closer did not receive the ACK to its FIN, it will retransmit the FIN packet.
RFC 793 sets the TIME-WAIT timeout to twice the Maximum Segment Lifetime, or 2MSL. Since MSL, the maximum time a packet can wander around the Internet, is set to 2 minutes, 2MSL is 4 minutes. Since there is no ACK to an ACK, the active closer can't do anything but wait those 4 minutes if it adheres to the TCP/IP protocol correctly, just in case the passive closer has not received the ACK to its FIN.
In reality, missing packets are probably rare, and very rare if it's all happening within the LAN or within a single machine.
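If you want to see how many sockets are actually sitting in TIME_WAIT on a Linux box, one way is to parse /proc/net/tcp, whose "st" column holds the TCP state in hex (06 is TIME_WAIT, per the kernel's tcp_states.h). This is a sketch of my own, not from the original answer; the function and constant names are made up for illustration:

```python
from collections import Counter

# States we care about here; any other hex code is reported as-is.
TCP_STATES = {"01": "ESTABLISHED", "06": "TIME_WAIT", "0A": "LISTEN"}

def count_states(proc_net_tcp_text: str) -> Counter:
    """Count sockets per TCP state from /proc/net/tcp-formatted text."""
    counts = Counter()
    for line in proc_net_tcp_text.splitlines()[1:]:  # first line is the header
        fields = line.split()
        if len(fields) > 3:  # fields[3] is the "st" (state) column
            counts[TCP_STATES.get(fields[3], fields[3])] += 1
    return counts

# On a live Linux machine:
try:
    with open("/proc/net/tcp") as f:
        print(count_states(f.read()))
except FileNotFoundError:
    pass  # not a Linux system
```

The equivalent one-liner with netstat would grep for TIME_WAIT, but parsing /proc directly avoids depending on which net-tools are installed.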
To answer the question verbatim, How to forcibly close a socket in TIME_WAIT?, I will still stick to my original answer:
/etc/init.d/networking restart
Practically speaking, I would program it to ignore the TIME-WAIT state using the SO_REUSEADDR option, as WMR mentioned. What exactly does SO_REUSEADDR do?
This socket option tells the kernel that even if this port is busy (in the TIME_WAIT state), go ahead and reuse it anyway. If it is busy with another state, you will still get an "address already in use" error. It is useful if your server has been shut down and then restarted right away while sockets are still active on its port. You should be aware that if any unexpected data comes in, it may confuse your server, but while this is possible, it is not likely.
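In C you would call setsockopt() before bind(); here is a minimal Python sketch of the same idea (the helper name is mine, not from the answer). The crucial detail is that SO_REUSEADDR must be set before bind(), or it has no effect:

```python
import socket

def make_listener(port: int) -> socket.socket:
    """Open a listening socket that can rebind to a port whose previous
    occupant is still in TIME_WAIT. The option must be set BEFORE bind()."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", port))
    s.listen(5)
    return s

srv = make_listener(0)       # port 0: let the kernel pick a free port
print(srv.getsockname()[1])  # the port actually bound
srv.close()
```

With this in place, a server restarted immediately after a crash will not die with "address already in use" just because its old connections are still draining through TIME_WAIT.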

As far as I know there is no way to forcibly close the socket outside of writing a better signal handler into your program, but there is a /proc file which controls how long the timeout takes. The file is

/proc/sys/net/ipv4/tcp_tw_recycle

and you can set the timeout to 1 second by doing this:

echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
However, be warned about possible reliability issues when setting this variable: fast recycling of TIME_WAIT sockets is known to break connections from clients behind NAT.

There is also a related file,

/proc/sys/net/ipv4/tcp_tw_reuse

which controls whether TIME_WAIT sockets can be reused (presumably without any timeout).

Incidentally, the kernel documentation warns you not to change either of these values without 'advice/requests of technical experts'. Which I am not.
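Before touching either knob it is worth checking what they are currently set to. A hedged sketch (the helper name is mine; note that tcp_tw_recycle was removed from the Linux kernel in 4.12, so it may simply be absent on newer systems):

```python
from typing import Optional

def read_sysctl(name: str) -> Optional[str]:
    """Return the value of an ipv4 sysctl, or None if it does not exist."""
    try:
        with open(f"/proc/sys/net/ipv4/{name}") as f:
            return f.read().strip()
    except FileNotFoundError:
        return None

for knob in ("tcp_tw_recycle", "tcp_tw_reuse"):
    print(knob, "=", read_sysctl(knob))
```

Reading through /proc is equivalent to `sysctl net.ipv4.tcp_tw_reuse`, just without depending on the sysctl binary.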

The program must have been written to attempt to bind to port 49200 and then increment by 1 if the port is already in use. Therefore, if you have control of the source code, you could change this behaviour to wait a few seconds and retry the same port, instead of incrementing.



SkyHi @ Thursday, October 20, 2011
# -------------------------------------------------------
# Some common sense basics to secure Debian Linux servers

# installing extra security packages

apt-get install denyhosts tiger rkhunter chkrootkit snort oinkmaster checksecurity logcheck logwatch fcheck logcheck-database syslog-summary tripwire

# after downloading and installing, build the tripwire database:

tripwire --init

# most of these tools send e-mail to root@localhost; make sure to redirect this to a working e-mail address:

echo "root: you@example.com" >> /etc/aliases   # replace with your real address
newaliases

# Download the 'sysctl.conf' provided here, place it in /etc and run:

wget -O /etc/sysctl.conf
sysctl -e -p /etc/sysctl.conf

# Download 'rc.iptables', save it to /etc/init.d, and edit it to open only the ports your server really needs; after that do:

wget -O /etc/init.d/rc.iptables
chmod 755 /etc/init.d/rc.iptables
update-rc.d rc.iptables defaults
/etc/init.d/rc.iptables start &

# Get automatic security updates

apt-get install cron-apt unattended-upgrades

# Do some virus scanning to make sure there are no unwanted files on your server system:

apt-get install clamav clamav-daemon clamav-freshclam
clamscan --infected --recursive --no-summary /

# You could also do this on a daily basis and add it as cronjob:

echo "13 5 * * * clamscan --infected --recursive --no-summary /" >> /var/spool/cron/crontabs/root

# remove or take away permissions of all system tools that can be used to download files at the command-line (like lynx and wget)

chmod 700 /usr/bin/wget /usr/bin/curl /usr/bin/GET /usr/bin/ftp /usr/bin/telnet
dpkg -P lynx links

# Search for other installations of these tools and remove or disable them for normal users

whereis wget curl GET links lynx ftp telnet

# Monitor your user cron-jobs and look for suspicious commands

cat /var/spool/cron/crontabs/*

# In case you do not want your users to use cron-jobs, you can disable them all (except for the root user) using the following commands

echo root > /etc/cron.allow
/etc/init.d/cron restart

# Let the server fix its filesystem automatically when errors are found

echo "FSCKFIX=yes" >> /etc/default/rcS

# --------------------------------------------------
# Adding webserver software specific security tweaks:

# use the Apache mod_security module

# use the suexec tool to limit permissions of CGI scripts

# use SuPHP to limit permissions of PHP scripts

# For PHP edit php.ini and set the following options:

allow_url_fopen = Off
allow_url_include = Off
register_globals = Off

# PHP safe_mode will add some extra limitations.
# Use the PHP option safe_mode = On, or disable a list of commonly abused PHP functions that are rarely used by legitimate PHP software packages:

disable_functions = dl,system,exec,passthru,shell_exec,proc_open,proc_get_status,proc_terminate,proc_close,dir,readfile,virtual,popen,curl_exec,curl_multi_exec,parse_ini_file,show_source

# install the PHP hardening patch, and you might also try the Suhosin patch available from the same website.

# set the PHP option open_basedir for every website, limiting each to its own user home-dir, to prevent PHP scripts from accessing other users and websites on the system.

# Your server is now a bit more secure, but you still have to keep an eye on your users and make sure they do not upload and use insecure/buggy/old software packages


Apache optimization: KeepAlive On or Off pros and cons?

SkyHi @ Thursday, October 20, 2011
Apache is the most widely used web server on the Internet. Knowing how to get the most out of Apache is very important for a systems administrator. Optimizing Apache is always a balancing act. It’s a case of sacrificing one resource in order to obtain savings in another.


What is KeepAlive?
HTTP is a sessionless protocol. A connection is made to transfer a single file and closed once the transfer is complete. This keeps things simple, but it's not very efficient.
To improve efficiency something called KeepAlive was introduced. With KeepAlive the web browser and the web server agree to reuse the same connection to transfer multiple files.

Advantages of KeepAlive

  • Improves website speed: It reduces latency associated with HTTP transfers and delivers a better user experience.
  • Reduces CPU usage: On the server side enabling KeepAlive reduces CPU usage. Consider that a typical web page has dozens of different files such as images, stylesheets, javascript files etc. If KeepAlive is disabled a separate connection must be made for each of those files. Creating and closing connections has an overhead and doing it for every single file wastes CPU time.

Disadvantages of Keepalive

  • Increases memory usage: Enabling KeepAlive  increases memory usage on the server. Apache processes have to keep connections open waiting for new requests from established connections. While they are waiting they are occupying RAM that could be used to service other clients. If you turn off KeepAlive fewer apache processes will remain active. This will lower memory usage and allow Apache to serve more users.

When should you enable KeepAlive?

Deciding whether to enable KeepAlive or not depends on a number of different factors:
  • Server resources: How much RAM and how much CPU power do you have? RAM is often the biggest limiting factor in a webserver. If you have little RAM you should turn off KeepAlive, because having Apache processes hanging around while they wait for more requests from persistent connections is a waste of precious memory. If CPU power is limited, then you want KeepAlive on because it reduces CPU load.
  • Types of sites: If you have pages with a lot of images or other files linked into them, KeepAlive will improve the user experience significantly. This is because a single connection will be used to transfer multiple files.
  • Traffic patterns: The type of traffic you get. If your web traffic is spread out evenly throughout a day then you should turn on KeepAlive. OTOH, if you have bursty traffic where a lot of concurrent users access your sites during a short time period KeepAlive will cause your RAM usage to skyrocket so you should turn it off.

Configure Apache KeepAlive settings

Open up apache’s configuration file and look for the following settings. On Centos this file is called httpd.conf and is located in /etc/httpd/conf. The following settings are noteworthy:
  • KeepAlive: Switches KeepAlive on or off. Put in “KeepAlive on” to turn it on and “KeepAlive off” to turn it off.
  • MaxKeepAliveRequests: The maximum number of requests a single persistent connection will service. A number between 50 and 75 would be plenty.
  • KeepAliveTimeout: How long should the server wait for new requests from connected clients. The default is 15 seconds which is way too high. Set it to between 1 and 5 seconds to avoid having processes wasting RAM while waiting for requests.
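Put together, the settings above might look like this in httpd.conf (the values are illustrative, following the ranges suggested in this article, not a universal recommendation):

```apache
KeepAlive On
MaxKeepAliveRequests 60
KeepAliveTimeout 3
```

After editing, reload Apache for the changes to take effect.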
Other settings
KeepAlive affects other settings in your Apache configuration file even though they are not directly related. Here are the settings in an Apache prefork MPM webserver:
  • MaxClients: MaxClients is the maximum number of child processes launched by Apache to service incoming requests. With KeepAlive enabled you will have a higher number of child processes active during peak times, so your MaxClients value may have to be increased.
  • MaxRequestsPerChild: The number of requests a child process will serve before it is killed and recreated. This is done to prevent memory leaks. When KeepAlive is turned on each persistent connection will count as one request. That effectively turns MaxRequestsPerChild into a maximum connections per child value. As a result you can set a lower MaxRequestsPerChild value if you allow KeepAlive. If you don’t allow KeepAlive you should increase the MaxRequestsPerChild value to prevent excessive CPU usage.

Final thoughts

There is no one universal solution to tuning Apache. It all depends on the resources at your disposal and the type of sites you have. When used properly KeepAlive can improve the user experience at minimal cost in terms of server resources. But it can also be a liability when you are faced with a large number of concurrent users.


What is the exact difference between a ‘terminal’, a ‘shell’, a ‘tty’ and a ‘console’?

SkyHi @ Thursday, October 20, 2011

A great answer by Gilles:
A terminal is at the end of an electric wire, a shell is the home of a turtle, tty is a strange abbreviation and a console is a kind of cabinet.
Well, etymologically speaking, anyway.
In unix terminology, the short answer is that
  • terminal = tty = text input/output environment
  • console = physical terminal
  • shell = command line interpreter

Console, terminal and tty are closely related. Originally, they meant a piece of equipment through which you could interact with a computer: in the early days of unix, that meant a teleprinter-style device resembling a typewriter, sometimes called a teletypewriter, or “tty” in shorthand. The name “terminal” came from the electronic point of view, and the name “console” from the furniture point of view. Very early in unix history, electronic keyboards and displays became the norm for terminals.
In unix terminology, a tty is a particular kind of device file which implements a number of additional commands (ioctls) beyond read and write. In its most common meaning, terminal is synonymous with tty. Some ttys are provided by the kernel on behalf of a hardware device, for example with the input coming from the keyboard and the output going to a text mode screen, or with the input and output transmitted over a serial line. Other ttys, sometimes called pseudo-ttys, are provided (through a thin kernel layer) by programs called terminal emulators, such as Xterm (running in the X Window System), Screen (which provides a layer of isolation between a program and another terminal), Ssh (which connects a terminal on one machine with programs on another machine), Expect (for scripting terminal interactions), etc.
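You can create a pseudo-tty yourself with Python's standard pty module. This short sketch (mine, not from the original answer) shows that the slave end of the pair really is a tty device file, indistinguishable to programs from a hardware-backed terminal:

```python
import os
import pty

# Create a pseudo-terminal pair: whatever is written to the master end
# appears as terminal input on the slave end, and vice versa.
master_fd, slave_fd = pty.openpty()

print(os.isatty(slave_fd))   # the slave answers "am I a tty?" with True
print(os.ttyname(slave_fd))  # its device file, e.g. /dev/pts/3

os.close(master_fd)
os.close(slave_fd)
```

This is exactly the mechanism terminal emulators like Xterm and multiplexers like Screen use: they hold the master end and hand the slave end to the shell.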
The word terminal can also have a more traditional meaning of a device through which one interacts with a computer, typically with a keyboard and display. For example an X terminal is a kind of thin client, a special-purpose computer whose only purpose is to drive a keyboard, display, mouse and occasionally other human interaction peripherals, with the actual applications running on another, more powerful computer.
A console is generally a terminal in the physical sense: by some definition, the primary terminal directly connected to a machine. The console appears to the operating system as a (kernel-implemented) tty. On some systems, such as Linux and FreeBSD, the console appears as several ttys (special key combinations switch between these ttys); just to confuse matters, the name given to each particular tty can be "console", "virtual console", "virtual terminal", and other variations.

A shell is the primary interface that users see when they log in, whose primary purpose is to start other programs. (I don’t know whether the original metaphor is that the shell is the home environment for the user, or that the shell is what other programs are running in.)
In unix circles, shell has specialized to mean a command-line shell, centered around entering the name of the application one wants to start, followed by the names of files or other objects that the application should act on, and pressing the Enter key. Other types of environments don’t use the word “shell”; for example, window systems involve “window managers” and “desktop environments”, not a “shell”.
There are many different unix shells, but the most common ones share a common syntax based on the Bourne shell. When discussing “shell programming”, the shell is almost always implied to be a Bourne-style shell.
In unix system administration, a user’s shell is the program that is invoked when they log in. Normal user accounts have a command-line shell, but users with restricted access may have a restricted shell or some other specific command (e.g. for file-transfer-only accounts).
The division of labor between the terminal and the shell is not completely obvious. Here are their main tasks.
  • Input: the terminal converts keys into control sequences (e.g. Left → \e[D). The shell converts control sequences into commands (e.g. \e[D → backward-char).
  • Line editing, input history and completion are provided by the shell.
    • The terminal may provide its own line editing, history and completion instead, and only send a line to the shell when it’s ready to be executed. The only common terminal that operates in this way is M-x shell in Emacs.
  • Output: the shell emits instructions such as “display foo”, “switch the foreground color to green”, “move the cursor to the next line”, etc. The terminal acts on these instructions.
  • The prompt is purely a shell concept.
  • The shell never sees the output of the commands it runs (unless redirected). Output history (scrollback) is purely a terminal concept.
  • Inter-application copy-paste is provided by the terminal (usually with the mouse or key sequences such as Ctrl+Shift+V or Shift+Insert). The shell may have its own internal copy-paste mechanism as well (e.g. Meta+W and Ctrl+Y).

Wednesday, October 19, 2011

Mac OS can't connect to SMB shares after sleep

SkyHi @ Wednesday, October 19, 2011

How to find a backdoor in a hacked WordPress

SkyHi @ Wednesday, October 19, 2011

Tuesday, October 18, 2011

Use terminal scrollbar with tmux and scrollback

SkyHi @ Tuesday, October 18, 2011

This is possible in both Screen and in tmux and the workaround is similar: to fool the multiplexers into thinking that the terminal has no "alternate screen" mode (such as that used by pico, mutt, etc). This is accomplished by setting termcap commands for the session. For GNU screen, put this in your .screenrc:
termcapinfo xterm*|xs|rxvt|terminal ti@:te@
and for tmux, add this to your .tmux.conf:
set -g terminal-overrides 'xterm*:smcup@:rmcup@'
The 'xterm*' part of the command should be set to whatever your terminal emulator declares itself as. Some form of xterm is a good guess, but you can check on most sane *nix systems with:
echo $TERM
and this can usually be set in the preferences of your terminal program (e.g. for Apple's Terminal.app, it's in Settings->Advanced->Emulation->"Declare terminal as").
The end result is that the overflow ends up in the terminal's scrollback buffer instead of disappearing. Of course, since this is one static buffer, things will get messy as you switch between screen or tmux windows, but this is handy for quickly flicking up to see the output of an ls command or the such.

Add the following to your ~/.screenrc:
termcapinfo xterm ti@:te@
termcapinfo xterm-color ti@:te@
This will let you use the scrollbar instead of relying on screen's scrollback buffer.

Scrollback for tmux
Press the tmux prefix (Ctrl-B by default; Ctrl-A if you have remapped it) followed by PageUp or PageDown.


Yum Command Examples – Install, Uninstall, Update Packages

SkyHi @ Tuesday, October 18, 2011

Installing, removing, and updating packages is a typical activity on Linux. Most Linux distributions provide some kind of package manager utility; for example, apt-get, dpkg, rpm, yum, etc.
On some Linux distributions, yum is the default package manager.
Yum stands for Yellowdog Updater, Modified.
This article explains the 15 most frequently used yum commands with examples.

1. Install a package using yum install

To install a package, do ‘yum install packagename’. This will also identify the dependencies automatically and install them.
The following example installs postgresql package.
# yum install postgresql.x86_64
Resolving Dependencies
Install       2 Package(s)
Is this ok [y/N]: y

Package(s) data still to download: 3.0 M
(1/2): postgresql-9.0.4-5.fc15.x86_64.rpm          | 2.8 MB     00:11
(2/2): postgresql-libs-9.0.4-5.fc15.x86_64.rpm    | 203 kB     00:00
Total                                        241 kB/s | 3.0 MB     00:12     

Running Transaction
  Installing : postgresql-libs-9.0.4-5.fc15.x86_64             1/2
  Installing : postgresql-9.0.4-5.fc15.x86_64                   2/2 

By default ‘yum install’, will prompt you to accept or decline before installing the packages. If you want yum to install automatically without prompting, use -y option as shown below.
# yum -y install postgresql.x86_64

2. Uninstall a package using yum remove

To remove a package (along with all its dependencies), use ‘yum remove package’ as shown below.
# yum remove  postgresql.x86_64
Resolving Dependencies
---> Package postgresql.x86_64 0:9.0.4-5.fc15 will be erased

Is this ok [y/N]: y

Running Transaction
  Erasing    : postgresql-9.0.4-5.fc15.x86_64       1/1 

  postgresql.x86_64 0:9.0.4-5.fc15


3. Upgrade an existing package using yum update

If you have an older version of a package, use ‘yum update package’ to upgrade it to the latest current version. This will also identify and install all required dependencies.
# yum update postgresql.x86_64

4. Search for a package to be installed using yum search

If you don’t know the exact package name to be installed, use ‘yum search keyword’, which will search for all packages matching the keyword and display them.
The following example searches the yum repository for all packages matching the keyword ‘firefox’ and lists them.
# yum search firefox
Loaded plugins: langpacks, presto, refresh-packagekit
============== N/S Matched: firefox ======================
firefox.x86_64 : Mozilla Firefox Web browser
gnome-do-plugins-firefox.x86_64 : gnome-do-plugins for firefox
mozilla-firetray-firefox.x86_64 : System tray extension for firefox
mozilla-adblockplus.noarch : Adblocking extension for Mozilla Firefox
mozilla-noscript.noarch : JavaScript white list extension for Mozilla Firefox

Name and summary matches only, use "search all" for everything.

5. Display additional information about a package using yum info

Once you search for a package using yum search, you can use ‘yum info package’ to view additional information about the package.
The following example displays additional information about the samba-common package.
# yum info samba-common.i686
Loaded plugins: langpacks, presto, refresh-packagekit
Available Packages
Name        : samba-common
Arch        : i686
Epoch       : 1
Version     : 3.5.11
Release     : 71.fc15.1
Size        : 9.9 M
Repo        : updates
Summary     : Files used by both Samba servers and clients
URL         :
License     : GPLv3+ and LGPLv3+
Description : Samba-common provides files necessary for both the server and client
            : packages of Samba.

6. View all available packages using yum list

The following command will list all the packages available in the yum database.
# yum list | less

7. List only the installed packages using yum list installed

To view all the packages that are installed on your system, execute the following yum command.
# yum list installed | less

8. Which package does a file belong to? – Use yum provides

Use ‘yum provides’ if you’d like to know which package a particular file belongs to. For example, if you’d like to know the name of the package that provides the /etc/sysconfig/nfs file, do the following.
# yum provides /etc/sysconfig/nfs
Loaded plugins: langpacks, presto, refresh-packagekit
1:nfs-utils-1.2.3-10.fc15.x86_64 : NFS utilities and supporting clients and
                                 : daemons for the kernel NFS server
Repo        : fedora
Matched from:
Filename    : /etc/sysconfig/nfs

1:nfs-utils-1.2.4-1.fc15.x86_64 : NFS utilities and supporting clients and
                                : daemons for the kernel NFS server
Repo        : updates
Matched from:
Filename    : /etc/sysconfig/nfs

1:nfs-utils-1.2.4-1.fc15.x86_64 : NFS utilities and supporting clients and
                                : daemons for the kernel NFS server
Repo        : installed
Matched from:
Other       : Provides-match: /etc/sysconfig/nfs

9. List available software groups using yum grouplist

In yum, several related packages are grouped together into a software group. Instead of searching for and installing all the individual packages that belong to a specific function, you can simply install the group, which installs all the packages that belong to it.
To view all the available software groups, execute ‘yum grouplist’ as shown below. The output is listed in three sections: Installed Groups, Installed Language Groups and Available Groups.
# yum grouplist

Installed Groups:
   Administration Tools
   Design Suite

Installed Language Groups:
   Arabic Support [ar]
   Armenian Support [hy]
   Bengali Support [bn]

Available Groups:
   Authoring and Publishing
   Books and Guides
   DNS Name Server
   Development Libraries
   Development Tools
   Directory Server
   Dogtag Certificate System

10. Install a specific software group using yum groupinstall

To install specific software group, use groupinstall option as shown below. In the following example, ‘DNS Name Server’ group contains bind and bind-chroot.
# yum groupinstall 'DNS Name Server'

Dependencies Resolved
Install       2 Package(s)
Is this ok [y/N]: y

Package(s) data still to download: 3.6 M
(1/2): bind-9.8.0-9.P4.fc15.x86_64.rpm             | 3.6 MB     00:15
(2/2): bind-chroot-9.8.0-9.P4.fc15.x86_64.rpm   |  69 kB     00:00
Total               235 kB/s | 3.6 MB     00:15

  bind-chroot.x86_64 32:9.8.0-9.P4.fc15

Dependency Installed:
  bind.x86_64 32:9.8.0-9.P4.fc15

Note: You can also install MySQL database using yum groupinstall as we discussed earlier.

11. Upgrade an existing software group using groupupdate

If you’ve already installed a software group using yum groupinstall, and would like to upgrade it to the latest version, use ‘yum groupupdate’ as shown below.
# yum groupupdate 'Graphical Internet'

Dependencies Resolved
Upgrade       5 Package(s)
Is this ok [y/N]: y   

Running Transaction
  Updating   : evolution-data-server-3.0.2-1.fc15.x86_64     1/10
  Updating   : evolution-3.0.2-3.fc15.x86_64                 2/10
  Updating   : evolution-NetworkManager-3.0.2-3.fc15.x86_64  3/10
  Updating   : evolution-help-3.0.2-3.fc15.noarch            4/10
  Updating   : empathy-3.0.2-3.fc15.x86_64                   5/10
  Cleanup    : evolution-NetworkManager-3.0.1-1.fc15.x86_64  6/10
  Cleanup    : evolution-help-3.0.1-1.fc15.noarch            7/10
  Cleanup    : evolution-3.0.1-1.fc15.x86_64                 8/10
  Cleanup    : empathy-3.0.1-3.fc15.x86_64                   9/10
  Cleanup    : evolution-data-server-3.0.1-1.fc15.x86_64     10/10 


12. Uninstall a software group using yum groupremove

To delete an existing software group use ‘yum groupremove’ as shown below.
# yum groupremove 'DNS Name Server'
Dependencies Resolved
Remove        2 Package(s)
Is this ok [y/N]: y

Running Transaction
  Erasing    : 32:bind-chroot-9.8.0-9.P4.fc15.x86_64  1/2
  Erasing    : 32:bind-9.8.0-9.P4.fc15.x86_64            2/2 


13. Display your current yum repositories

All yum commands go against one or more yum repositories. To view all the yum repositories that are configured on your system, do ‘yum repolist’ as shown below.
The following will display only the enabled repositories.
# yum repolist
repo id     repo name                        status
fedora      Fedora 15 - x86_64               24,085
updates     Fedora 15 - x86_64 - Updates     5,612
To display all the repositories (both enabled and disabled), use ‘yum repolist all’.
# yum repolist all
repo id                   repo name                                status
fedora                    Fedora 15 - x86_64                       enabled: 24,085
fedora-debuginfo          Fedora 15 - x86_64 - Debug               disabled
fedora-source             Fedora 15 - Source                       disabled
rawhide-debuginfo         Fedora - Rawhide - Debug                 disabled
rawhide-source            Fedora - Rawhide - Source                disabled
updates                   Fedora 15 - x86_64 - Updates             enabled:  5,612
updates-debuginfo         Fedora 15 - x86_64 - Updates - Debug     disabled
updates-source            Fedora 15 - Updates Source               disabled
updates-testing           Fedora 15 - x86_64 - Test Updates        disabled
updates-testing-debuginfo Fedora 15 - x86_64 - Test Updates Debug  disabled
updates-testing-source    Fedora 15 - Test Updates Source          disabled
To view only the disabled repositories, use ‘yum repolist disabled’.

14. Install from a disabled repository using yum --enablerepo

By default yum installs only from the enabled repositories. If for some reason you’d like to install a package from a disabled repository, use the --enablerepo option with ‘yum install’ as shown below.
# yum --enablerepo=fedora-source install vim-X11.x86_64
Dependencies Resolved
Install       1 Package(s)
Is this ok [y/N]: y

Running Transaction
  Installing : 2:vim-X11-7.3.138-1.fc15.x86_64   1/1 


15. Execute yum commands interactively using Yum Shell

Yum provides the interactive shell to run multiple commands as shown below.
# yum shell
Setting up Yum Shell
> info samba.x86_64
Available Packages
Name        : samba
Arch        : x86_64
Epoch       : 1
Version     : 3.5.11
Release     : 71.fc15.1
Size        : 4.6 M
Repo        : updates
Summary     : Server and Client software to interoperate with Windows machines
URL         :
License     : GPLv3+ and LGPLv3+
Description :
            : Samba is the suite of programs by which a lot of PC-related
            : machines share files, printers, and other information (such as
            : lists of available files and printers). The Windows NT, OS/2, and
            : Linux operating systems support this natively, and add-on packages
            : can enable the same thing for DOS, Windows, VMS, UNIX of all
            : kinds, MVS, and more. This package provides an SMB/CIFS server
            : that can be used to provide network services to SMB/CIFS clients.
            : Samba uses NetBIOS over TCP/IP (NetBT) protocols and does NOT
            : need the NetBEUI (Microsoft Raw NetBIOS frame) protocol.

Yum can also read commands from a text file and execute them one by one. This is very helpful when you have multiple systems: instead of executing the same commands on every system by hand, create a text file with those commands, and use ‘yum shell’ to execute them as shown below.
# cat yum_cmd.txt
info nfs-utils-lib.x86_64

# yum shell yum_cmd.txt 
repo id     repo name                        status
fedora      Fedora 15 - x86_64               24,085
updates     Fedora 15 - x86_64 - Updates     5,612

Available Packages
Name        : nfs-utils-lib
Arch        : x86_64
Version     : 1.1.5
Release     : 5.fc15
Size        : 61 k
Repo        : fedora
Summary     : Network File System Support Library
URL         :
License     : BSD
Description : Support libraries that are needed by the commands and
            : daemons of the nfs-utils rpm.

Leaving Shell

Monday, October 17, 2011

Reduce High CPU usage overload problem caused by MySql

SkyHi @ Monday, October 17, 2011

In recent years, many of us have faced "database overload" and "high CPU usage" problems. After benchmarking this issue, I reached the point where I found there were too many database connections, unnecessary query executions and unnecessary HTTP requests. So today I am keen to share my experience with you.

I ran into this problem when I built a penny auction site. If you have made a swoopo/madbid clone or a similar type of penny/live auction, then I'm sure you used an Ajax or jQuery function that is called every second, because recently updated data such as the winner's name, his bids and the auction countdown timer must be displayed without any page refresh. On the product details page, the 10 most recent bids must also be displayed without any refresh.

In this type of web development, what I realized is that there were too many database connections, unnecessary query executions and unnecessary HTTP requests. These were creating the database overload, and then CPU usage goes to 100%. Why?

Because a PHP page may be called every second through JavaScript to fetch the latest bidding information. Now suppose there are 1000 users on your website: 1000 database connections are created every second, and if you run 5 queries to get the updated bidding information, then 1000 * 5 = 5000 queries execute per second, which is a big problem for any server. This is what causes the high CPU usage, database overload and too-many-connections problems. And that is with each user on a single page; if they open 2 or 3 or more, imagine how many connections are opened and how many queries execute each second.
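The arithmetic above can be sketched as a quick back-of-the-envelope calculation (all figures are illustrative assumptions):

```python
# Rough estimate of the query load generated by per-second AJAX polling.
concurrent_users = 1000   # users with an auction page open
polls_per_second = 1      # the JavaScript timer fires once a second
queries_per_poll = 5      # queries run by the polled PHP script

queries_per_second = concurrent_users * polls_per_second * queries_per_poll
print(queries_per_second)  # 5000 queries hitting MySQL every second
```

Doubling the number of open pages per user doubles this figure, which is why the polled pages are the first place to look.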
Now the question is: how did I resolve these problems?

Well! First of all, we can eliminate most of the database connections by using file handling instead, and accordingly we need to maintain a few supporting functions.

Second, I fetch the current winning bidder information and bid history from the database only when a user places a bid, so only one connection happens, not many. The bid butler (autobid) feature also caused overload problems, so I removed that function from the frontend and handle it from a cron scheduler instead.
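A minimal sketch of that file-handling approach, assuming one small JSON state file per auction (the file name and field names are my own illustration, not the author's code):

```python
import json
import os
import tempfile

# Sketch: the latest bid state is written to a small JSON file when a bid
# is placed, and the once-a-second AJAX endpoint reads that flat file
# instead of opening a database connection.
STATE_FILE = os.path.join(tempfile.gettempdir(), "auction_42.json")

def record_bid(winner, price, history):
    """Called once per bid: the only moment the database is touched."""
    state = {"winner": winner, "price": price, "history": history[-10:]}
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def poll_state():
    """Called every second by AJAX: reads a flat file, no DB connection."""
    with open(STATE_FILE) as f:
        return json.load(f)

record_bid("alice", 1.05, [["alice", 1.05], ["bob", 1.04]])
print(poll_state()["winner"])  # alice
```

The database is now touched once per bid instead of once per poll per user.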

Now the main question is: how do you reduce the high CPU usage caused by MySQL?

Here are some points which will help you reduce high CPU usage or database overload problems.
  1. Establish a persistent connection: a persistent connection (mysql_pconnect) gives two major benefits over a regular MySQL connection. First, when connecting, the function first tries to find a (persistent) link that is already open with the same host, username and password; if one is found, an identifier for it is returned instead of opening a new connection. Second, the connection to the SQL server is not closed when the execution of the script ends. Instead, the link remains open for future use (mysql_close() will not close links established by mysql_pconnect()).
  2. Open the database connection once and close it once: make the database connection at the top of the page and close the connection at the bottom of the page.
  3. Change the storage engine from MyISAM to InnoDB: InnoDB supports newer features (transactions, row-level locking, foreign keys) and is built for high-volume, high-performance sites.
  4. Create temporary tables: the best place to use temporary tables is when you need to pull a bunch of data from multiple tables. In a penny auction you need them for the bid history, where only the last 10 bids and the latest winner information (auction price, winner name and auction end date) are displayed.
  5. Select only the columns you require: don't select all the values from all tables; select only the columns you actually need.
  6. Optimize database queries: optimizing your queries can help them run more efficiently, which can save a significant amount of time.
  7. Use the MySQL query cache: the query cache simply speeds up query performance, and speed is always the most important element in developing a website, especially a high-traffic database-driven one. Whenever the query cache is enabled, MySQL caches query results in memory and boosts query performance.
  8. Create indexes (single or composite) based on requirements: indexes allow data to be read from a table with comparatively faster execution time.
  9. Distribute cron scheduler load: never handle every function from one cron scheduler; try to distribute the work across multiple schedulers.
  10. Check your server configuration: a low-spec or poorly configured server can also cause high CPU usage or database overload (check whether query_cache_size and tmp_table_size have been set).
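Points 5 and 8 can be illustrated with Python's built-in sqlite3 module as a stand-in for MySQL (table and column names are illustrative):

```python
import sqlite3

# sqlite3 used here only for a self-contained demo; the same SQL ideas
# apply to MySQL. Amounts are stored as integer cents.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bids (id INTEGER, bidder TEXT, amount_cents INTEGER)")
conn.executemany("INSERT INTO bids VALUES (?, ?, ?)",
                 [(i, "user%d" % i, i) for i in range(1000)])

# Point 8: an index on the filtered column turns a full scan into a lookup.
conn.execute("CREATE INDEX idx_bids_bidder ON bids (bidder)")

# Point 5: fetch only the columns you need, not SELECT *.
row = conn.execute("SELECT bidder, amount_cents FROM bids WHERE bidder = ?",
                   ("user42",)).fetchone()
print(row)  # ('user42', 42)
```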


LAMP Server Tuning

SkyHi @ Monday, October 17, 2011

Getting the most from your Drupal site means getting the most from your server, optimizing the various layers of the LAMP stack. This includes the filesystem, database, web server, PHP, RAM and CPU. Tuning the LAMP stack is a major subject requiring a lot of study and practice to become proficient. It’s something you will probably never completely master :) Try Googling lamp performance tune for a few articles to whet your appetite. For now, we’ll cover a few of the major considerations for Drupal, although most of this advice would apply to any PHP web app running on Linux.
Server tuning considerations
Drupal documentation covering the basics.
Tuning LAMP systems, Part 1: Understanding the LAMP architecture
Intermediate article covering LAMP.
Tuning LAMP systems, Part 2: Optimizing Apache and PHP
Intermediate article covering Apache and PHP.
Tuning LAMP systems, Part 3: Tuning your MySQL server
Intermediate article covering MySQL.

Opcode cache

Opcode caches cache the compiled form of a PHP script in shared memory to avoid the overhead of parsing and compiling the code every time the script runs. This saves RAM and reduces script execution time.
Quite a bit of benchmarking has been done in the Drupal and PHP communities between APC, eAccelerator and XCache. eAccelerator may have the edge in raw performance, but APC appears to be the preferred opcode cache in the Drupal community because it is well maintained and less buggy.
All sites: faster and less RAM. Moderate install.
Drupal web server configurations compared
APC gives 2x to 4x increase in throughput under load. PHP5 is around 10% slower.
PHP op-code caches / accelerators: Drupal large site case study
Op-code caches are a must for large sites serving many pages.
Benchmarking APC vs. eAccelerator using Drupal
eAccelerator is faster and smaller than APC. Both offer around a 6x-7x speedup over plain PHP.
High PHP execution times for Drupal, and tuning APC for include_once() performance
Make sure apc.shm_size can fit the whole page else there will be no caching.
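The apc.shm_size advice translates into a php.ini fragment along these lines (a sketch with illustrative values; on older APC releases shm_size is a bare number of megabytes):

```ini
; Illustrative APC settings - tune shm_size so every compiled script fits.
extension = apc.so
apc.enabled = 1
apc.shm_size = 64        ; shared memory segment in MB (older APC syntax)
apc.stat = 1             ; re-check file mtimes; can be 0 on frozen deploys
```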


There are a number of choices to be made when tuning your MySQL database server. The MySQLTuner script can be helpful for identifying outstanding issues you may be unaware of. It can be run on a functioning production server to see how your database is performing in the wild. It’s possible to take a best guess at config options on your dev machine but you aren’t going to know how things are going to shape up until real users start hitting the DB.
Perl script which is able to report on the operation of your MySQL installation and offer suggestions as to what can be fixed.
Tuning MySQL Performance with MySQLTuner
Helpful tutorial.


A default install of Drupal 6 creates the DB tables as MyISAM. This will change in Drupal 7, with the default set to InnoDB. A Drupal 6 installation may well have some InnoDB tables, as modules may create new tables in the InnoDB engine. Your installation may therefore be a mix of the two engines.
In many places on the web you will read statements such as "all high performance Drupal sites run InnoDB". This is not necessarily so, as there are some cases where MyISAM may still be preferred, although with recent changes to Drupal core the pendulum has swung to InnoDB as a sensible default.
The main differences between the engines are as follows:
  • InnoDB is transactional (better integrity), MyISAM isn’t
  • InnoDB is more reliable (better recovery); MyISAM can be repaired
  • InnoDB has row level locking (better concurrency), MyISAM locks tables
  • InnoDB uses clustered indexes (faster access to data), MyISAM indexes just the keys
  • InnoDB has a bigger memory footprint
In general, you would consider sticking with MyISAM if
  • Memory footprint is an issue. If you have very big indexes which might only just fit into the key buffer then MyISAM could offer faster lookups.
  • Most activity is read only.
InnoDB tables definitely should be used for all of the Drupal cache tables since this is where most contention is likely to occur.
Finally, it must be noted that Drupal was written around the MyISAM engine, and as such many queries were not optimized for InnoDB. SELECT COUNT(*) is particularly slow in InnoDB because it must scan rows to calculate the count. Many of these shortcomings were addressed in the Pressflow distribution and have since made their way back into core.
All sites: InnoDB for less contention on cache
Most sites: InnoDB for everything else
Big unchanging sites: MyISAM for faster reads and less RAM
MySQL Engines: MyISAM vs. InnoDB
InnoDB is a good fit for many cases and “in most cases, InnoDB is the correct choice for a Drupal site”. Very good comparison between the two engines.
MySQL InnoDB: performance gains as well as some pitfalls
InnoDB does row level locking but lookup is slower for some slow queries. NB. Pressflow distribution fixes some slow InnoDB queries.
InnoDB vs MyISAM vs Falcon benchmarks – part 1
Myth that MyISAM is faster than InnoDB in all cases.
Which Tables can be converted to InnoDB
High Performance discussion emphasizing that InnoDB should definitely be used for cache tables and complex joins in CCK if memory allows.


There are a number of MySQL config variables which must be tweaked to suit your data. It is impossible to specify one set of options to suit all sites. A few rules of thumb are offered below.
Optimizing the mysqld variables
Clear article with some good rules of thumb for MySQL variables.


If you are running MyISAM tables then the key buffer is a very important variable to set. The key buffer stores table indexes in memory, allowing for fast lookups and joins. For large node, node_revisions and url_alias tables it is a must to have enough room to fit their indexes into memory, otherwise your site will be very slow on the most basic of operations: looking up nodes, titles and paths.
One rule of thumb is to set this buffer to somewhere between 25% and 50% of the memory on the server. To determine the best value up front, sum the size of all the .MYI files.
MyISAM sites: most queries faster. Essential.
Documentation on the use of key_buffer_size.
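That rule of thumb is easy to script; a sketch, assuming MySQL's common Linux data directory (check the datadir setting in your my.cnf):

```python
import os

def myisam_index_bytes(datadir):
    """Sum the sizes of all .MYI (MyISAM index) files under datadir,
    giving a starting point for key_buffer_size."""
    total = 0
    for root, _dirs, files in os.walk(datadir):
        for name in files:
            if name.endswith(".MYI"):
                total += os.path.getsize(os.path.join(root, name))
    return total

# /var/lib/mysql is an assumption: the default on many Linux distributions.
print(myisam_index_bytes("/var/lib/mysql"))
```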


MySQL has a query cache which stores results up to a certain size in memory. The cache is very handy for quickly returning commonly accessed data when all other forms of caching (reverse proxies, page cache, Drupal caches) have not been invoked. Queries which might otherwise take some time return almost instantly.
MySQL’s Query Cache
Covers config and operation of the query cache.
During the development and testing of a site, the query cache can catch developers out, since a query may appear to be performing quite well the second and subsequent times through. To really test a query you need to fire up the mysql client (or phpMyAdmin) and add the SQL_NO_CACHE option to the query to see the real time it takes. Don’t be fooled!
Query Cache SELECT Options
Documentation on the use of SQL_NO_CACHE.
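For example, in the mysql client (the node table is Drupal's; the WHERE clause is illustrative):

```sql
-- Bypass the query cache so the timing reflects the query's real cost.
SELECT SQL_NO_CACHE nid, title FROM node WHERE status = 1 LIMIT 10;
```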
Cached results for a table are invalidated whenever any row in that table changes, so the cache cannot be relied upon if tables change frequently. The cache shines when there are big tables which don’t change that often. Unless your site has such characteristics, it is best to limit the cache so that it fits the small unchanging tables, plus some room for the most popular queries. Examining cache hit rates will show you whether it needs to be extended or reduced.
Documentation on the use of query_cache_size.
All sites: common queries faster


If you are running InnoDB tables then it is essential to tune the InnoDB buffer pool size, increasing the memory allocated to it to reduce query time. InnoDB is more memory intensive, so the pool will be larger than the key buffer used for MyISAM tables. The MySQL documentation suggests that the size can be upped to 80% of physical memory; any more could lead to swapping.
InnoDB sites: most queries faster. Essential.
Documentation on the use of innodb_buffer_pool_size.
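As a sketch, on a dedicated 2GB database server the my.cnf entry might look like this (the figure is an illustrative assumption, not a recommendation for your hardware):

```ini
[mysqld]
# Up to ~80% of physical RAM on a dedicated DB box; here 2GB * 0.8 = 1600M.
innodb_buffer_pool_size = 1600M
```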


Other variables worth tweaking include the following. See Optimizing the mysqld variables for more.
  • table_cache
  • sort_buffer_size
  • read_rnd_buffer_size
  • tmp_table_size


A warm database will perform much better than a recently started one because its caches and buffers will be primed with keys and data. It therefore makes sense to warm up the DB every time the database is restarted. The best way to do this is to load in the indexes of commonly used tables. This guide recommends loading node, node_revisions and url_alias. Taxonomy tables could be good candidates as well.

USE drupal6;
LOAD INDEX INTO CACHE node, node_revisions, url_alias;
This SQL can then be put in a script and run when MySQL restarts. It is possible to configure the init_file variable in my.cnf to tell MySQL where to find the startup SQL.
init-file = /etc/mysql/init-file.sql
Many nodes: Most queries where index relied upon.
Index Preloading
How to use init_file variable to specify startup SQL.


Indexes on columns can dramatically speed up queries if the columns are used for filtering, sorting or joining. Generally, Drupal has most of the indexes you need covered; however, there are some areas where the standard tables can benefit from an additional index. It is recommended that you profile your queries to see where things are slow before adding indexes in a scattergun fashion, because indexes that are not used properly can actually harm performance. You can use MySQL’s slow query log (with logging of queries that use no index) to identify areas for improvement.
mikeytown2 has come up with a list of tables which could do with an index:
  • All CCK fields that you use in a view. File Field: create an index on the fid; date: index on date; index on value; etc…
  • access: type, mask, status
  • comments: timestamp
  • node_comment_statistics: comment_count
  • menu_links: external, updated, customized, depth
  • users: pass, status
  • menu_custom: title
  • date_format_types: title
  • filter_formats: roles
  • content_group: weight, type_name, group_name
  • term_data: name
  • system: name
  • imagecache_preset: presetname
  • blocks: module, delta
  • system: status, type
  • content_node_field: type, widget_type
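Two entries from the list above, expressed as MySQL DDL (index names are illustrative; profile before and after, and test on a copy of the database first):

```sql
-- comments: timestamp (used when sorting recent comments)
ALTER TABLE comments ADD INDEX comments_timestamp (timestamp);

-- users: pass, status (composite index covering both listed columns)
ALTER TABLE users ADD INDEX users_pass_status (pass, status);
```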

Web server

Apache + MPM Prefork + mod_php is the default web server configuration in the LAMP stack. This combination does consume large amounts of RAM which can be a problem for handling many requests. It can also be quite heavy and slow for serving static content. Many administrators have looked to replace it with other combinations including multithreaded processes (MPM Worker) and external PHP (mod_fcgid) as well as swapping it out completely for another server such as Nginx. This guide has adopted the position that Apache problems can be ameliorated somewhat by removing unneeded modules, running fcgid to connect with PHP and using MPM Worker to enable multithreading per process. However, in some cases this won’t be enough and Nginx is a must.


Other Drupal users have replaced Apache with faster, more lightweight (in RAM and CPU) web servers such as Nginx and Lighttpd. Nginx is generally preferred over Lighttpd because of memory leaks in the latter. It is currently possible to run Nginx without losing any functionality in Drupal. Boost, a module based on .htaccess rules, now supports Nginx, so it is feasible to run Nginx as the main web server. If you are constrained by CPU or have high loads then this is certainly an option worth considering.
Setting up Nginx is not trivial, but it is reasonably straightforward if you are comfortable with compiling and patching. There are some good tutorials on the Web for users who want to do this.
Low resources, High Traffic, Many logged in: Possible to get more for less with Nginx.
Apache vs Nginx : Web Server Performance Deathmatch
“Nginx seems to compete pretty well with Apache and there doesn’t seem like there is a good reason not to use it especially in CPU usage constrained situations (ie. huge traffic, slow machines and etc).”
In reply to kbahey: apache vs nginx
Discussion and results over the pros and cons of Nginx vs various Apache setups.
How to get Drupal working with Nginx
Simple guide for installing and configuring Nginx on a server with only 256MB RAM. Uses FastCGI, which may not be the preferred method.
NGINX + PHP-FPM + APC = Awesome
“The following guide will walk you through setting up possibly the fastest way to serve PHP known to man…In this article, we’ll be installing nginx http server, PHP with the PHP-FPM patches, as well as APC.”
PHP-FPM – A simple and robust FastCGI Process Manager for PHP
Preferred way of connecting Nginx with PHP. Currently in PHP core for 5.3.2+ but not yet released. Requires patch to PHP 5.2.


It is possible to turn off unneeded Apache modules to reduce the memory footprint. Which modules you require depends very much on your setup.
The traditional way of controlling modules in Apache has been the LoadModule directive in httpd.conf. Ubuntu and Debian do it differently, with the /etc/apache2/mods-available directory and the a2enmod command. To list the modules available to enable, run a2enmod with no arguments:

$ sudo a2enmod

$ sudo /etc/init.d/apache2 force-reload
And to see what you have enabled, you can run $ sudo a2dismod with no arguments.
All sites: Good savings in RAM
What Apache2 modules can be disabled?
Lists of modules which should be enabled in Apache2.


The use of MPM Worker allows more requests to be handled thanks to multithreading in each process. It has a smaller memory footprint than Prefork and is faster. According to the docs, Apache must be compiled with the --with-mpm argument in order to use Worker, as “prefork” is the default on Unix systems.
RAM limited: Worker preferable to Prefork.
Compile-Time Configuration Issues
“Choosing an MPM” section covers differences between the two models.
Multi-Processing Modules (MPMs)
Apache documentation on installation.
Installing Apache2 and PHP5 using mod_fcgid
Hey, you don’t have to recompile Apache. Tutorial on how to install MPM Worker using apt-get with Apache2 on Ubuntu. Just the ticket.
mpm-worker versus mpm-prefork, and mod_php versus fastcgi
PreFork and FastCGI is still a win if you find that Worker is unstable due to long downloads as this person did.


The use of mod_php with Apache is the most common setup for calling PHP. mod_php works by embedding PHP into every Apache process, which has the disadvantage of a large memory footprint for each process. FastCGI and mod_fcgid overcome this problem and reduce resource utilization with little or no loss in performance.
  • All PHP loaded into the process
  • Heavy process even if flat file
  • Many processes will hog RAM
Use mod_fcgid for lower memory and DB/Network connections
Drupal webserver configurations compared
The most common, Apache+mod_php is the slowest. Tests conducted with FastCGI which is faster. NB: FastCGI has subsequently suffered from stability issues.
Apache with fcgid: acceptable performance and better resource utilization
Informative article which comes out in favor of mod_fcgid over FastCGI and mod_php. This is the must read article if you wish to attempt fcgid.
Configure Apache for high performance on drupal 6
Some solid comments from kbahey from 2bits regarding stable setup: Apache, MPM Worker, fcgid, APC (code cache), memcache No SQL.


The MaxClients parameter controls how many simultaneous clients Apache can serve. If it is set too high, RAM will be chewed up and the machine will go into swap; if it is set too low, your site will be unnecessarily limited in the number of clients it can serve. The value should be determined after considering (i) how much spare RAM is available on the server and (ii) how much RAM each Apache process consumes. Obviously you will want to maximize available RAM through frugal allocation to MySQL, the JVM, etc., and minimize the size of each Apache process through the techniques described above.
2bits provide the following formula:
MaxClients = (Total Memory - Operating System Memory - MySQL memory) / Size Per Apache process.
The only addition this guide would make is that it is important to leave some RAM free for the OS file buffer to allow efficient operation of the OS.
Tuning the Apache MaxClients parameter
How to set MaxClients param.
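2bits' formula, plus this guide's allowance for the OS file buffer, worked through in a short script (every memory figure is an illustrative assumption for a 4GB box; measure your own Apache process size with ps or top):

```python
# MaxClients = (total - OS - MySQL - file cache reserve) / per-process size.
total_ram_mb      = 4096
os_ram_mb         = 256    # kernel plus resident daemons (assumed)
mysql_ram_mb      = 1024   # buffers and caches configured in my.cnf (assumed)
file_cache_mb     = 512    # reserve for the OS file buffer (this guide's addition)
apache_process_mb = 32     # measured per-process size on your own box (assumed)

max_clients = (total_ram_mb - os_ram_mb - mysql_ram_mb
               - file_cache_mb) // apache_process_mb
print(max_clients)  # 72
```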


If you are running Apache then you can specify directives such as rewrite rules either in .htaccess or in the Apache conf file. If you use .htaccess then Apache must look for .htaccess rules in every directory of the hierarchy for every request, which can take some time even if no rules are found. Consider putting the rules in httpd.conf/apache2.conf if you are looking to eke out the most performance from your site.
.htaccess can slow down site if performance is crucial.
.htaccess vs httpd.conf
Evidence that .htaccess can slow a site by 6.6%.
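Moved into the virtual host, Drupal 6's main rewrite rule looks roughly like this (the path is an assumption); with AllowOverride None, Apache skips the per-directory .htaccess lookups entirely:

```apache
<Directory /var/www/drupal>
    # Rules copied in from Drupal's .htaccess; per-request lookups disabled.
    AllowOverride None
    RewriteEngine on
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
</Directory>
```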

RAM: A precious resource

Given the above, serious thought should be given to how the RAM on your box is divided up. In a nutshell, the following apps are contending for their fair share:
  • The JVM if you are running Solr
  • MySQL query cache and key buffers
  • Apache processes for client requests
  • PHP if it runs outside Apache
  • Memcached for holding Drupal caches
  • The file system cache
Consider the following when deciding how to divide up your box:
  • The JVM needs a certain amount or else Solr will crash.
  • MySQL really should have indexes buffered for MyISAM and InnoDB. Use MySQLTuner. If they can’t fit then buy more RAM or (i) reduce max clients and (ii) forget about CacheRouter.
  • Apache MaxClients should be set so that its processes fit within the remaining RAM.
  • The file system cache needs to be big enough to allow smooth running of system.

This article forms part of a series on Drupal performance and scalability. The first article in the series is Squeezing the last drop from Drupal: Performance and Scalability.