Friday, September 10, 2010

Configuring Commonly Used IP ACLs

SkyHi @ Friday, September 10, 2010

Document ID: 26448


Components Used
Configuration Examples
  • Allow a Select Host to Access the Network
  • Deny a Select Host to Access the Network
  • Allow Access to a Range of Contiguous IP Addresses
  • Deny Telnet Traffic (TCP, Port 23)
  • Allow Only Internal Networks to Initiate a TCP Session
  • Deny FTP Traffic (TCP, Port 21)
  • Allow FTP Traffic (Active FTP)
  • Allow FTP Traffic (Passive FTP)
  • Allow Pings (ICMP)
  • Allow HTTP, Telnet, Mail, POP3, FTP
  • Allow DNS
  • Permit Routing Updates
  • Debug Traffic Based on ACL
  • MAC Address Filtering
Related Information


This document provides sample configurations for commonly used IP
Access Control Lists (ACLs), which filter IP packets based on:

  • Source address
  • Destination address
  • Type of packet
  • Any combination of these items
In order to filter network traffic, ACLs control whether routed packets
are forwarded or blocked at the router interface. Your router examines each
packet to determine whether to forward or drop the packet based on the criteria
that you specify within the ACL. ACL criteria include:

  • Source address of the traffic
  • Destination address of the traffic
  • Upper-layer protocol

Complete these steps to construct an ACL as the examples in this
document show:

  1. Create an ACL.
  2. Apply the ACL to an interface.
The IP ACL is a sequential collection of permit and deny conditions
that applies to an IP packet. The router tests packets against the conditions
in the ACL one at a time.

The first match determines whether the Cisco IOS® Software accepts or
rejects the packet. Because the Cisco IOS Software stops testing conditions
after the first match, the order of the conditions is critical. If no
conditions match, the router rejects the packet because of an implicit deny all clause.

These are examples of IP ACLs that can be configured in Cisco IOS Software:

  • Standard ACLs
  • Extended ACLs
  • Dynamic (lock and key) ACLs
  • IP-named ACLs
  • Reflexive ACLs
  • Time-based ACLs that use time ranges
  • Commented IP ACL entries
  • Context-based ACLs
  • Authentication proxy
  • Turbo ACLs
  • Distributed time-based ACLs
This document discusses some commonly used standard and extended ACLs.
Refer to
IP Access Lists
for more information on different types of ACLs
supported in Cisco IOS Software and how to configure and edit ACLs.

The command syntax format of a standard ACL is:

access-list access-list-number {permit | deny} {host address | source source-wildcard | any}

Standard ACLs control traffic by comparing the
source address of the IP packets to the addresses configured in the ACL.

Extended ACLs control traffic by comparing the
source and destination addresses of the IP packets to the addresses configured
in the ACL. You can also make extended ACLs more granular and configure them to
filter traffic by criteria such as:

  • Protocol
  • Port numbers
  • Differentiated services code point (DSCP) value
  • Precedence value
  • State of the synchronize sequence number (SYN)
The command syntax formats of extended ACLs are:

Internet Protocol (IP)
access-list access-list-number [dynamic dynamic-name [timeout minutes]]
{deny | permit} protocol source source-wildcard destination destination-wildcard
[precedence precedence] [tos tos] [log | log-input]
[time-range time-range-name] [fragments]

Internet Control Message Protocol (ICMP)
access-list access-list-number [dynamic dynamic-name [timeout minutes]] 
{deny | permit}
icmp source source-wildcard destination destination-wildcard [icmp-type
[icmp-code] | [icmp-message]] [precedence precedence] [tos tos] [log |
log-input] [time-range time-range-name] [fragments]

Transmission Control Protocol (TCP)
access-list access-list-number [dynamic dynamic-name [timeout minutes]] 
{deny | permit} tcp
source source-wildcard [operator [port]] destination destination-wildcard
[operator [port]] [established] [precedence precedence] [tos tos] [log |
log-input] [time-range time-range-name] [fragments]

User Datagram Protocol (UDP)
access-list access-list-number [dynamic dynamic-name [timeout minutes]] 
{deny | permit} udp
source source-wildcard [operator [port]] destination destination-wildcard
[operator [port]] [precedence precedence] [tos tos] [log | log-input]
[time-range time-range-name] [fragments]

Refer to
IP Services Commands for the ACL command reference.



Ensure that you meet this requirement before you attempt this configuration:

  • Basic understanding of IP

Refer to
IP Addressing and Subnetting for New Users for additional information.

Components Used

This document is not restricted to specific software and hardware versions.


Refer to
Technical Tips Conventions
for more information on document conventions.

Configuration Examples

These configuration examples use the most common IP ACLs.

Note: Use the Lookup Tool to find more information on the commands used in this document.

Allow a Select Host to Access the Network

This figure shows a select host being granted permission to access the
network. All traffic sourced from Host B destined to NetA is permitted, and all
other traffic sourced from NetB destined to NetA is denied.


The configuration on R1 shows how the network grants access to the
host. This output shows that:

  • The configuration allows only Host B through the Ethernet 0 interface on R1.

  • This host has access to the IP services of NetA.

  • No other host in NetB has access to NetA.

  • No deny statement is configured in the ACL.

By default, there is an implicit deny all clause at the end of every
ACL. Anything that is not explicitly permitted is denied.


hostname R1
interface ethernet0
ip access-group 1 in
access-list 1 permit host

Note: The ACL filters IP packets from NetB to NetA, except packets sourced
from Host B. Packets destined to Host B from NetA are still permitted.

Note: An access-list 1 permit statement that specifies the host address with a
0.0.0.0 wildcard mask is another way to configure the same rule.
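For reference, a complete version of this configuration might look like the following. The address 192.168.10.1 is a hypothetical stand-in for Host B; the actual addresses from the original figure are not reproduced in this copy of the document.

```
hostname R1
!
interface ethernet0
 ip access-group 1 in
!
access-list 1 permit host 192.168.10.1
```

Because of the implicit deny all clause, no explicit deny statement is needed; everything not matching the single permit line is dropped.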

Deny a Select Host to Access the Network

This figure shows that traffic sourced from Host B destined to NetA is
denied, while all other traffic sourced from NetB destined to NetA is permitted.


This configuration denies all packets from Host B through
Ethernet 0 on R1 and permits everything else. You must use the command
access-list 1 permit any to explicitly permit
everything else because there is an implicit deny all clause at the end of every ACL.


hostname R1
interface ethernet0
ip access-group 1 in
access-list 1 deny host
access-list 1 permit any

Note: The order of statements is critical to the operation of an ACL. If
the order of the entries is reversed, as this command shows, the first line
matches every packet source address. Therefore, the ACL fails to block Host B from accessing NetA.

access-list 1 permit any
access-list 1 deny host
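A complete version of the correct configuration might look like the following, again using the hypothetical address 192.168.10.1 for Host B (the original figure's addresses are not reproduced here):

```
hostname R1
!
interface ethernet0
 ip access-group 1 in
!
access-list 1 deny host 192.168.10.1
access-list 1 permit any
```

The deny line must come first; once a packet matches the permit any line, no later entries are evaluated.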

Allow Access to a Range of Contiguous IP Addresses

This figure shows that all hosts in the NetB network can access the network in NetA.


This configuration allows IP packets with an IP header that has a
source address in the NetB network and a destination address in the NetA
network access to NetA. The implicit deny all clause
at the end of the ACL denies all other traffic passage through Ethernet 0
inbound on R1.


hostname R1
interface ethernet0
ip access-group 101 in
access-list 101 permit ip

Note: In the access-list 101 permit ip command, the wildcard value is the
inverse mask of the network mask. ACLs use the inverse mask
to know how many bits in the network address need to match. In the table, the
ACL permits all hosts with source addresses in the NetB network and
destination addresses in the NetA network.

Refer to the
Masks section of Configuring IP Access Lists for more information on the mask of a network address
and how to calculate the inverse mask needed for ACLs.
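The inverse (wildcard) mask is simply the subnet mask with every bit flipped: a 0 bit means "this bit must match" and a 1 bit means "don't care". A small Python sketch illustrates the calculation (the function name is ours, not Cisco's):

```python
import ipaddress

def wildcard_mask(netmask: str) -> str:
    """Return the ACL wildcard (inverse) mask for a dotted-quad subnet mask."""
    inverted = int(ipaddress.IPv4Address(netmask)) ^ 0xFFFFFFFF
    return str(ipaddress.IPv4Address(inverted))

# A /24 mask inverts to 0.0.0.255: the first 24 bits must match.
print(wildcard_mask("255.255.255.0"))  # 0.0.0.255
print(wildcard_mask("255.255.0.0"))    # 0.0.255.255
```

So a network with mask 255.255.255.0 is written in an ACL with the wildcard 0.0.0.255.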

Deny Telnet Traffic (TCP, Port 23)

In order to meet higher security concerns, you might have to disable
Telnet access to your private network from the public network. This figure
shows how Telnet traffic from NetB (public) destined to NetA (private) is
denied, which permits NetA to initiate and establish a Telnet session with NetB
while all other IP traffic is permitted.


Telnet uses TCP port 23. This configuration shows that all TCP traffic
destined to NetA for port 23 is blocked, and all other IP traffic is permitted.


hostname R1
interface ethernet0
ip access-group 102 in
access-list 102 deny tcp any any eq 23
access-list 102 permit ip any any

Allow Only Internal Networks to Initiate a TCP Session

This figure shows that TCP traffic sourced from NetA destined to NetB
is permitted, while TCP traffic sourced from NetB destined to NetA is denied.


The purpose of the ACL in this example is to:

  • Allow hosts in NetA to initiate and establish a TCP session to hosts
    in NetB.

  • Deny hosts in NetB from initiating and establishing a TCP session
    destined to hosts in NetA.

This configuration allows a datagram to pass through interface Ethernet
0 inbound on R1 when the datagram has:

  • Acknowledged (ACK) or reset (RST) bits set (indicating an established
    TCP session)

  • A destination port value greater than 1023


hostname R1
interface ethernet0
ip access-group 102 in
access-list 102 permit tcp any any gt 1023 established

Since most of the well-known ports for IP services use values less than
1023, any datagram with a destination port less than 1023 or an ACK/RST bit not
set is denied by ACL 102. Therefore, when a host from NetB initiates a TCP
connection by sending the first TCP packet (which has the SYN bit set but not
the ACK or RST bit) to a port number less than 1023, it is denied and the TCP
session fails. The TCP sessions initiated from NetA destined to NetB are
permitted because the returning packets have the ACK or RST bit set and use port
values greater than 1023.

Refer to RFC 1700 for a complete list of ports.

Deny FTP Traffic (TCP, Port 21)

This figure shows that FTP (TCP, port 21) and FTP data (port 20)
traffic sourced from NetB destined to NetA is denied, while all other IP
traffic is permitted.


FTP uses port 21 and port 20. TCP traffic destined to port 21 and port
20 is denied and everything else is explicitly permitted.


hostname R1
interface ethernet0
ip access-group 102 in
access-list 102 deny tcp any any eq ftp
access-list 102 deny tcp any any eq ftp-data
access-list 102 permit ip any any

Allow FTP Traffic (Active FTP)

FTP can operate in two different modes, named active and passive; the two
modes differ in how the data connection is established.

When FTP operates in active mode, the FTP server uses port 21 for
control and port 20 for data. The FTP server is located in NetA.
This figure shows that FTP (TCP, port 21) and FTP data (port 20) traffic
sourced from NetB destined to the FTP server is permitted, while
all other IP traffic is denied.



hostname R1
interface ethernet0
ip access-group 102 in
access-list 102 permit tcp any host eq ftp
access-list 102 permit tcp any host eq ftp-data established
interface ethernet1
ip access-group 110 in
access-list 110 permit host eq ftp any established
access-list 110 permit host eq ftp-data any
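A complete version of this configuration might look like the following, using a hypothetical FTP server address of 192.168.10.100 (the original figure's address is not reproduced here):

```
hostname R1
!
interface ethernet0
 ip access-group 102 in
!
access-list 102 permit tcp any host 192.168.10.100 eq ftp
access-list 102 permit tcp any host 192.168.10.100 eq ftp-data established
!
interface ethernet1
 ip access-group 110 in
!
access-list 110 permit tcp host 192.168.10.100 eq ftp any established
access-list 110 permit tcp host 192.168.10.100 eq ftp-data any
```

ACL 102 admits the inbound control session and the established data replies; ACL 110 lets the server answer on port 21 and open the active-mode data connection from port 20.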

Allow FTP Traffic (Passive FTP)

FTP can operate in two different modes, named active and passive; the two
modes differ in how the data connection is established.

When FTP operates in passive mode, the FTP server uses port 21 for
control and dynamic ports greater than or equal to 1024 for data. The FTP
server is located in NetA. This figure shows that FTP (TCP,
port 21) and FTP data (ports greater than or equal to 1024) traffic sourced
from NetB destined to the FTP server is permitted, while all other
IP traffic is denied.



hostname R1
interface ethernet0
ip access-group 102 in
access-list 102 permit tcp any host eq ftp
access-list 102 permit tcp any host gt 1024
interface ethernet1
ip access-group 110 in
access-list 110 permit host eq ftp any established
access-list 110 permit host gt 1024 any established
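A complete version of the passive-mode configuration might look like the following, again with the hypothetical server address 192.168.10.100:

```
hostname R1
!
interface ethernet0
 ip access-group 102 in
!
access-list 102 permit tcp any host 192.168.10.100 eq ftp
access-list 102 permit tcp any host 192.168.10.100 gt 1024
!
interface ethernet1
 ip access-group 110 in
!
access-list 110 permit tcp host 192.168.10.100 eq ftp any established
access-list 110 permit tcp host 192.168.10.100 gt 1024 any established
```

The gt 1024 lines cover the dynamically negotiated data ports that passive mode uses instead of port 20.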

Allow Pings (ICMP)

This figure shows that ICMP sourced from NetA destined to NetB is
permitted, and pings sourced from NetB destined to NetA are denied.


This configuration permits only echo-reply (ping response) packets to
come in on interface Ethernet 0 from NetB towards NetA. However, the
configuration blocks all echo-request ICMP packets when pings are sourced in
NetB and destined to NetA. Therefore, hosts in NetA can ping hosts in NetB, but
hosts in NetB cannot ping hosts in NetA.


hostname R1
interface ethernet0
ip access-group 102 in
access-list 102 permit icmp any any echo-reply

Allow HTTP, Telnet, Mail, POP3, FTP

This figure shows that only HTTP, Telnet, Simple Mail Transfer Protocol
(SMTP), POP3, and FTP traffic are permitted, and the rest of the traffic
sourced from NetB destined to NetA is denied.


This configuration permits TCP traffic with destination port values
that match WWW (port 80), Telnet (port 23), SMTP (port 25), POP3 (port 110),
FTP (port 21), or FTP data (port 20). Notice that the implicit deny all clause at the
end of the ACL denies all other traffic that does not match the permit clauses.


hostname R1
interface ethernet0
ip access-group 102 in
access-list 102 permit tcp any any eq www
access-list 102 permit tcp any any eq telnet
access-list 102 permit tcp any any eq smtp
access-list 102 permit tcp any any eq pop3
access-list 102 permit tcp any any eq 21
access-list 102 permit tcp any any eq 20

Allow DNS

This figure shows that only Domain Name System (DNS) traffic is
permitted, and the rest of the traffic sourced from NetB destined to NetA is denied.


This configuration permits UDP and TCP traffic with port value 53 (domain).
The implicit deny all clause at the end of the ACL denies all other traffic
that does not match the permit clauses.


hostname R1
interface ethernet0
ip access-group 112 in
access-list 112 permit udp any any eq domain 
access-list 112 permit udp any eq domain any
access-list 112 permit tcp any any eq domain 
access-list 112 permit tcp any eq domain any

Permit Routing Updates

When you apply an inbound ACL to an interface, ensure that routing
updates are not filtered out. Use the relevant ACL from this list to permit
routing protocol packets:

Issue this command to permit Routing Information Protocol (RIP):
access-list 102 permit udp any any eq rip

Issue this command to permit Interior Gateway Routing Protocol (IGRP):
access-list 102 permit igrp any any

Issue this command to permit Enhanced IGRP (EIGRP):

access-list 102 permit eigrp any any

Issue this command to permit Open Shortest Path First (OSPF):

access-list 102 permit ospf any any

Issue this command to permit Border Gateway Protocol (BGP):

access-list 102 permit tcp any any eq 179 
access-list 102 permit tcp any eq 179 any

Debug Traffic Based on ACL

The use of debug commands requires the
allocation of system resources like memory and processing power and in extreme
situations can cause a heavily-loaded system to stall. Use
debug commands with care. Use an ACL in order to
selectively define the traffic that needs to be examined to reduce the impact
of the debug command. Such a configuration does not
filter any packets.

This configuration turns on the debug ip packet detail
command only for packets between the two specified hosts:

R1(config)#access-list 199 permit tcp host host
R1(config)#access-list 199 permit tcp host host
R1#debug ip packet 199 detail
IP packet debugging is on (detailed) for access list 199
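With the host addresses filled in, a complete session might look like this. The addresses 10.1.1.1 and 172.16.1.1 are hypothetical; substitute the two hosts whose traffic you want to examine.

```
R1(config)#access-list 199 permit tcp host 10.1.1.1 host 172.16.1.1
R1(config)#access-list 199 permit tcp host 172.16.1.1 host 10.1.1.1
R1(config)#end
R1#debug ip packet 199 detail
IP packet debugging is on (detailed) for access list 199
```

Both directions need a permit line so that the debug output captures the full conversation.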

Refer to
Information on Debug Commands
for additional information on the impact
of debug commands.

Refer to the
Debug Command
section of
the Ping and Traceroute Commands
for additional information on the use
of ACLs with debug commands.

MAC Address Filtering

You can filter frames with a particular MAC-layer station source or
destination address. Any number of addresses can be configured into the system
without a performance penalty. In order to filter by MAC-layer address, use
this command in global configuration mode:

Router#config terminal
bridge irb
bridge 1 protocol ieee
bridge 1 route ip
Apply the bridge protocol to an interface that you need to filter
traffic along with the access list created:

Router#int fa0/0
no ip address
bridge-group 1 {input-address-list 700 | output-address-list 700}
Create a Bridged Virtual Interface and apply the IP address that is
assigned to the Ethernet interface:

Router#int bvi1
ip address 
access-list 700 deny <mac address> 0000.0000.0000
access-list 700 permit 0000.0000.0000 ffff.ffff.ffff

With this configuration, the router only allows the MAC addresses
configured in access-list 700. With the access list, deny the MAC address
that cannot have access and then permit the rest.

Note: Create a separate access list line for each MAC address.
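Put together, a complete version of this configuration might look like the following. The MAC address 0000.0c12.3456 and the IP address 192.168.1.1 are hypothetical examples:

```
Router#configure terminal
Router(config)#bridge irb
Router(config)#bridge 1 protocol ieee
Router(config)#bridge 1 route ip
Router(config)#access-list 700 deny 0000.0c12.3456 0000.0000.0000
Router(config)#access-list 700 permit 0000.0000.0000 ffff.ffff.ffff
Router(config)#interface fa0/0
Router(config-if)#no ip address
Router(config-if)#bridge-group 1
Router(config-if)#bridge-group 1 input-address-list 700
Router(config-if)#interface bvi1
Router(config-if)#ip address 192.168.1.1 255.255.255.0
```

The all-zeros mask on the deny line means "match this MAC exactly"; the final permit with an all-ones mask lets every other station through.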


There is currently no verification procedure available for this
configuration.

There is currently no specific troubleshooting information available
for this configuration.


CCNA: The Explicit Deny All


One of the key facts regarding Access Control Lists (ACLs) that we drill into your head during CCNA is the fact that the lists you create end with what is called the “implicit” deny all. You do not see it, but the effect is undeniable. Any packets that do not match any of the permit statements in your list get deny treatment. In the case of our filtering access lists, this means the packets are dropped. As you recall from the course, this is why we desperately require at least one permit entry in all of our filtering access control lists.

But what if we want to track what we actually drop as a result of this powerful implicit deny all effect? Well, a clever trick is to end the list with an explicit deny statement and log the result. In this post, we will examine this technique.

Let’s create a named, standard ACL that permits packets sourced from the 10.x.x.x address space:

ip access-list standard AL_PERMIT_10
 permit 10.0.0.0 0.255.255.255

Now I will apply this ACL inbound to a router interface and generate some traffic that matches this statement. When we run the command show access-lists, we can see from the “hit counter” that the permit has caught some matches. But what about packets that have failed?

R1(config-std-nacl)#int fa0/0
R1(config-if)#ip access-group AL_PERMIT_10 in
R1#show access-lists
Standard IP access list AL_PERMIT_10
    10 permit 10.0.0.0, wildcard bits 0.255.255.255 (57 matches)

In order to be alerted about packets that hit the implicit deny all, we need to create an explicit one. If we are really concerned about packets that do not match any permits, we can add the log option so we can be alerted at the command line, in addition to being notified when we do the show access-list command.

R1(config)#ip access-list standard AL_PERMIT_10
R1(config-std-nacl)#deny any log

After this configuration change, watch what happens when a host outside the 10.x.x.x space tries to ping into R1 through our interface:

*Mar  1 00:12:23.251: %SEC-6-IPACCESSLOGNP: list AL_PERMIT_10 denied 0 ->, 1 packet
R1#show access-lists
Standard IP access list AL_PERMIT_10
    10 permit 10.0.0.0, wildcard bits 0.255.255.255 (117 matches)
    20 deny   any log (5 matches)

One question I often get from students at this point is: why did we only get one notification (of one packet) via the command line system message, but there were in fact 5 packets that were blocked (as depicted in the show command output)?

The answer is that the IOS is being smart here and it will batch up the log messages so that the system is not overwhelmed in trying to show us all of these matches in real time.

I hope you are enjoying CCNA studies here at INE!


Top 20 Nginx WebServer Best Security Practices

Nginx is a lightweight, high-performance web server/reverse proxy and e-mail (IMAP/POP3) proxy. It runs on UNIX, GNU/Linux, BSD variants, Mac OS X, Solaris, and Microsoft Windows. According to Netcraft, 6% of all domains on the Internet use the nginx webserver. Nginx is one of a handful of servers written to address the C10K problem. Unlike traditional servers, Nginx doesn't rely on threads to handle requests. Instead it uses a much more scalable event-driven (asynchronous) architecture. Nginx powers several high traffic web sites, such as WordPress, Hulu, Github, and SourceForge. This page collects hints on how to improve the security of nginx web servers running on Linux or UNIX like operating systems.
Default Config Files and Nginx Port

* /usr/local/nginx/conf/ - The nginx server configuration directory; /usr/local/nginx/conf/nginx.conf is the main configuration file.
* /usr/local/nginx/html/ - The default document location.
* /usr/local/nginx/logs/ - The default log file location.
* Nginx HTTP default port : TCP 80
* Nginx HTTPS default port : TCP 443

You can test nginx configuration changes as follows:
# /usr/local/nginx/sbin/nginx -t
Sample outputs:

the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
configuration file /usr/local/nginx/conf/nginx.conf test is successful

To load config changes, type:
# /usr/local/nginx/sbin/nginx -s reload
To stop server, type:
# /usr/local/nginx/sbin/nginx -s stop
#1: Turn On SELinux

Security-Enhanced Linux (SELinux) is a Linux kernel feature that provides a mechanism for supporting access control security policies. It can stop many attacks before your system is rooted. See how to turn on SELinux for CentOS / RHEL based systems.
Do Boolean Lockdown

Run the getsebool -a command and lock down the system:

getsebool -a | less
getsebool -a | grep off
getsebool -a | grep on

To secure the machine, look at settings which are set to 'on' and change them to 'off' if they do not apply to your setup, with the help of the setsebool command. Set the correct SELinux booleans to maintain both functionality and protection. Please note that SELinux adds 2-8% overhead to a typical RHEL or CentOS installation.
#2: Allow Minimal Privileges Via Mount Options

Serve all your webpages / html / php files from a separate partition. For example, create a partition called /dev/sda5 and mount it at /nginx. Make sure /nginx is mounted with the noexec, nodev and nosuid permissions. Here is my /etc/fstab entry for mounting /nginx:

LABEL=/nginx /nginx ext3 defaults,nosuid,noexec,nodev 1 2

Note you need to create a new partition using fdisk and mkfs.ext3 commands.
#3: Linux /etc/sysctl.conf Hardening

You can control and configure Linux kernel and networking settings via /etc/sysctl.conf.

# Avoid a smurf attack
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Turn on protection for bad icmp error messages
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Turn on syncookies for SYN flood attack protection
net.ipv4.tcp_syncookies = 1

# Turn on and log spoofed, source routed, and redirect packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1

# No source routed packets here
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# Turn on reverse path filtering
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Make sure no one can alter the routing tables
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0

# Don't act as a router
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Turn on execshield
kernel.exec-shield = 1
kernel.randomize_va_space = 1

# Tune IPv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1

# Optimization for port use for LBs
# Increase system file descriptor limit
fs.file-max = 65535

# Allow for more PIDs (to reduce rollover problems); may break some programs (default is 32768)
kernel.pid_max = 65536

# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000

# Increase TCP max buffer size settable using setsockopt()
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 87380 8388608

# Increase Linux auto tuning TCP buffer limits
# min, default, and max number of bytes to use
# set max to at least 4MB, or higher if you use very high BDP paths
# Tcp Windows etc
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_window_scaling = 1

See also:

* Linux Tuning The VM (memory) Subsystem
* Linux Tune Network Stack (Buffers Size) To Increase Networking Performance

#4: Remove All Unwanted Nginx Modules

You need to minimize the number of modules that are compiled directly into the nginx binary. This minimizes risk by limiting the capabilities allowed by the webserver. You can configure and install nginx using only the required modules. For example, to disable the SSI and autoindex modules, type:
# ./configure --without-http_autoindex_module --without-http_ssi_module
# make
# make install
Type the following command to see which modules can be turned on or off while compiling the nginx server:
# ./configure --help | less
Disable nginx modules that you don't need.
(Optional) Change Nginx Version Header

Edit src/http/ngx_http_header_filter_module.c, enter:
# vi +48 src/http/ngx_http_header_filter_module.c
Find these lines:

static char ngx_http_server_string[] = "Server: nginx" CRLF;
static char ngx_http_server_full_string[] = "Server: " NGINX_VER CRLF;

Change them as follows:

static char ngx_http_server_string[] = "Server: Ninja Web Server" CRLF;
static char ngx_http_server_full_string[] = "Server: Ninja Web Server" CRLF;

Save and close the file. Now, you can compile the server. Add the following in nginx.conf to turn off nginx version number displayed on all auto generated error pages:

server_tokens off;
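For example, the directive can be set globally in the http context of nginx.conf (a minimal sketch):

```
http {
    # Hide the nginx version in the Server header and on error pages
    server_tokens off;
}
```

Note that server_tokens only hides the version string; the Server header itself still reads "nginx" unless the source is patched as shown above.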

#5: Use mod_security (only for backend Apache servers)

mod_security provides an application level firewall for Apache. Install mod_security for all backend Apache web servers. This will stop many injection attacks.
#6: Install SELinux Policy To Harden The Nginx Webserver

By default SELinux will not protect the nginx web server. However, you can install and compile protection as follows. First, install required SELinux compile time support:
# yum -y install selinux-policy-targeted selinux-policy-devel
Download targeted SELinux policies to harden the nginx webserver on Linux servers from the project home page:
# cd /opt
# wget ''
Untar the same:
# tar -zxvf se-ngix_1_0_10.tar.gz
Compile the same
# cd se-ngix_1_0_10/nginx
# make
Sample outputs:

Compiling targeted nginx module
/usr/bin/checkmodule: loading policy configuration from tmp/nginx.tmp
/usr/bin/checkmodule: policy configuration loaded
/usr/bin/checkmodule: writing binary representation (version 6) to tmp/nginx.mod
Creating targeted nginx.pp policy package
rm tmp/nginx.mod.fc tmp/nginx.mod

Install the resulting nginx.pp SELinux module:
# /usr/sbin/semodule -i nginx.pp
#7: Restrictive Iptables Based Firewall

The following firewall script blocks everything and only allows:

* Incoming HTTP (TCP port 80) requests
* Incoming ICMP ping requests
* Outgoing ntp (port 123) requests
* Outgoing smtp (TCP port 25) requests


#### IPS ######
# Get server public ip
SERVER_IP=$(ifconfig eth0 | grep 'inet addr:' | awk -F'inet addr:' '{ print $2}' | awk '{ print $1}')

# Do some smart logic so that we can use the same script on LB2 too
[[ "$SERVER_IP" == "$LB1_IP" ]] && OTHER_LB="$LB2_IP" || OTHER_LB="$LB1_IP"
[[ "$OTHER_LB" == "$LB2_IP" ]] && OPP_LB="$LB1_IP" || OPP_LB="$LB2_IP"

### IPs ###
IPT="/sbin/iptables"

#### FILES #####
BADIPS=$( [[ -f ${BLOCKED_IP_TDB} ]] && egrep -v "^#|^$" ${BLOCKED_IP_TDB})

### Interfaces ###
PUB_IF="eth0" # public interface
LO_IF="lo" # loopback
VPN_IF="eth1" # vpn / private net

### start firewall ###
echo "Setting LB1 $(hostname) Firewall..."

# DROP and close everything
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP

# Unlimited lo access
$IPT -A INPUT -i ${LO_IF} -j ACCEPT
$IPT -A OUTPUT -o ${LO_IF} -j ACCEPT

# Unlimited vpn / pnet access
$IPT -A INPUT -i ${VPN_IF} -j ACCEPT
$IPT -A OUTPUT -o ${VPN_IF} -j ACCEPT

# Drop sync
$IPT -A INPUT -i ${PUB_IF} -p tcp ! --syn -m state --state NEW -j DROP

# Drop Fragments
$IPT -A INPUT -i ${PUB_IF} -f -j DROP

$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL FIN,URG,PSH -j DROP
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL ALL -j DROP

# Drop NULL packets
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " NULL Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL NONE -j DROP

$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,RST SYN,RST -j DROP

# Drop XMAS
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " XMAS Packets "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP

# Drop FIN packet scans
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " Fin Packets Scan "
$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags FIN,ACK FIN -j DROP

$IPT -A INPUT -i ${PUB_IF} -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP

# Log and get rid of broadcast / multicast and invalid
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j LOG --log-prefix " Broadcast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type broadcast -j DROP

$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j LOG --log-prefix " Multicast "
$IPT -A INPUT -i ${PUB_IF} -m pkttype --pkt-type multicast -j DROP

$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j LOG --log-prefix " Invalid "
$IPT -A INPUT -i ${PUB_IF} -m state --state INVALID -j DROP

# Log and block spoofed ips
$IPT -N spooflist
for ipblock in $SPOOFIP
do
 $IPT -A spooflist -i ${PUB_IF} -s $ipblock -j LOG --log-prefix " SPOOF List Block "
 $IPT -A spooflist -i ${PUB_IF} -s $ipblock -j DROP
done
$IPT -I INPUT -j spooflist
$IPT -I OUTPUT -j spooflist
$IPT -I FORWARD -j spooflist

# Allow ssh only from selected public ips
for ip in ${PUB_SSH_ONLY}
do
 $IPT -A INPUT -i ${PUB_IF} -s ${ip} -p tcp -d ${SERVER_IP} --destination-port 22 -j ACCEPT
 $IPT -A OUTPUT -o ${PUB_IF} -d ${ip} -p tcp -s ${SERVER_IP} --sport 22 -j ACCEPT
done

# allow incoming ICMP ping pong stuff
$IPT -A INPUT -i ${PUB_IF} -p icmp --icmp-type 8 -s 0/0 -m state --state NEW,ESTABLISHED,RELATED -m limit --limit 30/sec -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p icmp --icmp-type 0 -d 0/0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# allow incoming HTTP port 80
$IPT -A INPUT -i ${PUB_IF} -p tcp -s 0/0 --sport 1024:65535 --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --sport 80 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

# allow outgoing ntp
$IPT -A OUTPUT -o ${PUB_IF} -p udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT

# allow outgoing smtp
$IPT -A OUTPUT -o ${PUB_IF} -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
$IPT -A INPUT -i ${PUB_IF} -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT

### add your other rules here ####

# drop and log everything else
$IPT -A INPUT -m limit --limit 5/m --limit-burst 7 -j LOG --log-prefix " DEFAULT DROP "
$IPT -A INPUT -j DROP

exit 0

#8: Controlling Buffer Overflow Attacks

Edit nginx.conf:
# vi /usr/local/nginx/conf/nginx.conf
Set the buffer size limitations for all clients as follows:

## Start: Size Limits & Buffer Overflows ##
client_body_buffer_size 1K;
client_header_buffer_size 1k;
client_max_body_size 1k;
large_client_header_buffers 2 1k;
## END: Size Limits & Buffer Overflows ##


1. client_body_buffer_size 1k - (default is 8k or 16k) This directive specifies the client request body buffer size.
2. client_header_buffer_size 1k - This directive sets the header buffer size for the request header from the client. For the overwhelming majority of requests a buffer size of 1K is sufficient. Increase this if you have a custom header or a large cookie sent from the client (e.g., a WAP client).
3. client_max_body_size 1k - This directive assigns the maximum accepted body size of a client request, as indicated by the Content-Length line in the request header. If the size is greater than the given one, the client gets the error "Request Entity Too Large" (413). Increase this when you accept file uploads via the POST method.
4. large_client_header_buffers 2 1k - This directive assigns the maximum number and size of buffers for reading large headers from the client request. By default the size of one buffer is equal to the page size, either 4K or 8K depending on platform; if the connection converts to the keep-alive state at the end of the request, these buffers are freed. 2x1k will accept a 2 kB data URI. This also helps combat bad bots and DoS attacks.

You also need to control timeouts to improve server performance and cut clients. Edit it as follows:

## Start: Timeouts ##
client_body_timeout 10;
client_header_timeout 10;
keepalive_timeout 5 5;
send_timeout 10;
## End: Timeouts ##

1. client_body_timeout 10; - Sets the read timeout for the client request body. The timeout applies only if the body is not received in one read step. If the client sends nothing within this time, nginx returns the error "Request time out" (408). The default is 60.
2. client_header_timeout 10; - Sets the timeout for reading the client request header. The timeout applies only if the header is not received in one read step. If the client sends nothing within this time, nginx returns the error "Request time out" (408).
3. keepalive_timeout 5 5; - The first parameter sets the timeout for keep-alive connections with the client; the server closes connections after this time. The optional second parameter sets the time value in the "Keep-Alive: timeout=time" response header. This header can convince some browsers to close the connection themselves, so the server does not have to. Without the second parameter, nginx does not send a Keep-Alive header (though the header is not what makes a connection keep-alive).
4. send_timeout 10; - Sets the response timeout to the client. The timeout applies not to the entire transfer but only between two read operations; if the client takes nothing within this time, nginx shuts down the connection.

#9: Control Simultaneous Connections

You can use the NginxHttpLimitZone module to limit the number of simultaneous connections for an assigned session or, as a special case, from one IP address. Edit nginx.conf:

### Define the zone (named "slimits" here) in which the session states are stored ###
### 1m handles about 32000 sessions at 32 bytes/session; 5m handles about 160000 ###
limit_zone slimits $binary_remote_addr 5m;

### Control maximum number of simultaneous connections for one session i.e. ###
### restricts the amount of connections from a single ip address ###
limit_conn slimits 5;

The above limits remote clients to no more than 5 concurrently open connections per remote IP address.
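The two directives above live at different levels of the configuration. A minimal sketch of where they go (the zone and limit names match the example above; everything else is a generic assumption):

```nginx
http {
    # zone "slimits": one state slot per client IP, 5 MB of shared storage
    limit_zone slimits $binary_remote_addr 5m;

    server {
        listen 80;
        # no more than 5 simultaneous connections per client IP
        limit_conn slimits 5;
    }
}
```

limit_zone must be declared at the http level; limit_conn can then be applied per server or per location.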
#10: Allow Access To Our Domain Only

If a bot is just making random scans across all domains, deny it. Only requests for your configured virtual domains or reverse-proxied hosts should be allowed; you don't want to serve requests made against a bare IP address:

## Only requests to our host names are allowed ##
## (the original host names were lost from this copy; example.com is a placeholder) ##
if ($host !~ ^(example.com|www.example.com)$ ) {
    return 444;
}

#11: Limit Available Methods

GET and POST are the most common methods on the Internet. Web server methods are defined in RFC 2616. If a web server does not require the implementation of all available methods, they should be disabled. The following will filter and only allow GET, HEAD and POST methods:

## Only allow these request methods ##
if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 444;
}
## Do not accept DELETE, SEARCH and other methods ##

More About HTTP Methods

* The GET method is used to request a document or resource from the server.
* The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response.
* The POST method may involve anything, such as storing or updating data, ordering a product, or sending e-mail by submitting a form. It is usually processed by server-side scripting such as PHP, Perl, Python and so on. You must use it if you want to upload files or process forms on the server.

#12: How Do I Deny Certain User-Agents?

You can easily block user-agents, i.e. scanners, bots, and spammers that may be abusing your server.

## Block download agents ##
if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
    return 403;
}

Block robots called msnbot and scrapbot:

## Block some robots ##
if ($http_user_agent ~* msnbot|scrapbot) {
    return 403;
}

#12: How Do I Block Referral Spam?

Referer spam is dangerous. It can harm your SEO ranking via web logs (if published), since the referer field will point to their spammy site. You can block access from referer spammers with these lines:

## Deny certain Referers ###
if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) )
{
    # return 404;
    return 403;
}

#13: How Do I Stop Image Hotlinking?

Image or HTML hotlinking means someone links to one of your images but displays it on their own site. The end result: you pay the bandwidth bill and the content looks like part of the hijacker's site. This is usually done on forums and blogs. I strongly suggest you block and stop image hotlinking at the server level itself.

# Stop deep linking or hot linking
location /images/ {
    valid_referers none blocked;    # add your own host names here
    if ($invalid_referer) {
        return 403;
    }
}

Example: Rewrite And Display Image

Another example, with a rewrite to a banned image:

valid_referers blocked;    # add your own host names here
if ($invalid_referer) {
    rewrite ^/images/uploads.*\.(gif|jpg|jpeg|png)$ /banned.jpg last;    # /banned.jpg is a placeholder for your banned-image URL
}

See also:

* HowTo: Use nginx map to block image hotlinking. This is useful if you want to block tons of domains.

#14: Directory Restrictions

You can set access control for a specified directory. All web directories should be configured on a case-by-case basis, allowing access only where needed.
Limiting Access By IP Address

You can limit access to the /docs/ directory by IP address:

location /docs/ {
    ## block one workstation (example address)
    deny;

    ## allow anyone in (example network)
    allow;

    ## drop rest of the world
    deny all;
}

Password Protect The Directory

First create the password file and add a user called vivek:
# mkdir /usr/local/nginx/conf/.htpasswd/
# htpasswd -c /usr/local/nginx/conf/.htpasswd/passwd vivek
Edit nginx.conf and protect the required directories as follows:

### Password Protect /personal-images/ and /delta/ directories ###
location ~ /(personal-images/.*|delta/.*) {
    auth_basic "Restricted";
    auth_basic_user_file /usr/local/nginx/conf/.htpasswd/passwd;
}

Once a password file has been generated, subsequent users can be added with the following command:
# htpasswd -s /usr/local/nginx/conf/.htpasswd/passwd userName
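If the htpasswd binary is not available, an equivalent entry can be generated with openssl instead; a sketch, assuming openssl is installed (the user name vivek matches the example above, and "secret" is a placeholder password):

```shell
# append an Apache-MD5 format htpasswd entry for user "vivek"
printf 'vivek:%s\n' "$(openssl passwd -apr1 secret)" >> passwd
```

The resulting line is in the same $apr1$ format that htpasswd -m produces, so nginx's auth_basic_user_file accepts it.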
#15: Nginx SSL Configuration

HTTP is a plain-text protocol and is open to passive monitoring. You should use SSL to encrypt your content for users.
Create an SSL Certificate

Type the following commands:
# cd /usr/local/nginx/conf
# openssl genrsa -des3 -out server.key 1024
# openssl req -new -key server.key -out server.csr
# cp server.key server.key.org
# openssl rsa -in server.key.org -out server.key
# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
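After generating the files, it is worth confirming that the certificate and key actually belong together; a sketch comparing their RSA moduli (the file names are the ones created above):

```shell
# both commands should print the same MD5 hash if cert and key match
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa  -noout -modulus -in server.key | openssl md5
```

If the two hashes differ, nginx will fail to start with an SSL key/certificate mismatch error.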
Edit nginx.conf and update it as follows:

server {
listen 443;
ssl on;
ssl_certificate /usr/local/nginx/conf/server.crt;
ssl_certificate_key /usr/local/nginx/conf/server.key;
access_log /usr/local/nginx/logs/ssl.access.log;
error_log /usr/local/nginx/logs/ssl.error.log;
}

Reload nginx:
# /usr/local/nginx/sbin/nginx -s reload
See also:

* For more information, read the Nginx SSL documentation.

#16: Nginx And PHP Security Tips

PHP is one of the most popular server-side scripting languages. Edit /etc/php.ini as follows:

# Disallow dangerous functions
disable_functions = phpinfo, system, mail, exec

## Try to limit resources ##

# Maximum execution time of each script, in seconds
max_execution_time = 30

# Maximum amount of time each script may spend parsing request data
max_input_time = 60

# Maximum amount of memory a script may consume (8MB)
memory_limit = 8M

# Maximum size of POST data that PHP will accept.
post_max_size = 8M

# Whether to allow HTTP file uploads.
file_uploads = Off

# Maximum allowed size for uploaded files.
upload_max_filesize = 2M

# Do not expose PHP error messages to external users
display_errors = Off

# Turn on safe mode
safe_mode = On

# Only allow access to executables in isolated directory
safe_mode_exec_dir = php-required-executables-path

# Limit external access to PHP environment
safe_mode_allowed_env_vars = PHP_

# Restrict PHP information leakage
expose_php = Off

# Log all errors
log_errors = On

# Do not register globals for input data
register_globals = Off

# Ensure PHP redirects appropriately
cgi.force_redirect = 0

# Enable SQL safe mode
sql.safe_mode = On

# Avoid Opening remote files
allow_url_fopen = Off

See also:

* PHP Security: Limit Resources Used By Script
* PHP.INI settings: Disable exec, shell_exec, system, popen and Other Functions To Improve Security

#17: Run Nginx In A Chroot Jail (Containers) If Possible

Putting nginx in a chroot jail minimizes the damage done by a potential break-in, by isolating the web server to a small section of the filesystem. You can use a traditional chroot setup with nginx. If possible, use FreeBSD jails, Xen, or OpenVZ virtualization, which use the concept of containers.
#18: Limit Connections Per IP At The Firewall Level

A web server must keep an eye on connections and limit connections per second. This is serving 101. Both pf and iptables can throttle end users before they reach your nginx server.
Linux Iptables: Throttle Nginx Connections Per Second

The following example will drop incoming connections if an IP makes more than 15 connection attempts to port 80 within 60 seconds:

/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
/sbin/iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 15 -j DROP
service iptables save

BSD PF: Throttle Nginx Connections Per Second

Edit your /etc/pf.conf and update it as follows. The following limits the maximum number of connections per source to 100. "15/5" rate-limits the number of connections to 15 in a 5-second span. Anyone who breaks these rules is added to our abusive_ips table and blocked from making further connections. Finally, the flush keyword kills all states created by the matching rule that originate from hosts exceeding these limits.

table <abusive_ips> persist
block in quick from <abusive_ips>
pass in on $ext_if proto tcp to $webserver_ip port www flags S/SA keep state (max-src-conn 100, max-src-conn-rate 15/5, overload <abusive_ips> flush)

Please adjust all values as per your requirements and traffic (browsers may open multiple connections to your site). See also:

1. Sample PF firewall script.
2. Sample Iptables firewall script.

#19: Configure Operating System to Protect Web Server

Turn on SELinux as described above. Set correct permissions on the nginx document root. nginx runs as a user named nginx, but the files in the document root (/nginx or /usr/local/nginx/html) should not be owned or writable by that user. To find files with wrong ownership, use:
# find /nginx -user nginx
# find /usr/local/nginx/html -user nginx
Make sure you change file ownership to root or another user. A typical set of permissions for /usr/local/nginx/html/:
# ls -l /usr/local/nginx/html/
Sample outputs:

-rw-r--r-- 1 root root 925 Jan 3 00:50 error4xx.html
-rw-r--r-- 1 root root 52 Jan 3 10:00 error5xx.html
-rw-r--r-- 1 root root 134 Jan 3 00:52 index.html

You must delete unwanted backup files created by vi or other text editors:
# find /nginx -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'
# find /usr/local/nginx/html/ -name '.?*' -not -name .ht* -or -name '*~' -or -name '*.bak*' -or -name '*.old*'

Pass the -delete option to the find command and it will get rid of those files too.
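The backup-file cleanup can be sketched as follows (run it without -delete first to review what would be removed):

```shell
# delete editor backup/old files under the nginx document root
find /usr/local/nginx/html/ -type f \
    \( -name '*~' -o -name '*.bak*' -o -name '*.old*' \) -delete
```

The \( ... \) grouping matters: without it, -delete would bind only to the last -name test because of find's operator precedence.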
#20: Restrict Outgoing Nginx Connections

Crackers will often download files to your server using tools such as wget. Use iptables to block outgoing connections from the nginx user. The ipt_owner module matches characteristics of the packet creator for locally generated packets, and is only valid in the OUTPUT chain. In this example, the vivek user is allowed to connect outside on port 80 (useful for RHN access or grabbing CentOS updates via repos):

/sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner vivek -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

Add the above rule to your iptables-based shell script. Do not allow the nginx web server user to connect outside.
Bounce Tip: Watching Your Logs & Auditing

Check the log files. They will give you some understanding of what attacks are thrown against the server and allow you to check whether the necessary level of security is present.
# grep "/login.php??" /usr/local/nginx/logs/access_log
# grep "...etc/passwd" /usr/local/nginx/logs/access_log
# egrep -i "denied|error|warn" /usr/local/nginx/logs/error_log
The auditd service provides system auditing. Turn it on to audit SELinux events, authentication events, file modifications, account modifications and so on. As usual, disable all unneeded services and follow our "Linux Server Hardening" security tips.

Your nginx server is now properly hardened and ready to serve web pages. However, you should consult further resources for your web application's security needs. For example, WordPress or any other third-party app has its own security requirements.

* HowTo: Setup nginx reverse proxy and HA cluster with the help of keepalived.
* nginx wiki - The official nginx wiki.
* OpenBSD specific Nginx installation and security how to.


HowTo: Migrate / Move MySQL Database And Users To New Server

SkyHi @ Friday, September 10, 2010
I already wrote about how to move or migrate user accounts from an old Linux / UNIX server to a new server, including mails and home directories. However, in reality you also need to move the MySQL database that may host your blog, forum, or just your data. The mysqldump command will only export the data and the table structure; it will not include the users' grants and privileges. The main function of the MySQL privilege system (stored in the mysql.user table) is to authenticate a user who connects from a given host and to associate that user with privileges on a database, such as SELECT, INSERT, UPDATE, and DELETE.
Our Sample Setup

+-----+
| db1 | -------------------------> -+
+-----+                             |
old mysql server                    |
                                    |
+-----+                             |      ///////////////////////////////
| db2 | -------------------------> -+----> // Internet (ISP router      //
+-----+                             |      // with port 80 forwarding)  //
new mysql server                    |      ///////////////////////////////
                                    |
+-----+                             |
| www1| -------------------------> -+
+-----+
Apache web server

You need to move db1 server database called blogdb and its users to db2 server.
Install MySQL On DB2

Use apt-get (Debian / Ubuntu) or yum (RHEL / CentOS) to install MySQL on the DB2 server.

Debian / Ubuntu:
$ sudo apt-get install mysql-server mysql-client
$ sudo service mysql start
# set root password for new installation
$ mysqladmin -u root password NEWPASSWORD

RHEL / CentOS:
$ sudo yum install mysql-server mysql
$ sudo chkconfig mysqld on
$ sudo service mysqld start
# set root password for new installation
$ mysqladmin -u root password NEWPASSWORD
Make sure OpenSSH server is also installed on DB2.
Get Current MySQL, Usernames, Hostname, And Database Names

Type the following command at the shell prompt to list usernames and hostnames:

mysql -u root -B -N -p -e "SELECT user, host FROM user" mysql

Sample outputs:

root localhost

The first column is the MySQL username and the second is the network host name. Now type the following commands to get exact details about the grants and password for each user from the list above:

mysql -u root -p -B -N -e"SHOW GRANTS FOR 'userName'@hostName"
mysql -u root -p -B -N -e"SHOW GRANTS FOR 'vivek'@"

Sample outputs:

GRANT USAGE ON *.* TO 'vivek'@'' IDENTIFIED BY PASSWORD 'somePasswordMd5'
GRANT ALL PRIVILEGES ON `blogdb`.* TO 'vivek'@''


* vivek - MySQL login username
* - Another server or workstation allowed to access this MySQL server
* somePasswordMd5 - The password hash stored in the mysql database (not clear text)
* blogdb - Your database name

Now you have all the info, and you can move the database and users to the new server (db2) using a combination of the OpenSSH ssh client and the mysql client, as follows:

ssh user@db2 mysql -u root -p'password' -e "create database IF NOT EXISTS blogdb;"
ssh user@db2 mysql -u root -p'password' -e "GRANT USAGE ON *.* TO 'vivek'@'' IDENTIFIED BY PASSWORD 'somePasswordMd5';"
ssh user@db2 mysql -u root -p'password' -e "GRANT ALL PRIVILEGES ON blogdb.* TO 'vivek'@'';"
mysqldump -u root -p'password' -h 'localhost' blogdb | ssh user@db2 mysql -u root -p'password' blogdb

You can test it as follows from Apache web server:
$ mysql -u vivek -h -p blogdb -e 'show tables;'
A Note About Web Applications

Finally, you need to update your application to point to the new database server (DB2). For example, change the configuration from:

$DB_SERVER = "";
$DB_USER = "vivek";
$DB_PASS = "your-password";
$DB_NAME = "blogdb";

to:

$DB_SERVER = "";
$DB_USER = "vivek";
$DB_PASS = "your-password";
$DB_NAME = "blogdb";

A Sample Shell Script To Migrate Database

#!/bin/bash
# Copyright (c) 2005 nixCraft project
# This script is licensed under GNU GPL version 2.0 or above
# Author Vivek Gite
# ------------------------------------------------------------
# SETME First - local mysql user/pass
_lusr="root"
_lpass="MySQLPassword"
_lhost="localhost"
# SETME First - remote mysql user/pass
_rusr="root"
_rpass="MySQLPassword"
_rhost="localhost"
# SETME First - remote mysql ssh info
# Make sure ssh keys are set
_rsshusr="vivek"
_rsshhost=""    # set to your DB2 ssh host (the value was lost from this copy)
# sql file to hold grants and db info locally
_tmp="/tmp/migratedb.$$.sql"
#### No editing below #####
# Input data
_db="$1"
_user="$2"
# Die if no input given
[ $# -eq 0 ] && { echo "Usage: $0 MySQLDatabaseName MySQLUserName"; exit 1; }
# Make sure you can connect to local db server
mysqladmin -u "$_lusr" -p"$_lpass" -h "$_lhost" ping &>/dev/null || { echo "Error: Mysql server is not online or set correct values for _lusr, _lpass, and _lhost"; exit 2; }
# Make sure database exists
mysql -u "$_lusr" -p"$_lpass" -h "$_lhost" -N -B -e'show databases;' | grep -q "^${_db}$" || { echo "Error: Database $_db not found."; exit 3; }
##### Step 1: Okay build .sql file with db and users, password info ####
echo "*** Getting info about $_db..."
echo "create database IF NOT EXISTS $_db; " > "$_tmp"
# Build mysql query to grab all privs and user@host combo for given db_username
mysql -u "$_lusr" -p"$_lpass" -h "$_lhost" -B -N \
 -e "SELECT DISTINCT CONCAT('SHOW GRANTS FOR ''',user,'''@''',host,''';') AS query FROM user" \
 mysql \
 | mysql -u "$_lusr" -p"$_lpass" -h "$_lhost" \
 | grep "$_user" \
 | sed -e 's/^Grants for .*/#### &/' -e '/^GRANT/s/$/;/' >> "$_tmp"
##### Step 2: send .sql file to remote server ####
echo "*** Creating $_db on ${_rsshhost}..."
scp "$_tmp" ${_rsshusr}@${_rsshhost}:/tmp/
#### Step 3: Create db and load users into remote db server ####
ssh ${_rsshusr}@${_rsshhost} mysql -u "$_rusr" -p"$_rpass" -h "$_rhost" < "$_tmp"
#### Step 4: Send mysql database and all data ####
echo "*** Exporting $_db from $HOSTNAME to ${_rsshhost}..."
mysqldump -u "$_lusr" -p"$_lpass" -h "$_lhost" "$_db" | ssh ${_rsshusr}@${_rsshhost} mysql -u "$_rusr" -p"$_rpass" -h "$_rhost" "$_db"
rm -f "$_tmp"
How Do I Use This Script?

Download the above script and edit it to set the following as per your setup:

# SETME First - local mysql DB1 admin user/password
_lusr="root"
_lpass="MySQLPassword"
_lhost="localhost"
# SETME First - remote mysql DB2 admin user/password
_rusr="root"
_rpass="mySQLPassword"
_rhost="localhost"
# Remote SSH Server (DB2 SSH Server)
# Make sure ssh keys are set
_rsshusr="vivek"
_rsshhost=""

In this example, migrate a database called wiki with the wikiuser username:
$ ./ wiki wikiuser

REFERENCES

HowTo: Debug Crashed Linux Application Core Files Like A Pro

SkyHi @ Friday, September 10, 2010
Core dumps are often used to diagnose or debug errors in Linux or UNIX programs. They can serve as useful debugging aids for sysadmins to find out why an application such as Lighttpd, Apache, or PHP-CGI crashed. Many vendors and open source project authors request a core file to troubleshoot a program. A core file is generated when an application program terminates abnormally due to a bug, an operating system security protection scheme, the program writing beyond the area of memory it has allocated, and so on. This article explains how to turn on core file support and track down bugs in programs.

Turn On Core File Creation Support

By default most Linux distributions turn off core file creation (at least this is true for RHEL, CentOS, Fedora and Suse Linux). You need to use the ulimit command to configure core files.

See The Current Core File Limits

Type the following command:
# ulimit -c
Sample outputs:

0

The output 0 (zero) means core files are not created.

Change Core File Limits

In this example, set the size limit of core files to 75000 bytes:
# ulimit -c 75000
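The change can be verified in the same shell. A quick sketch (raising the limit to unlimited usually succeeds, because the hard limit is normally unlimited):

```shell
# raise the core file size limit for this shell and confirm it
ulimit -c unlimited
ulimit -c        # typically prints: unlimited
```

Note that ulimit changes apply only to the current shell and its children; for a persistent setting see the /etc/profile change below.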

HowTo: Enable Core File Dumps For Application Crashes And Segmentation Faults

Edit the /etc/profile file and find the line that reads as follows, to make the configuration persistent:
ulimit -S -c 0 > /dev/null 2>&1
Update it as follows:
ulimit -c unlimited >/dev/null 2>&1
Save and close the file. Edit /etc/sysctl.conf, enter:
# vi /etc/sysctl.conf
Append the following lines:
kernel.core_uses_pid = 1
kernel.core_pattern = /tmp/core-%e-%s-%u-%g-%p-%t
fs.suid_dumpable = 2
Save and close the file. Where,
  1. kernel.core_uses_pid = 1 - Appends the coring process's PID to the core file name.
  2. fs.suid_dumpable = 2 - Make sure you get core dumps for setuid programs.
  3. kernel.core_pattern = /tmp/core-%e-%s-%u-%g-%p-%t - When the application terminates abnormally, a core file should appear in /tmp. The kernel.core_pattern sysctl controls the exact location of the core file. You can define the core file name with a template which can contain % specifiers that are substituted by the following values when a core file is created:
    • %% - A single % character
    • %p - PID of dumped process
    • %u - real UID of dumped process
    • %g - real GID of dumped process
    • %s - number of signal causing dump
    • %t - time of dump (seconds since 0:00h, 1 Jan 1970)
    • %h - hostname (same as 'nodename' returned by uname(2))
    • %e - executable filename
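To see what file name a given pattern produces, the substitution can be sketched with printf (the process values below are made up for illustration):

```shell
# hypothetical crash: executable "myapp", signal 11, uid 0, gid 0, pid 1234, epoch 1284000000
exe=myapp sig=11 uid=0 gid=0 pid=1234 t=1284000000
printf '/tmp/core-%s-%s-%s-%s-%s-%s\n' "$exe" "$sig" "$uid" "$gid" "$pid" "$t"
# -> /tmp/core-myapp-11-0-0-1234-1284000000
```

Embedding the signal, PID, and timestamp in the name keeps multiple crashes from overwriting each other's core files.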
Finally, enable debugging for all apps, enter (Redhat and friends specific):
# echo "DAEMON_COREFILE_LIMIT='unlimited'" >> /etc/sysconfig/init
Reload the settings in /etc/sysctl.conf by running the following command:
# sysctl -p

How Do I Enable Core Dumping For A Specific Daemon?

To enable core dumping for a specific daemon, add the following line to its /etc/sysconfig/daemon-name file. In this example, edit /etc/sysconfig/lighttpd and add a line as follows:
DAEMON_COREFILE_LIMIT='unlimited'
Please note that DAEMON_COREFILE_LIMIT is Red Hat specific; for all other distros add configuration to the daemon's init script (e.g. /etc/init.d/lighttpd) as follows:
ulimit -c unlimited >/dev/null 2>&1
echo /tmp/core-%e-%s-%u-%g-%p-%t > /proc/sys/kernel/core_pattern
Save and close the file. Restart / reload lighttpd:
# /etc/init.d/lighttpd restart
# su - lighttpd
$ ulimit -c

Sample outputs:

unlimited

Now you can send core files to the vendor or software authors.

How Do I Read Core Files?

You need to use the gdb command as follows:
$ gdb /path/to/application /path/to/corefile
See the gdb command man page for more information.

strace command

System administrators, diagnosticians and troubleshooters will find it invaluable for solving problems with programs for which the source is not readily available, since programs do not need to be recompiled in order to trace them. It is also useful when submitting bug reports to open source developers. See how to use the strace command under Linux to debug problems.

Recommended readings:

Stay tuned for a gdb tutorial which will explain how to use a generated core file to track down problems.


Email with Attachment to trash folder

SkyHi @ Friday, September 10, 2010
I have read carefully and created several filters to improve the chances of having only “real” mail in my Inbox. Yahoo filters are improperly directing properly addressed and desired mail to the Trash, where I had designated the spam to go.

Can you suggest what is amuck?

Have you had the same experience and found a solution?

Thanks in advance.

Reply to Dr. Dave: there are several reasons that mail can go directly to the Trash folder. One of them is the filter settings. As Dare2 said, it can be because the encoding contains the "words" that are automatically sent to Trash. I agree with this point, since we ran many tests on this problem and finally realized that some filter settings will send other mails directly to the Trash folder even when there is no exact match with the filter. We ran another test based on the same set of attachments, and the findings told us it may be caused by an encoding problem. Just a message to let you know that it is possible for a mail to go directly to the Trash folder even if nobody clicks the "This is Spam" or "Delete" button. Hope this helps, thanks.

Read this to learn why messages with attachments are filtered to the trash can, and what you can do to stop it from occurring.

Messages with attachments are routed to my trash folder. Why?

Messages containing attachments that are repeatedly routed to your trash folder are most likely being filtered based on the type of encoding associated with the attachment.

Yahoo! Mail uses MIME (Multi-Purpose Internet Mail Extensions) encoding for messages sent with attachments. MIME is the email protocol that lets people exchange different kinds of data files on the Internet.

For example, if a filter is created to route messages that contain “DVD” to the trash folder, the letters may appear in the encoding associated with the attachment. Therefore, any message with an attachment that has this appearing in the encoding is going to be routed to the trash folder. This can be resolved by editing the filter.

To edit a filter:

Log into Yahoo! Mail at

Click “Mail Options” in the upper-right area of the page.

Click the “Filters” link.

Select the appropriate filter, then click “Edit.”

Where you have an entry that is three letters, such as DVD, VHS, CD, etc., enter a space before and after the letters. Do this by placing the cursor before the letters and hitting the spacebar once, then placing the cursor after the letters and hitting the spacebar again.

Please Note: Messages containing attachments will now be routed to your inbox. If you see messages routed to the trash folder, check to ensure you do not have other filters in place that may be causing messages to be routed to another location other than the inbox.


Thursday, September 9, 2010


SkyHi @ Thursday, September 09, 2010
Occasionally, you may receive an email of this kind:

This text is part of the internal format of your mail folder, and is not
a real message. It is created automatically by the mail system
software. If deleted, important folder data will be lost, and it will be
re-created with the data reset to initial values.


The reason:

Using an IMAP client (e.g. webmail) and a POP client on the same mailbox can lead to this message being generated.
The message is created automatically by the IMAP mail system for internal record keeping and is ignored (not displayed) by most email clients.

You can ignore this message. If this message is downloaded to your desktop via a POP client, you can delete it.

Yes, you can delete this message. It is generated every time you use the webmail interface. Some technical information:
Webmail uses a protocol called IMAP to retrieve mail. Your Eudora, Outlook, etc. program at home uses POP3 to retrieve mail. The little postman working with IMAP generates that message every time you go to webmail if it doesn't see it there, because it thinks it needs it. The little postman working for POP3 sees that email, thinks it's new mail, and downloads it. Get it? It's easy: every time you take that message away with POP3, the IMAP guy puts it back. Nothing we can do about this; we didn't invent the protocols.
Solution 1: Don't worry about it and delete it every time you see it.
Solution 2: If it really bothers you:
1. Stop using webmail. Not recommended if you are using the spam filters.
2. Set up your software at home to retrieve mail via IMAP instead of POP3. (Please don't start sending email asking how to do this; you're on your own.)
I personally use Solution 1.
I hope this puts an end to your questions about this and clarifies it a bit for you.


Delete a Message Stuck in the Outbox

SkyHi @ Thursday, September 09, 2010
If you attempt to send a message that is too large for your
mail server you may be unable to delete it from the Outbox. An error dialog
will say that the MAPI spooler has begun sending the message.

First, try setting Outlook offline using the File, Work Offline menu. Wait
about 5 minutes, or close Outlook and reopen it.

If you are unable to delete the message while in offline mode or cannot
go into offline mode, you'll need to change your default delivery location.

The following steps are for Outlook 2002/2003; the steps for other
versions are similar, although the menus differ. For Outlook 2007
and 2010, see Delete a Message Stuck in the Outbox in Outlook 2007/2010.

  1. Add a new PST using the File, New, Outlook Data File menu.

  2. Open the Email Accounts dialog on the Tools menu.

  3. Select "View or change existing email accounts" and click Next.

  4. Select the new personal folders file from the "Deliver new email to the
     following location" list.

  5. Click Finish and restart Outlook.

  6. Show the folder list, using Ctrl+6 if necessary, and find the old
     Outbox. Delete the message.

  7. Repeat steps 2, 3, 4, and 5 to restore the original pst as the
     default delivery location.

  8. Show the folder list and move any new messages from the new pst
     to the original pst.

  9. Right click on the new folder's name and choose Close to remove it from your
     profile.



Move it to the Draft Folder

And then set about removing the contents from there. If you
are unable to delete it there, you can change its destination to
yourself, send it as a test message with everything removed, and then delete it
from the Sent Items and received items.

Of course, if this is on a network with a server, you may have problems
getting rid of it depending on what mail system is being used. But if you
still have it locally, you can move it to the Drafts folder and work on it
from there.


Delete an item stuck in the Outbox folder:
- Load Outlook.
- Put Outlook in offline mode (File -> Work Offline: enable).
- Exit Outlook.
- Load Outlook in its safe mode ("outlook.exe /safe").
- Delete the stuck item in the Outbox folder.  If you do not want the
item to move into the Deleted Items folder, use Shift+Del to
permanently delete the item.
- Put Outlook in online mode (File -> Work Offline: disable).
- Restart Outlook in its normal mode.

E-mail is NOT a reliable file transfer mechanism.  It was not intended or
designed for that.  It was designed to send lots of small messages.  There
is no CRC check on the file to ensure integrity.  There is no resume to
re-retrieve the file if the e-mail download fails.  There is no guarantee
the e-mail will arrive uncorrupted.  Large e-mails can generate timeouts and
retries due to the delay when anti-virus programs interrogate their content.
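One reason large attachments behave so badly: MIME transports them as
base64 text, which inflates every attachment by about a third before any
anti-virus delay or retry is even counted. A quick sketch:

```python
import base64

payload = b"\x00" * 3_000_000           # a 3 MB binary attachment
encoded = base64.b64encode(payload)     # what actually travels over SMTP
overhead = len(encoded) / len(payload)  # ~1.33, before MIME line breaks
print(f"{len(payload)} bytes become {len(encoded)} bytes on the wire "
      f"({overhead:.0%} of the original size)")
```

Every 3 bytes of the file become 4 bytes of base64, so a "10 MB limit" at a
mail provider really means an attachment of roughly 7.5 MB.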

Do not use e-mail to send large files.  It is rude to the recipient.  Not
every recipient might want your large file.  Not every recipient has
high-speed broadband Internet access.  Many users still use slow dial-up
access, especially if all they do is e-mail.  You waste your e-mail
provider's disk space and their bandwidth to send a huge e-mail.  You waste
the e-mail provider's disk space and bandwidth at the recipient's end.  You
eat up the disk quota for the recipient's mailbox (which could render it
unusable, so further e-mails get rejected due to a full mailbox).  You
irritate users still on dial-up who have to wait eons to download your huge
e-mail.  Some users have usage quotas (e.g., so many bytes per month), and
you waste that quota with a file they may not even want.  Don't be
insensitive to recipients of your e-mails.  Take the large file out of the
e-mail.

Save the file in online storage and send the recipient a URL link to the
file.  Your e-mail remains small.  It is more likely to arrive.  It is more
likely to be seen.  The recipient can decide whether or not and when to
download your large file.  Be polite by sending small e-mails.

Your ISP probably allows many gigabytes of online storage for personal web
pages.  Upload your file there and provide a URL link to it.  Free online
file-hosting services are another option; maximum file sizes vary by
service, from roughly 300MB up to 10GB per file.

If the content is sensitive, remember to encrypt it before storing it in a
public online storage area, both to keep it private in transit and to guard
it against whoever operates the online storage service.