Tuesday, June 8, 2010

how to tune Apache so as to survive a Slashdotting

SkyHi @ Tuesday, June 08, 2010
Like many techno-geeks, I host my LAMP website on a cheap ($150) computer and my home broadband connection. I have often wondered what would happen if my site were linked on Slashdot or Digg. Specifically, would my setup survive the "Slashdot Effect"? A Pentium at 100 MHz can easily saturate a T1's worth of bandwidth, and my upload speed is (supposedly) capped at 384 kbps, so the server should easily keep up. My bandwidth will be saturated long before the server is incapacitated; at least, that's the idea.
The machine I use for my web server is a $150 PC that I bought from Fry's one day (I always buy their $150 PC's when they're in stock). Here are the relevant specs on my little server:
CPU: AMD Athlon 2600+
RAM: 512MB
Hard Drive: 40GB 7200RPM
Software: Debian Linux, MySQL, Apache, PHP, WordPress
There is additional software installed on this machine because it is also used as a desktop computer. However, none of that software is important for the purposes of this article.
The RAM has been upgraded since I purchased the machine from Fry's because it originally came with 128MB, which is a little low for my tastes. The only other upgrade was a new CPU fan and that was out of personal preference, the default fan was just too loud.
Below are some directives in my httpd.conf and some general recommendations that I think are vital to helping you survive a good Slashdotting on low-budget hardware.
  • MaxKeepAliveRequests 0 The KeepAlive directive in httpd.conf allows persistent connections to the web server, so that a new connection does not have to be initiated for each request. Setting the MaxKeepAliveRequests directive to 0 allows an unlimited number of requests per connection, which makes sense if you think about it: why allow persistent connections but then terminate them after a fixed number of requests?
  • KeepAliveTimeout 15
    Because persistent connections are allowed, it is important that they are not kept open indefinitely. This directive will close the connection after 15 seconds of inactivity.
  • MinSpareServers 15
    This is the minimum number of spare servers you want running at any given time. This way, if multiple simultaneous requests are received there will already be child processes running to handle them. Setting this number too high is a waste of system resources and setting it too low will cause the system to slow down.
  • MaxSpareServers 65
    Same as above, but the maximum child processes running at any given time.
  • StartServers 15
    This is the number of servers Apache will start initially. As more servers handle requests, a minimum of 15 spare servers will be kept running, up to the maximum of 65.
  • MaxClients 500
    This is the maximum number of simultaneous clients that can connect to the server at any given time. Setting this number too low will result in users being locked out of the server under normal traffic situations and setting it too high will result in your server being so overloaded that all the requests timeout anyway. I think 500 is about right for most people's needs.
  • MaxRequestsPerChild 100000
    Sets the maximum number of requests each child process will handle. This is mostly to prevent memory leaks and other mishaps but is important nonetheless. Setting this too low will cause a large portion of child processes to end for no real reason, thus slowing down the site. This could be set to 0 (unlimited) but that would negate any protection from valid issues like memory leaks.
  • HostnameLookups off
    This prevents reverse DNS lookups for every visitor to the site; I am pretty sure it's off by default. If it's on in your httpd.conf, I recommend turning it off.
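Putting the directives above together, the relevant section of my httpd.conf looks roughly like this (the values are the ones recommended above, so adjust for your own hardware; KeepAlive On and the ServerLimit line are implied by the discussion rather than listed explicitly):

```apache
# Keep-alive settings
KeepAlive On
MaxKeepAliveRequests 0      # unlimited requests per connection
KeepAliveTimeout 15         # close idle connections after 15 seconds

# Process pool settings (prefork MPM)
StartServers 15
MinSpareServers 15
MaxSpareServers 65
ServerLimit 500             # must be >= MaxClients in Apache 2.x
MaxClients 500
MaxRequestsPerChild 100000

# Miscellaneous
HostnameLookups Off
```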

I minimize graphics on my site, and use css instead (where I can). This is pretty easy with WordPress, depending on which theme you use. I stay away from themes with a lot of images and I tend not to put any in my posts either. They're just too much of a drain on bandwidth, especially if you have a lot of traffic. On top of all that, I don't really like seeing graphics when I go to other sites. Most of the time they just get in the way of the information.

As far as static pages go, that isn't much of an option for me. Everything in WordPress is dynamically pulled from the database unless certain plug-ins are installed and since my upload speed is the main bottleneck in my implementation, static pages aren't really a factor. However, if you have a faster upload speed, then having a cache of static pages would speed things up for you.


Another thing that will help with bandwidth if you submit a link to one of the larger sites (Digg, Slashdot, etc.) is to use CoralCDN. CoralCDN is essentially a caching/proxy service that will reduce the drain on your bandwidth. All you have to do to use it is append ".nyud.net:8090" (without the quotes) to the hostname of any link you submit. All requests for that link will then be automatically routed through CoralCDN.
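A quick way to "coralize" a link before submitting it — a minimal sketch using sed, with a hypothetical example URL (the suffix goes on the hostname, before the path):

```shell
# Append .nyud.net:8090 to the hostname portion of a URL
url="http://example.com/article.html"   # hypothetical URL
coral=$(echo "$url" | sed -E 's#^(https?://[^/]+)#\1.nyud.net:8090#')
echo "$coral"   # http://example.com.nyud.net:8090/article.html
```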


Those are just a few things you can do to help avoid having your server killed by Slashdot or Digg. The Apache configuration changes are important, but so is having a simple site that is low on graphics and other bandwidth intensive content. I'm sure there are many other things that can be done, and I don't claim to be an expert in this field (hence the general recommendations). So far, the previously mentioned things have helped my site stay up under some heavy traffic, but I have yet to be Slashdotted (thankfully?). If anyone else has recommendations on additional precautions I can take, I'm more than happy to hear them. If this gets onto Digg, Slashdot, etc. then it will be a good test of the things I've mentioned. We'll have to wait and see.




echo '##### Apache Load Check #####'
watch "echo 'CTRL+C TO EXIT' ;\
clear ;\
echo vmstat ;\
vmstat ;\
echo Load ;\
w ;\
echo Apache Processes ;\
ps -elf | grep 'http' | wc -l ;\
echo Active Apache Connections ;\
netstat -nalp | grep ':80 ' | grep 'ESTABLISHED' | wc -l ;\
echo Apache Connections ;\
netstat -nalp | grep ':80 ' | wc -l ;\
echo SYN Connections ;\
netstat -nalp | grep 'SYN' | wc -l ;\
echo IPCS ;\
ipcs | grep 0x0 | wc -l"
echo '##### End Apache Load Check #####'



If you see a lot of connections stuck in the SYN state, you still need to increase MaxSpareServers:
 #netstat -nalp | grep ':80 ' | grep SYN



REFERENCES


http://blogcritics.org/scitech/article/open-source-configuring-apache-dont-succumb/

Tuning Apache

SkyHi @ Tuesday, June 08, 2010
There was a link on Digg a couple of days ago to an article about how to tune Apache so as to survive a Slashdotting. After reading it through, I came to the conclusion that the author had no idea what he was talking about. Not only did he admit that he had never experienced the "Slashdot Effect", but his advice was just plain wrong. I offered a few comments there, but I figured that I should elaborate on a few of them here. I'll post each major configuration topic as a new blog entry, and today's entry is about HTTP's Keep-Alive feature.

A brief history of Keep-Alives
The original HTTP protocol did not allow keep-alives, which meant that a connection was made to the server for each file that needed to be downloaded. This was a very inefficient method of doing things, especially since web pages typically had several files that needed to be downloaded in order to be properly displayed. Why was it inefficient? For two reasons:
  1. Each connection requires an overhead of at least 3 packets to be initiated (the SYN, SYN-ACK, and ACK of the TCP handshake). That handshake costs a full round-trip before any data can be sent, which obviously slowed things down.
  2. Due to the nature of TCP, which underlies HTTP, a connection gets "faster" the longer it is open, because TCP's slow-start algorithm gradually widens the congestion window. By continuously opening and closing new connections, HTTP could never fully utilize the available bandwidth.
The designers of HTTP realized this weakness in the protocol, and took steps to correct it in the next version of HTTP. This new version of HTTP incorporated the concepts of keep-alives, where a client could keep a connection to the web server open indefinitely, or at least as long as the server permitted. Although this somewhat went against HTTP's original design goal of being "stateless", it allowed for it to overcome its speed and overhead problems.

A brief introduction to Apache
Now let's examine how Apache works. When you start Apache, a main "coordinator" process is created. This main process is responsible for accepting incoming connections and passing them off to "worker" processes that it creates. These workers then read users' requests and send back responses. Once a worker is done servicing a user's requests, it reports back to the main process and then waits for a new connection to be handed to it.

Apache and Keep-Alives
So, in theory, keep-alives are a great thing. They allow web clients and servers to fully utilize their available bandwidth, and they reduce latency by eliminating the overhead of frequently opening new connections. In a perfect world, you would want Apache's KeepAliveTimeout setting to be "infinity", so that web clients maintain a connection to the web server for as long as possible and everything on your web site loads as fast as possible.

Apache allows you to configure its behavior in regard to keep-alives through a few options in its configuration file:
  • KeepAlive: either On or Off, depending on whether Apache should allow connections to be used for multiple requests
  • KeepAliveTimeout: how long, in seconds, Apache will wait after a request has been answered for another request before closing the connection
  • MaxKeepAliveRequests: how many total requests a client can issue across a single connection
  • MaxClients: the total number of worker processes that Apache will allow at any given time
The default Apache configuration file sets KeepAlive to be on, with a KeepAliveTimeout of 15 seconds and MaxKeepAliveRequests of 100. The MaxClients setting is set to 150.

Apache meets its match
Unfortunately, nothing in life is free, not even keep-alives. Each client connection requires Apache to create (or use a waiting) worker process to service its requests. These worker processes can only handle one connection at a time, and each connection will last at least 15 seconds. Apache will create a new worker process for each new connection until it hits its limit of MaxClients at 150. Thus, the cost of a keep-alive is one worker process for the KeepAliveTimeout.

Now imagine what happens when 1,000 web clients try to access your web site at the same moment (e.g. when it first shows up on Slashdot). The first 150 clients will successfully connect to your web server, because Apache will create workers to service their requests. However, those web clients do not immediately leave; after they've downloaded your page, they will hold open their connections for 15 seconds until your server forces their connection to close. The next 850 clients will be unable to access the web server, as all of the available Apache worker processes will be used up, waiting for 15 seconds on the unused connections to the first 150 clients. Some of those 850 clients will queue up and wait for an available Apache process to service their request, but most will give up.
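The arithmetic behind that bottleneck is worth making explicit. If every worker spends the full KeepAliveTimeout pinned to an idle connection, the rate of new clients the server can absorb is capped at MaxClients divided by the timeout — a back-of-envelope sketch using the default values from the text:

```shell
# Defaults discussed above: 150 workers, 15-second keep-alive timeout
max_clients=150
keepalive_timeout=15
# Worst case: each worker serves only one client per timeout window
echo $(( max_clients / keepalive_timeout ))   # 10 new clients per second
```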

Perhaps some readers are wondering why you wouldn't just increase the MaxClients setting to something high enough to handle your peak load, like 2000 or something. This is a very bad idea; you can increase Apache's MaxClients, but only at your own peril. Because each Apache process consumes a bit of memory, you can only fit a certain number in memory before the web server begins to violently thrash, swapping things between RAM and the hard drive in a futile attempt to make it work. The result is a totally unresponsive server; by increasing MaxClients too high, you will have caused your own demise. I will talk about how to figure out a good value for MaxClients in a future post, but a good rule of thumb might be to divide your total RAM by 5 megabytes. Thus, a server with 512 megabytes of RAM could probably handle a MaxClients setting of 100. This is probably a somewhat conservative estimate, but it should give you a starting point.
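The rule of thumb from the paragraph above, as arithmetic (the 5 MB per-process figure is the author's rough estimate, not a measured value):

```shell
ram_mb=512        # total RAM in MB
per_proc_mb=5     # rough memory footprint per Apache worker
echo $(( ram_mb / per_proc_mb ))   # 102, i.e. roughly the MaxClients of 100 suggested above
```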

A partial solution
So how do you fix the problem, other than by adding many gigabytes of RAM to the server? One easy way to get around this limitation is to either reduce the KeepAliveTimeout to a mere second or two, or else to simply turn KeepAlive off completely. I have found that turning it down to 2 seconds seems to give the client enough time to request all of the files needed for a page without having to open multiple connections, yet allows Apache to terminate the connection soon enough to be able to handle many more clients than usual.
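In httpd.conf, that compromise looks like this (the 2-second value is the author's empirical choice, not a universal default):

```apache
KeepAlive On
KeepAliveTimeout 2    # long enough for one page's worth of requests,
                      # short enough to free the worker quickly
```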

One interesting thing of which to take note is what the major Apache-based web sites allow, in terms of keep-alive timeouts. In my (very brief) experiments, it seems that CNN, Yahoo, craigslist, and Slashdot don't permit keep-alives at all, while the BBC has a very short keep-alive timeout of under 5 seconds. On the other hand, there are several other major Apache-based sites that do use a large keep-alive timeout (Apple, CNET, etc...), but they may have decided that they would prefer to take the performance hit so that they can have the "fastest" possible web sites.

Of course, this isn't a perfect solution. It would be nice to have both high performance and long-lived client connections. Apache 2.2, from what I understand, includes an experimental module (the Event MPM) that allows keep-alives to be handled very efficiently. If it turns out to work well, it could be a near-perfect solution to the problem. It does have its problems (i.e. it seems to require a threaded MPM, which is not recommended if you use PHP), but it could be incredibly useful in some situations.




=================================================================

Configure Common Apache Options

Related Documentation
Apache 2.0 Documentation

Purpose

The following procedures explain how to configure Apache server options using InterWorx. The most common configuration options are exposed in the InterWorx Cluster Panel interface. As with many of the system services, a system administrator still retains the ability to configure the service by editing the configuration file.

Procedure - Change A Commonly Configured Web Server Option

  1. Click the Icon System Services menu item if it is not already open.
  2. Click the Icon Web Server menu item.
  3. Locate the Apache Server Options section.
  4. Change the option(s) you wish to update to the desired value(s).
  5. Click the button.
  6. You will see the following set of messages at the top of the screen:
    Service successfully restarted
    Apache options updated successfully

Apache Server Options

Max Clients

The maximum number of simultaneous requests that will be served. Possible values range from 1 to 20000.
Apache docs on Max Clients directive

Server Limit

The maximum configurable value for the MaxClients directive for the lifetime of the Apache process. Possible values range from 1 to 20000.
Apache docs on Server Limit directive

Start Servers

The number of child server processes created on server startup. As the number of processes is dynamically controlled depending on the load, there is usually little reason to adjust this parameter. Possible values range from 1 to 20000.
Apache docs on Start Servers directive

Spare Servers (min)

The minimum number of idle child processes. An idle process is one which is not handling a request. Possible values range from 1 to 20000.
Apache docs on the Minimum Number of Spare Servers

Spare Servers (max)

The maximum number of idle child processes. An idle process is one which is not handling a request. Possible values range from 1 to 20000. The spare servers directives are used to help deal with spikes in web traffic.
Apache docs on the Maximum Number of Spare Servers

Max Requests per Server

The limit on the number of requests that an individual child server process will handle. Setting this to 0 will allow for an unlimited number of requests to be handled. Possible values range from 0 to 1000000.
Apache docs on Maximum Requests Per Child Process

Timeout

The amount of time Apache will wait for any of three things:
  • The total amount of time for a GET request to be received.
  • The amount of time between receipt of TCP packets on a POST or PUT request.
  • The amount of time between ACKs on transmissions of TCP packets in responses.
Apache docs on TimeOut directive

Keepalive

Turning the Keepalive directive on will provide long-lived HTTP sessions which allow multiple requests to be sent over the same TCP connection. In some instances, turning keepalive on has resulted in a reduction in latency for HTML documents containing many images.
Apache docs on Keepalive directive

Keepalive Requests

The number of requests allowed per connection. If this is set to 0, an unlimited number of requests is allowed.
Possible values range from 0 to 65536.
Apache docs on Keepalive Requests directive
This value will only be able to be changed if Keepalive is set to on.

Keepalive Timeout

The number of seconds Apache will wait for a request before closing the connection.
Apache docs on Keepalive Timeout directive
This value will only be able to be changed if Keepalive is set to on.

REFERENCES
http://virtualthreads.blogspot.com/2006/01/tuning-apache-part-1.html
http://www.interworx.com/support/docs/nodeworx/http/howto-edit-options
http://warrenkonkel.com/2007/5/14/apache-tuning-maxclients-and-keep-alive

Tuning a MySQL server in 5 minutes

SkyHi @ Tuesday, June 08, 2010

How to get MySQL to run optimally with the resources you have, or to scale MySQL down in resource-constrained environments.

Continuing with the Server management series, this time we'll learn how to tune a MySQL server to handle high server loads. Obviously, this piece assumes that you're using MySQL to serve a dynamic site. If this is not the case, you'll still find this article useful, but you'll have to derive your own interpretations out of it.

If you recall the article titled Tuning an Apache server in 5 minutes, you'll also know that there's a tunable for Apache which lets you set the maximum number of Apache processes that run on your server. Once you've tuned Apache, it only makes sense to tune MySQL to handle that many connections simultaneously.

Before you go on, "Tuning a MySQL server in 5 minutes" is indeed an exaggeration; I'll concede that. Database tuning is so much more than what this article covers. I don't mean to disrespect DBAs: they usually perform large amounts of magic to get databases from abysmal to top-dog performance. But the first step to having a site that doesn't break under traffic surges is usually what I'm about to discuss.

Tips for very high loads

Okay, on to our business. First off, if you're handling a very large number of simultaneous connections to your Apache server (in the order of 250 or higher), it would make sense to offload the database processing to a different server. That way, you'll have more control over loads exerted by Apache and by MySQL, separately.

If you're short on money, that's of course not an option. Keep reading to find out an acceptable compromise then.

The (important) differences between static and dynamic page loads

For the purpose of this article, we'll name two distinct types of connections to your Web server:

Dynamic requests
Any request to your Apache server that causes a MySQL connection to be opened and database queries to be emitted. A good example is a PHP page which requests a list of products from your database.
Static requests
Any request to your Apache server which doesn't incur the cost of a MySQL connection. Examples of these are static HTML pages or file downloads.

And, of course, you'll need to discriminate between those two.

Figuring out the right maximum number of MySQL connections

Usually, a good starting estimate is one dynamic request for every 5 requests. That's because most pages also load CSS style sheets and images, although those files are often not re-fetched on subsequent requests from the same visitors (thanks to browser caching). To get an exact number for this ratio, however, you'll need to analyze your Apache access_log (manually, or via the well-known Analog or Webalizer log analysis packages).
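As a sketch of that log analysis, here's a small awk one-liner run against a few inlined sample log lines (the sample entries and the .php test are illustrative assumptions; point it at your real access_log and adjust the pattern to match your dynamic pages):

```shell
# Build a tiny sample access log (abbreviated common log format)
cat > /tmp/sample_access_log <<'EOF'
1.2.3.4 - - [08/Jun/2010:10:00:00] "GET /index.php HTTP/1.1" 200 5120
1.2.3.4 - - [08/Jun/2010:10:00:01] "GET /style.css HTTP/1.1" 200 812
1.2.3.4 - - [08/Jun/2010:10:00:01] "GET /logo.png HTTP/1.1" 200 4301
1.2.3.4 - - [08/Jun/2010:10:00:02] "GET /about.php HTTP/1.1" 200 4096
EOF
# Count dynamic (.php) requests vs. all requests; field $6 is the request path here
awk '{ total++ } $6 ~ /\.php/ { dyn++ } END { printf "%d dynamic of %d total\n", dyn, total }' /tmp/sample_access_log
```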

Once you've arrived at an accurate estimate for your scenario, multiply that ratio by the maximum number of connections you've configured on your Apache server. For example, if your Apache server is serving a maximum of 256 clients (which is a lot), and your ratio of dynamic requests vs. all requests is 1/8, you'd have an expected maximum of 32 database connections. Just to be on the safe side, multiply that by two, and you'll have a foolproof figure. But if you want to be really, really certain, you should always expect a maximum of 256 database connections.
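The worked example from the paragraph above, spelled out as shell arithmetic:

```shell
apache_max=256            # MaxClients on the Apache server
ratio_denom=8             # one dynamic request per 8 total requests
conns=$(( apache_max / ratio_denom ))
echo "$conns"             # 32 expected DB connections
echo $(( conns * 2 ))     # 64 with the suggested 2x safety margin
```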

Setting the maximum connections on your MySQL server

Using your favorite text editor, as root, open up the /etc/my.cnf file (the location of the file may vary according to your distribution). You should see something like this:

[ecuagol@216-55-181-30 ~]$ cat /etc/my.cnf
[mysqld]
safe-show-database
innodb_data_file_path=ibdata1:10M:autoextend
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

[mysql.server]
user=mysql
basedir=/var/lib


[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

We'll be dealing with the [mysqld] section. Under that section, add two new parameters (or modify them, if they are already there and they aren't commented):

  • set-variable = max_connections = 60
  • set-variable = max_user_connections = 60

MySQL's defaults for these values are fairly conservative (older releases default max_connections to 100, and max_user_connections to 0, meaning unlimited per user). Evidently, you'll be replacing 60 with your own expected maximum number of connections. With these settings, you'll be on the safe side.

The reason you're setting both max_connections and max_user_connections is because, generally, Apache appears to your MySQL database server as one single user. So, you need to raise them both.

You may also want to increase other parameters, if you'll be expecting heavy loads or unusual queries:

  • set-variable = max_allowed_packet=1M (sanity check to stop runaway queries)
  • set-variable = max_connect_errors=999999
  • set-variable = table_cache=1200

Where 1200 should be max_connections multiplied by the maximum number of tables joined in your heaviest SQL query.
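Pulled together, the [mysqld] section with all of the settings discussed above might look like this (the old set-variable syntax matches MySQL versions of the era; newer versions accept plain variable = value lines):

```ini
[mysqld]
set-variable = max_connections=60
set-variable = max_user_connections=60
set-variable = max_allowed_packet=1M
set-variable = max_connect_errors=999999
set-variable = table_cache=1200
```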

After tuning your server, restart MySQL (/sbin/service mysqld restart usually does the trick on Fedora Core).

Conclusions and final words

That's it! Hope I'll see you around for the next installment. By the way, if you spot any inaccuracies or errors, feel free to comment on it using the comment form right below this article. Happy hacking!


REFERENCES

http://rudd-o.com/en/linux-and-free-software/tuning-a-mysql-server-in-5-minutes

Tuning an Apache server in 5 minutes

SkyHi @ Tuesday, June 08, 2010

How to get Apache to run fine without stampeding occurring in high-traffic, low-resource situations.

Hello again. This time, I'll show you how to make a Web server running Apache and Linux survive heavy loads.

Before we go on, you should know something: this is not an article about securing Apache. This is an article about making Apache behave under heavy load conditions.

Okay, now that we're here, let's discuss scalability.

Scalability

Scalability is simply the ability of a server to withstand heavy loads. If you tried to read the last article, Hardening a Linux server in 10 minutes, you probably noticed that this server was down.

That's a scalability fault.

Let's put it in another light. This server has 512 MB of RAM. The surge of traffic (thanks to LinuxToday links pointing to this site) caused the server to fail (more accurately, the MySQL server appeared to hang). Brag all you want about Linux's ability to survive these events, nothing will help you against a misconfigured server.

It all boils down to configuration

In this particular case, the misconfiguration was Apache's. Weighing 13 MB per httpd process (though some of it is shared with other processes), it's pretty simple to understand that a runaway Apache server can bring your server down completely. When your Apache server starts serving a lot of requests, all those processes quickly fill the available memory (physical and virtual). When your Linux server runs out of RAM, it will start killing processes it deems 'memory hogs'. Usually the first ones to go down are the MySQL processes. If you're serving dynamic pages, that's a disaster.

On to Apache configuration

By default, Apache comes preconfigured to serve a maximum of 256 clients simultaneously. This particular configuration setting can be found in the file /etc/httpd/conf/httpd.conf (though the location of the file may vary, depending on the Linux distribution you use).

Whip your favorite text editor out and open that file (remember that you should be doing this as root — the administrative account on the majority of Linux servers out there).

Look for MaxClients. It will probably look like this:

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves

StartServers 4
MinSpareServers 3
MaxSpareServers 10
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 10000

That's the configuration section for the prefork module. 99% of the Apache servers out there use the prefork module to serve requests, so unless you have an exotic configuration, you'll be changing these settings.

Time to calculate a good value for the MaxClients directive. Find out how much memory your Apache processes use. Using top, check the RES column; that's the resident set size, the amount of physical memory each Apache process is actually using, in megabytes. In my example, it's 22m.

Figure out a good value. If your server has 512 MB of RAM (in my case, this is true), and you're sharing your server with MySQL and Sendmail (true in my case, as well), you'll want to reserve about half of it for Apache (256 MB). Divide that by the resident memory each process takes up, and you'll have a number of processes (say, 11). That's the maximum amount of processes you can run without resorting to virtual memory. Resorting to virtual memory (swap) will make your server thrash and become extremely slow.
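The same calculation as shell arithmetic, using the figures from this example:

```shell
apache_ram_mb=256      # half of the 512 MB box, reserved for Apache
res_per_proc_mb=22     # resident size per httpd process, from top's RES column
echo $(( apache_ram_mb / res_per_proc_mb ))   # 11 processes before swapping
```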

It's, of course, all about balance. If you have one gigabyte of swap, you may want to raise the number of Apache processes. Raising it too much will cause heavy traffic to spawn lots of Apache processes, bringing your server down.

Setting the MaxClients and ServerLimit directives

You now have your start value (in our example, it was 11). Change the MaxClients and the ServerLimit directives to it. Save the file and restart Apache (/sbin/service httpd restart does that trick in Fedora Core).

Now it's time to start testing. Keep a root login open to that server. Using your favorite testing tool (ab and wget are good at this), start a storm of connections (more than 1024 simultaneous requests) directed to a page served by your Apache server (ideally, one that exercises the server, like dynamic pages with lots of queries). Issuing the uptime command in your root login should not yield a load average above 1, and the server should respond to commands quickly.

[rudd-o@amauta2 conf]$ uptime
15:54:18 up 1:41, 3 users, load average: 0.86, 0.70, 1.50

Tuning the configuration

That's great. Once the test is finished, double MaxClients and StartServers, and try your storm test again. The load average should stay low.

Keep tuning until you hit your maximum desired load average. For servers used interactively often, having a load above 3 is way too much to use the server comfortably. For servers used mostly as real servers, a maximum load average of 10 should be acceptable. More than that, and you'll find yourself needing to reboot the server when experiencing heavy traffic conditions, because no terminal or remote console will respond quickly to commands, and managing the server will be impossible.

Conclusions

That's it! With practice, you'll be able to skip the memory math and learn the ideal setting for any server. Other tuning options you may try (in order of diminishing returns):

  • Eliminating unnecessary Apache modules from the configuration (perhaps uninstalling them altogether, by use of RPM or your favorite distribution's packaging tool)
  • Recompiling Apache, optimizing for memory consumption (the -Os option of gcc)
  • Recompiling Apache, building modules in instead of having them run as modules

Remember: if you have any questions or suggestions, please leave them as comments below. Happy hacking!


REFERENCES

http://zfs-fuse.net/en/linux-and-free-software/tuning-an-apache-server-in-5-minutes

Should MX record point to CNAME records (aliases)?

SkyHi @ Tuesday, June 08, 2010
Though the practice of pointing MX records to CNAME (alias) records is not that uncommon, it certainly isn't in keeping with internet standards.

When you point a MX record to a CNAME, you're in fact inviting double the DNS traffic to your DNS servers. Try this by performing a name resolution query using nslookup:

>nslookup -querytype=MX somedomain.com
somedomain.com MX preference = 5, mail exchanger = mx1.somedomain.com
somedomain.com MX preference = 10, mail exchanger = mx2.somedomain.com
mx2.somedomain.com internet address = 64.31.212.21

As you can see from the above query, the record mx1.somedomain.com is not resolved to an IP address. This is because it's a CNAME.

To resolve the CNAME, the sender's DNS server will have to perform a second query.

Not only is that inefficient, it is in fact explicitly prohibited by RFC 2181.
Section 10.3 of RFC 2181 states:

10.3. MX and NS records

The domain name used as the value of a NS resource record, or part of the value of a MX resource record must not be an alias. Not only is the specification clear on this point, but using an alias in either of these positions neither works as well as might be hoped, nor well fulfills the ambition that may have led to this approach. This domain name must have as its value one or more address records. Currently those will be A records, however in the future other record types giving addressing information may be acceptable. It can also have other RRs, but never a CNAME RR.

Searching for either NS or MX records causes "additional section processing" in which address records associated with the value of the record sought are appended to the answer. This helps avoid needless extra queries that are easily anticipated when the first was made.

Additional section processing does not include CNAME records, let alone the address records that may be associated with the canonical name derived from the alias. Thus, if an alias is used as the value of an NS or MX record, no address will be returned with the NS or MX value. This can cause extra queries, and extra network burden, on every query.
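In zone-file terms, the rule boils down to this (hypothetical names; the point is that the MX target is a name with an A record, never a CNAME):

```
; Correct: MX target is a name with an A record
example.com.        IN  MX  10  mx1.example.com.
mx1.example.com.    IN  A       192.0.2.10

; Wrong (violates RFC 2181 section 10.3): MX target is an alias
; example.com.      IN  MX  10  mail.example.com.
; mail.example.com. IN  CNAME   mx1.example.com.
```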

I'd always assumed that not pointing MX records to CNAME records is merely a best practice or recommendation, not a requirement. I stand corrected, as DNS geek Dmitri (Zenprise seems to have more than its fair share of these :) pointed out.

REFERENCES
http://exchangepedia.com/blog/2006/12/should-mx-record-point-to-cname-records.html

install PECL APC Cache on CentOS

SkyHi @ Tuesday, June 08, 2010

As Irakli already discussed, the Alternative PHP Cache (APC) is an op-code pre-compiler and cache system that can boost the performance of a PHP application up to 10 times. Op-code caches are very effective for a Drupal website, since Drupal deals with a large number of source files and the time spent parsing them significantly affects performance. If you don't have XAMPP and need to install APC on CentOS, you can follow this guide to get around some of the problems that occur with the default server settings.

Install Pre-reqs

Using yum, install the required prerequisites.
sudo yum install php-devel php-pear httpd-devel


Install APC



Use the command




sudo pecl install apc


at this point you’ll likely see the error




Fatal error: Allowed memory size of 8388608 bytes exhausted (tried to allocate 92160 bytes) in /usr/share/pear/PEAR/PackageFile/v2/Validator.php on line 1831


Apparently, the PECL/PEAR scripts do not use the settings from /etc/php.ini, so you need to update PEAR’s memory settings to give it some more breathing room. Edit the file /usr/share/pear/pearcmd.php and add the following at the beginning:



@ini_set('memory_limit', '16M');


Configure/Restart



Now configure PHP to load the new extension. Create the file /etc/php.d/apc.ini with the following command:



echo "extension=apc.so" > /etc/php.d/apc.ini
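Beyond just loading the extension, apc.ini is also where APC is tuned. A sketch of commonly adjusted directives follows; the values are illustrative starting points, not recommendations for every site.

```ini
extension=apc.so
; shared memory (in MB) allocated to the op-code cache
apc.shm_size=64
; seconds a cache entry may stay after its last access (0 = never expire)
apc.ttl=7200
; skip caching source files larger than this
apc.max_file_size=1M
```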


Now restart Apache:



sudo /etc/init.d/httpd graceful


Maintenance



In the future, if a new version of APC is released, you can easily upgrade using



sudo pecl upgrade apc


I hope this helps!



REFERENCES

http://www.agileapproach.com/blog-entry/howto-install-pecl-apc-cache-centos-without-xampp


http://2bits.com/articles/installing-php-apc-gnulinux-centos-5.html

http://2bits.com/articles/php-op-code-caches-accelerators-a-must-for-a-large-site.html

http://2bits.com/articles/benchmarking-apc-vs-eaccelerator-using-drupal.html


http://2bits.com/articles/benchmarking-drupal-with-php-op-code-caches-apc-eaccelerator-and-xcache-compared.html


http://2bits.com/contents/articles


Installing APC for vBulletin on CentOS 5 Server



You can find out if APC is now running by loading a phpinfo() page (vBulletin AdminCP > Maintenance
> phpinfo) and searching for the APC block.



Now that it works, go into the includes/config.php file
and enable the datastore class to use APC:

Find:

Code:

// $config['Datastore']['class'] = 'vB_Datastore_Filecache';

and replace it with

Code:

$config['Datastore']['class'] = 'vB_Datastore_APC';

Still in the config.php file you will now find (from 3.7.1 and
up) the following option:

Code:

// $config['Datastore']['prefix'] = '';

I recommend changing this if you have multiple forums running
on the same box. Since each of my forums uses a unique table prefix
based on the domain name, I use that table prefix for the datastore
APC prefix too. For example, my web site vbulletin-fans.com uses vbfans_
as its table prefix, so I will use that for this instance:

Code:

$config['Datastore']['prefix'] = 'vbfans_';

Keep this as simple and short as possible, but unique.



To test that it works: your PEAR/APC install should have come with a file
called apc.php. Copy it to your vBulletin directory, edit the file to set
a new user/pass for authentication, and load it in your browser. When
you're done, remove apc.php from the public directory.



And you're done.



I hope this helps some site owners get a bit more performance out of
their vBulletin powered community and give their visitors a snappier
experience. Note that 3.7 supports APC but works just as well without
it. Hopefully in the future (say version 4 and up) vBulletin will make
more and better use of op-code caches like APC.




REFERENCES
http://www.vbulletin.com/forum/entry.php?2234-Installing-APC-on-CentOS-5-Server
http://mrfloris.com/vbulletin/installing-apc-on-centos-5-server/



Monday, June 7, 2010

rename file extension recursively

SkyHi @ Monday, June 07, 2010

Rename all *.info files in one folder

rename .info .txt *.info

That should be read as: rename <old-suffix> <new-suffix> <files> (the util-linux rename syntax)

Do the same operation recursively in a directory tree

## below works on CentOS, but not on Ubuntu (which ships the Perl rename with different syntax)

find . -name "*.info" -exec rename .info .txt {} \;
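Because the rename utility differs between distros, a portable alternative is plain find plus shell parameter expansion. This sketch builds a throwaway demo tree just to illustrate; point find at your own directory instead.

```shell
#!/bin/sh
# Portable recursive rename: change the .info suffix to .txt.
# Handles spaces in file names; needs no external rename utility.
dir=$(mktemp -d)
mkdir -p "$dir/sub"
touch "$dir/a.info" "$dir/sub/b c.info"

find "$dir" -type f -name "*.info" | while read -r FILE
do
    # ${FILE%.info} strips the .info suffix; then append .txt
    mv "$FILE" "${FILE%.info}.txt"
done
```

The `${var%suffix}` expansion is POSIX, so this works under dash and bash alike.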

REFERENCES
http://txt.binnyva.com/2007/03/change-the-extension-of-multiple-files-in-linux/



Linux Mass Rename Recursively using a Bash Script


This example Bash script replaces “.JPG” with “.jpg” recursively in the current directory (it can handle filenames with spaces):

#!/bin/bash

find ./ -type f -name "*.JPG" | while read -r FILE
do
newname=`echo "$FILE" | sed 's/\.JPG$/.jpg/'`
echo "$newname"
mv "$FILE" "$newname"
done

Convert all characters to lowercase:


#!/bin/bash

find ./ -type f | while read -r FILE
do
newname=`echo "$FILE" | tr 'A-Z' 'a-z'`
echo "$newname"
mv "$FILE" "$newname"
done
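One caveat: piping the whole path through tr also lowercases the directory part, which breaks the mv when a parent directory contains uppercase letters. A safer sketch restricts the change to the file name itself (the demo directory here is created only for illustration):

```shell
#!/bin/sh
# Lowercase only the base name of each file, leaving directories alone.
dir=$(mktemp -d)
touch "$dir/README.TXT" "$dir/Photo.JPG"

find "$dir" -type f | while read -r FILE
do
    base=$(basename "$FILE")
    lower=$(printf '%s' "$base" | tr 'A-Z' 'a-z')
    # only rename when the name actually changes
    if [ "$base" != "$lower" ]; then
        mv "$FILE" "$(dirname "$FILE")/$lower"
    fi
done
```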


REFERENCES
http://alexpb.com/notes/articles/2008/09/14/linux-mass-rename-recursivly-using-a-bash-script/