Saturday, January 16, 2010

Wget Trick to Download from Restrictive Sites

SkyHi @ Saturday, January 16, 2010

wget 403 Forbidden
After trick
wget bypassing restrictions
I am often logged in to my servers via SSH, and I need to download a file like a WordPress plugin. I’ve noticed many sites now employ a means of blocking robots like wget from accessing their files. Most of the time they use .htaccess to do this. So, as a permanent workaround, I have wget mimic a normal browser.


Using a function

function wgets() {
  wget --referer="" --user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20070725 Firefox/" \
    --header="Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" \
    --header="Accept-Language: en-us,en;q=0.5" \
    --header="Accept-Encoding: gzip,deflate" \
    --header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" \
    --header="Keep-Alive: 300" "$@"
}

Using alias

Add this to your .bash_profile or other shell startup script, or just type it at the prompt. Now just run wget from the command line as usual, i.e. wget -dnv

alias wget='wget --referer="" --user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20070725 Firefox/" --header="Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" --header="Accept-Language: en-us,en;q=0.5" --header="Accept-Encoding: gzip,deflate" --header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" --header="Keep-Alive: 300"'

Using custom .wgetrc

Alternatively, you could just create or modify your $HOME/.wgetrc file like this, or download a sample file and rename it to .wgetrc. Now just run wget from the command line as usual, i.e. wget -dnv

###
### Sample Wget initialization file .wgetrc by
###
##
## Local settings (for a user to set in his $HOME/.wgetrc).  It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
##
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
##

header = Accept-Language: en-us,en;q=0.5
header = Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
header = Accept-Encoding: gzip,deflate
header = Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
header = Keep-Alive: 300
user_agent = Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20070725 Firefox/
referer =

From the command line

wget --referer="" --user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20070725 Firefox/" --header="Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" --header="Accept-Language: en-us,en;q=0.5" --header="Accept-Encoding: gzip,deflate" --header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" --header="Keep-Alive: 300" -dnv


WGet all the way

SkyHi @ Saturday, January 16, 2010

There are a couple of security auditing frameworks out there, and the temptation to create your own is high, whether in Perl, Ruby, Python, or even PHP.

Needless to say, I too was tempted to create my own framework. Ideas kept flowing in, the project was started, and then BAM, I read an interesting article on GNUCITIZEN, which made me rethink my strategy…

One of the comments pointed it out very well:

most of the stuff we need is on the shell already. pentesting frameworks is like the new security-testing hype. first we had hundreds of portscanners, then hundreds of webapp MiTM proxies, then hundreds of fuzzers, then hundreds of SQL injectors, now it’s about pentesting frameworks :)

So instead of starting to write redundant code, I started to learn the already available command line tools, which have years of development behind them and cover almost every need.

Basically I’m building my framework around already available tools, and only code up things that do not exist, or for some very particular cases.

So why WGet?

Well, I had to start my series of articles (it’s gonna be a series) with something, and wget seemed to be a good starting point.

If you’ve never dealt with wget (which I sincerely doubt), the following description best describes it:

GNU Wget is a free software package for retrieving files using HTTP, HTTPS and FTP, the most widely-used Internet protocols. It is a non-interactive commandline tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc

Without further useless rambling let’s see in which scenarios you would use wget; apart from downloading psyBNC archives, like seen on many h4×00r websites.

Geek to Live: Mastering Wget

SkyHi @ Saturday, January 16, 2010

Your browser does a good job of fetching web documents and displaying them, but there are times when you need an extra strength download manager to get those tougher HTTP jobs done.

A versatile, old school Unix program called Wget is a highly hackable, handy little tool that can take care of all your downloading needs. Whether you want to mirror an entire web site, automatically download music or movies from a set of favorite weblogs, or transfer huge files painlessly on a slow or intermittent network connection, Wget's for you.

Wget, the "non-interactive network retriever," is called at the command line. The format of a Wget command is:

wget [option]... [URL]...

The URL is the address of the file(s) you want Wget to download. The magic in this little tool is the long menu of options available that make some really neat downloading tasks possible. Here are some examples of what you can do with Wget and a few dashes and letters in the [option] part of the command.

Mirror an entire web site

Say you want to backup your blog or create a local copy of an entire directory of a web site for archiving or reading later. The command:

wget -m

will save the two pages that exist on the site in a folder of the same name on your computer. The -m in the command stands for "mirror this site."

Say you want to retrieve all the pages in a site PLUS the pages that site links to. You'd go with:

wget -H -r --level=1 -k -p

This command says, "Download all the pages (-r, recursive) on plus one level (--level=1) into any other sites it links to (-H, span hosts), and convert the links in the downloaded version to point to the other sites' downloaded version (-k). Oh yeah, and get all the components like images that make up each page (-p)."

Warning: Beware, those with small hard drives! This type of command will download a LOT of data from sites that link out a lot (like blogs)! Don't try to backup the Internet, because you'll run out of disk space!

Resume large file downloads on a flaky connection

Say you're piggybacking the neighbor's wifi and every time someone microwaves popcorn you lose the connection, and your video download (naughty you!) keeps crapping out halfway through. Direct Wget to resume partial downloads for big files on intermittent connections.

To set Wget to resume an interrupted download of this 16MB "Mavericks Surf Highlights 2006: Wipeouts" short from Google Video, use:

wget -c --output-document=mavericks.avi ""

(Apologies for the humungous, non-wrapping URL.)

The -c ("continue") option sets Wget to resume a partial download if the transfer is interrupted. You'll also notice the URL is in quotes, necessary for any address with &'s in it. Also, since that URL is so long, you can specify the name of the output file explicitly - in this case, mavericks.avi.
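The quoting rule can be checked without touching the network; unquoted, the shell would split the URL at each & and background the command. A minimal sketch with a hypothetical URL:

```shell
# & has special meaning to the shell (it backgrounds the command), so a
# URL containing it must be quoted to reach wget as a single argument.
# Hypothetical URL for illustration:
url="http://example.com/videoplay?docid=123&hl=en"
echo "$url"
```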

Schedule hourly downloads of a file

The nice thing about any command line script is that it's very easy to automate. For instance, if there was a constantly-changing file that you wanted to download every hour, say, you could use cron or Windows Task Scheduler and Wget to do just that, or if there was a very large file you wanted your computer to fetch in the middle of the night while you slept instead of right this moment when you need all your bandwidth to get other work done. You could easily schedule the Wget command to run at a later time.

As proof of concept, yesterday I scheduled an hourly download of Lifehacker's daily traffic chart to run automatically. The command looked like this:

wget --output-document=traffic_$(date +\%Y\%m\%d\%H).gif "\%2E249\%2E116\%2E138&p6=HTML&p7=1&p8=\%2E\%3Fa\%3Dstatistics&p9=&rnd=7209"

Notice the use of %Y and %m datetime parameters which result in unique filenames, so each hour the command wouldn't overwrite the file with the same name generated the hour before. Note also that the %'s have to be escaped with a backslash.
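As a sketch, the same timestamped filename can be built at an interactive shell, where, unlike in a crontab, the % characters need no escaping:

```shell
# Build a unique, timestamped output filename like traffic_2010011614.gif.
# In a crontab each % would have to be written as \% because cron treats
# a bare % as end-of-command / newline.
fname="traffic_$(date +%Y%m%d%H).gif"
echo "$fname"
```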

Just for fun I threw together a little animated gif of the hourly chart image, that displays the movement of Lifehacker's traffic yesterday from 2PM to midnight:

Automatically download music

This last technique, suggested by Jeff Veen, is by far my favorite use of Wget. These days there are tons of directories, aggregators, filters and weblogs that point off to interesting types of media. Using Wget, you can create a text file list of your favorite sites that say, link to MP3 files, and schedule it to automatically download any newly-added MP3's from those sites each day or week.

First, create a text file called mp3_sites.txt, and list URLs of your favorite sources of music online one per line (like or Be sure to check out my previous feature on how to find free music on the web for more ideas.

Then use the following Wget command to go out and fetch those MP3's:

wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off -i mp3_sites.txt

That Wget recipe recursively downloads only MP3 files linked from the sites listed in mp3_sites.txt that are newer than any you've already downloaded. There are a few other specifications in there - like to not create a new directory for every music file, to ignore robots.txt and to not crawl up to the parent directory of a link. Jeff breaks it all down in his original post.

The great thing about this technique is that once this command is scheduled, you get an ever-rotating jukebox of new music Wget fetches for you while you sleep. With a good set of trusted sources, you'll never have to go looking for new music again - Wget will do all the work for you.

Install Wget

Wanna give all this a try? Windows users, you can download Wget here; Mac users, go here. An alternative for Windows users interested in more Linuxy goodness is to download and install the Unix emulator Cygwin which includes Wget and a whole slew of other 'nixy utilities, too.

For the full take on all of Wget's secret options sauce, type wget --help or check out the full-on Wget manual online. No matter what your downloading task may be, some combination of Wget's extensive options will get the job done just right.


using wget to grab all images from a web page

SkyHi @ Saturday, January 16, 2010

the command

  • wget -A.jpg -r -l1 -np


  • -A: accept list. in this case we’re accepting all jpgs.
  • -r: recursive
  • -l: levels to recurse
  • -np: no parent, i.e. do not go up in the directory tree.

more on wget

wget on mac os x

wget does not ship with mac os x. you can find a pre-compiled version of wget at status-q. if you don’t want to install wget, you might try out curl, which is already installed.

making curl behave like wget

  • curl -o local.html

this will output to a file, rather than printing to the screen.


wget -P Slides -r -p -nd -t5 -H, -A.jpg,.jpeg,.jpg.1,.jpg.2,.jpeg.1,.jpeg.2 -erobots=off


The Ultimate Wget Download Guide With 15 Awesome Examples

SkyHi @ Saturday, January 16, 2010
The wget utility is the best option to download files from the internet. wget can handle pretty much all complex download situations, including large file downloads, recursive downloads, non-interactive downloads, multiple file downloads, etc. In this article, let us review how to use wget for various download scenarios using 15 awesome wget examples.

1. Download Single File with wget

The following example downloads a single file from the internet and stores it in the current directory.
$ wget
While downloading it will show a progress bar with the following information:
  • Percentage of download completion (e.g. 31% as shown below)
  • Total amount of bytes downloaded so far (e.g. 1,213,592 bytes as shown below)
  • Current download speed (e.g. 68.2K/s as shown below)
  • Remaining time to download (e.g. eta 34 seconds as shown below)
Download in progress:
$ wget
Saving to: `strx25-'

31% [=================> 1,213,592   68.2K/s  eta 34s
Download completed:
$ wget
Saving to: `strx25-'

100%[======================>] 3,852,374   76.8K/s   in 55s    

2009-09-25 11:15:30 (68.7 KB/s) - `strx25-' saved [3852374/3852374]

2. Download and Store With a Different File name Using wget -O

By default wget will pick the filename from the last word after the last forward slash, which may not always be appropriate.
Wrong: The following example will download and store the file with the name: download_script.php?src_id=7701
$ wget
Even though the downloaded file is in zip format, it will get stored under that name, as shown below.
$ ls
Correct: To correct this issue, we can specify the output file name using the -O option as:
$ wget -O

3. Specify Download Speed / Download Rate Using wget --limit-rate

While executing, wget will by default try to occupy the full available bandwidth. This might not be acceptable when you are downloading huge files on production servers. To avoid that, we can limit the download speed using --limit-rate as shown below.
In the following example, the download speed is limited to 200k
$ wget --limit-rate=200k

4. Continue the Incomplete Download Using wget -c

Restart a download which got stopped in the middle using wget -c option as shown below.
$ wget -c
This is very helpful when you have initiated a very big file download which got interrupted in the middle. Instead of starting the whole download again, you can start the download from where it got interrupted using option -c
Note: If a download is stopped in the middle and you restart it without the option -c, wget will append .1 to the filename automatically, as a file with the previous name already exists. If a file with .1 already exists, it will download the file with .2 at the end.

5. Download in the Background Using wget -b

For a huge download, put the download in background using wget option -b as shown below.
$ wget -b
Continuing in background, pid 1984.
Output will be written to `wget-log'.
It will initiate the download and give the shell prompt back to you. You can always check the status of the download using tail -f as shown below.
$ tail -f wget-log
Saving to: `strx25-'

     0K .......... .......... .......... .......... ..........  1% 65.5K 57s
    50K .......... .......... .......... .......... ..........  2% 85.9K 49s
   100K .......... .......... .......... .......... ..........  3% 83.3K 47s
   150K .......... .......... .......... .......... ..........  5% 86.6K 45s
   200K .......... .......... .......... .......... ..........  6% 33.9K 56s
   250K .......... .......... .......... .......... ..........  7%  182M 46s
   300K .......... .......... .......... .......... ..........  9% 57.9K 47s
Also, make sure to review our previous multitail article on how to use tail command effectively to view multiple files.

6. Mask User Agent and Display wget like Browser Using wget --user-agent

Some websites can disallow you to download their pages by identifying that the user agent is not a browser. So you can mask the user agent by using the --user-agent option and make wget look like a browser, as shown below.
$ wget --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/2008092416 Firefox/3.0.3" URL-TO-DOWNLOAD

7. Test Download URL Using wget --spider

When you are going to do a scheduled download, you should check whether the download will happen fine or not at the scheduled time. To do so, copy the line exactly from the schedule, and then add the --spider option to check.
$ wget --spider DOWNLOAD-URL
If the URL given is correct, it will say
$ wget --spider download-url
Spider mode enabled. Check if remote file exists.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Remote file exists and could contain further links,
but recursion is disabled -- not retrieving.
This ensures that the download will succeed at the scheduled time. But when you give a wrong URL, you will get the following error.
$ wget --spider download-url
Spider mode enabled. Check if remote file exists.
HTTP request sent, awaiting response... 404 Not Found
Remote file does not exist -- broken link!!!
You can use the spider option under following scenarios:
  • Check before scheduling a download.
  • Monitoring whether a website is available or not at certain intervals.
  • Check a list of pages from your bookmarks, and find out which pages still exist.

8. Increase Total Number of Retry Attempts Using wget --tries

If the internet connection is unreliable and the file is large, there is a chance the download will fail. By default wget retries 20 times to make the download successful.
If needed, you can increase the number of retry attempts using the --tries option as shown below.
$ wget --tries=75 DOWNLOAD-URL

9. Download Multiple Files / URLs Using Wget -i

First, store all the download files or URLs in a text file as:
$ cat > download-file-list.txt
Next, give the download-file-list.txt as argument to wget using -i option as shown below.
$ wget -i download-file-list.txt
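The list file itself is just one URL per line. A minimal sketch with hypothetical URLs (the final wget call is commented out since it would hit the network):

```shell
# Create the URL list, one download per line (hypothetical URLs)
cat > download-file-list.txt <<'EOF'
http://example.com/file1.zip
http://example.com/file2.zip
http://example.com/file3.zip
EOF
wc -l < download-file-list.txt        # 3 URLs listed
# wget -i download-file-list.txt      # would fetch each URL in turn
```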

10. Download a Full Website Using wget --mirror

Following is the command line to execute when you want to download a full website and make it available for local viewing.
$ wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL
  • --mirror : turn on options suitable for mirroring.
  • -p : download all files that are necessary to properly display a given HTML page.
  • --convert-links : after the download, convert the links in the document for local viewing.
  • -P ./LOCAL-DIR : save all the files and directories to the specified directory.

11. Reject Certain File Types while Downloading Using wget --reject

You have found a website which is useful, but you don’t want to download the images; you can specify the following.
$ wget --reject=gif WEBSITE-TO-BE-DOWNLOADED

12. Log messages to a log file instead of stderr Using wget -o

Use the -o option when you want the log to be redirected to a log file instead of the terminal.
$ wget -o download.log DOWNLOAD-URL

13. Quit Downloading When it Exceeds Certain Size Using wget -Q

When you want to stop the download once it crosses 5 MB, you can use the following wget command line.
$ wget -Q5m -i FILE-WHICH-HAS-URLS
Note: This quota will not take effect when you download a single URL. That is, irrespective of the quota size, everything will get downloaded when you specify a single file. The quota is applicable only for recursive downloads.

14. Download Only Certain File Types Using wget -r -A

You can use this under following situations:
  • Download all images from a website
  • Download all videos from a website
  • Download all PDF files from a website
$ wget -r -A.pdf http://url-to-webpage-with-pdfs/

15. FTP Download With wget

You can use wget to perform FTP download as shown below.
Anonymous FTP download using Wget
$ wget ftp-url
FTP download using wget with username and password authentication.
$ wget --ftp-user=USERNAME --ftp-password=PASSWORD DOWNLOAD-URL


Friday, January 15, 2010

Apache umask permission

SkyHi @ Friday, January 15, 2010
[root@home /etc/sysconfig]# cat httpd
# Configuration file for the httpd service.

# The default processing model (MPM) is the process-based
# 'prefork' model.  A thread-based model, 'worker', is also
# available, but does not work with some modules (such as PHP).
# The service must be stopped before changing this variable.

# To pass additional options (for instance, -D definitions) to the
# httpd binary at startup, set OPTIONS here.

# By default, the httpd process is started in the C locale; to
# change the locale in which the server runs, the HTTPD_LANG
# variable can be set.
##May 12 2009
##apache file permission
umask 002

[root@home html]# ls -l /var/www/html/
total 16192
-rw-rw-r--  1 user2 user2      19       Oct 21 13:09 404.html
-rw-rw-r--  1 user2 user2       11976 Jan 13 13:35 About.html
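The effect of the umask 002 setting above can be verified directly in a shell: files created under that mask come out group-writable (mode 664), matching the rw-rw-r-- listing. A quick check (the file name is arbitrary):

```shell
# With umask 002, the default creation mode 666 is masked to 664 (rw-rw-r--)
umask 002
touch umask-demo-file
ls -l umask-demo-file | cut -c1-10    # -rw-rw-r--
```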

SMTP Error Codes

SkyHi @ Friday, January 15, 2010

Information on bounced messages: NDR Codes


Please note that each bounce message is different. Therefore, information relating to addresses, IP numbers, and domain names listed below will differ from what is listed in your header information. If you have received a bounce message (NDR: Non-Delivery Report) that matches one of the following messages, find the text below for a complete description.

As you examine an NDR message, look out for a three-digit code, for example, 5.2.1.

If the code begins with 5.y.z, it means you are dealing with a permanent error; this message will never be delivered. Occasionally you get NDRs beginning with 4.y.z, in which case there is hope that the email will eventually get through. The place to look for the NDR status code is on the very last line of the report.

NDR codes like 5.5.0 or 4.3.1 may remind you of SMTP errors 550 and 431. Indeed, the 500 series in SMTP has a similar meaning to the 5.y.z codes in an NDR - failure. I expect that you have worked out why there are no 2.y.z NDRs? The reason is that the 2.y.z series mean success, whereas Non-Delivery Reports, by definition, are failures.

NDR Classification

As you are beginning to appreciate, these status codes are not random. The first number, 4.y.z or 5.y.z, refers to the class of the code; for example, 5.y.z is a permanent error. Incidentally, we have not seen any status codes beginning with 1.y.z, 3.y.z, or indeed any numbers greater than 5.7.z.

The second number, x.1.z, refers to the subject. This second digit (1 in the previous example) gives generic information, whereas the third digit (z) gives detail. Unfortunately, we have not cracked the complete code for the second digit. However, I have discovered a few useful patterns; for instance, 5.1.x indicates a problem with the email address, as opposed to a server or connector problem. In addition, 5.2.x means that the email is too big; somewhere there is a message limit.

In conclusion, it is best to look up your three-digit status code in the NDR table below.
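The first-digit rule described above is mechanical enough to script. A minimal sketch that classifies an NDR status code by its leading digit (the function name is an illustration, not part of any standard tool):

```shell
# Classify an NDR/enhanced status code (x.y.z) by its first digit:
# 2 = success, 4 = transient failure, 5 = permanent failure
ndr_class() {
  case "${1%%.*}" in
    2) echo "success" ;;
    4) echo "transient failure" ;;
    5) echo "permanent failure" ;;
    *) echo "unknown class" ;;
  esac
}
ndr_class 5.2.1   # permanent failure
ndr_class 4.3.1   # transient failure
```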



Most Common Bounce Reasons

  • Message rejected: The mailserver used does not match the domain specified
  • 450: Recipient address rejected: Domain not found
  • 450: unknown[] : Client host rejected: Too many mails sent!;
  • 500: Recipient address rejected: Recipient mailbox is full
  • 503: Improper use of SMTP command pipelining
  • 540: : Recipient address rejected: Your email has been returned because the intended recipient's email account has been suspended.
  • 550: User unknown
  • 550:[]: Client host rejected: More than X concurrent connections per subnet
  • 554: unknown[]: Client host rejected: Access denied
  • 554: Client host rejected: cannot find your hostname, [];
  • 554: Sender address rejected: Access denied
  • 554: Service unavailable; [] blocked using, reason: This mail was handled by an open relay - please visit
  • 554: Relay access denied; from=<> to=<> - reject: header Subject: (Email Subject Line)
  • Remote host said: 554 Service unavailable; [XXX.XXX.XXX.XXX] blocked using

Explanation of various bounce messages

Message rejected

The mailserver used does not match the domain specified

This message was not delivered because the headers indicate that it is coming from an email service other than the one indicated by the IP address. It is a common practice of Spammers to forge message headers in this manner, therefore this type of message will not be delivered.

For example, a message with a return address of which was actually sent through a service with an IP address which does not match will be rejected for this reason.

450: Recipient address rejected: Domain not found

450 unknown[]: Client host rejected: Too many mails sent!


To protect their email systems from being flooded by email list administrators, or in some cases by "spammers," ISPs have instituted "rules" to gate incoming email to their systems.

These rules are mostly based on common industry practices as to how to send large amounts of email without causing potential harm to the recipient's mail server.


Unfortunately, from time to time, we are forced to "soft bounce" messages because the mailing they were part of exceeded one or more of the rules for incoming email. However, a "soft bounce" means that we will deliver the email in question, although there will be a short delay in receipt.


The mail servers that were responsible for delivering the bounced message may have been blocked due to a high number of concurrent connections. This type of abuse is usually indicative of a "spammer" trying to send unsolicited commercial emails. Therefore, the message was blocked along with all other messages from that mail server. This decision is made automatically by computer programs and does result in a significant percentage of false positives. Since the email provider in question has been added to one or more block list(s), the bounced message will continue to be generated.

500: Recipient address rejected: Recipient mailbox is full

The email address in question is over its storage limit. For example:

  • Excite mail accounts are currently set up with 125MBs of storage space, or 10,000 emails.
  • The storage limit for a free Yahoo! Mail account is 1GB.

When an email account goes over the quota, emails sent to that account will be "bounced" back to the sender until the space has been cleared. Unfortunately, we have no way of knowing if the member in question is aware of the status of their storage space, or the frequency with which they check their quota.

At ACOR, since many of our subscribers go away for treatment for long periods of time, we receive a lot of those error messages.

503: Improper use of SMTP command pipelining


The message was sent in a non-conventional, non-standard way. You should have the sender of this email contact us if they have further questions.


540: < >: Recipient address rejected

Your email has been returned because the intended recipient's email account has been suspended. The account must be re-activated to receive incoming messages.


As noted often in the Free Email Services Terms of Service, email accounts that have not been accessed for more than xxxx days may be deactivated.


If you are an Excite Member, you can reactivate your account by simply signing in to your Email account, where you will be prompted with a message asking if you would like to reactivate your email account. After agreeing to do so, your email address will automatically be reactivated. Please note it may take up to two hours for you to begin receiving new email messages after you've reactivated the account, and remember to access your email at least once every 60 days to prevent this from occurring in the future.


If you received a "bounced" message from one of the Free Email Services Providers' email users, we suggest that you find an alternate means of communicating with them to let them know their email account is now inactive, and that they need to access their Inbox to reactivate their email account.


550: User unknown


If the Excite email account in question is generating "bounced" messages that state "User Unknown" when an individual attempts to send email to it, it could mean that the email account was not activated after the user registered for their Excite Member account.


For the Excite email account to have been activated, the Member would need to have been signed into their Excite Member account. From there, they would have been prompted to agree with the Excite email system's Terms of Service and Privacy Policy by selecting the "Accept and Continue" option. Once this was complete, the Member would need to have signed into the Excite email account to fully activate it. Furthermore, the account will not start receiving email until 1 hour after it has been fully activated.


If you are sure that the Excite email account in question has been fully activated, and the account in question is still generating "User Unknown" bounced messages, please contact your email administrator, and ask them to contact us at for further assistance.


550:[]: Client host rejected: More than 20 concurrent connections per subnet


It appears that the mail servers that were responsible for delivering the message to us were temporarily blocked due to a high number of concurrent connections. This type of abuse is usually indicative of a "spammer" trying to send unsolicited commercial emails. Therefore, the message was blocked along with all other messages from that mail server at that time. However, it is likely that the block was subsequently removed.


554: unknown[]: Client host rejected: Access denied
554: Sender address rejected: Access denied


Your message is being blocked by our site because it either had characteristics similar to "spam" (unsolicited commercial email), or you're sending your message from an Internet Service Provider (ISP) or email provider who has been identified as having allowed spam to be generated from their facilities. For the protection of our members, we do not deliver bulk, unsolicited emails to Excite email accounts.


For example, it is possible that the subject line of the message in question may have been identified by our email servers as unsolicited commercial mail (AKA: Spam) and thus was returned. We ask that you retry sending your message with a different subject line. Please ensure that the subject line of the message does not contain more than 15 identical consecutive characters.

For instance, if a subject line contains 15 identical consecutive characters, such as 15 "a"s or 15 "!"s, it may be rejected on our end.


However, if your ISP or email provider has been added to our block list, your email message to will continue to be rejected or a bounced message will continue to be generated. If this is the case, please contact your email administrator, and ask them to contact us for further assistance.


554: Sender address rejected: Access denied


- This address is listed on our "black list" of email addresses, or is on the black list of one of the spam filter lists we use to lessen the amount of Spam our users receive. You should have the owner of this address contact us if they have questions.


554: Service unavailable; [] blocked using, reason: This mail was handled by an open relay - please visit


The message in question was blocked because it came from a domain or IP address that is currently listed with one of our Spam filters. These filters are utilized to help prevent excessive spamming to our email providers. If your domain or IP address is listed with any of the four Spam filters we currently use, your message will not be delivered.

If you believe that you have been unfairly added to any or all of these particular lists, please contact the website in question:

  • Ordb (
  • Spamcop (
  • Osirusoft (
  • Wirehub (
  • Spamhaus

554: Client host rejected: cannot find your hostname, [];

554: Relay access denied;


This error means that the person sending the email was not authorized to use the email (SMTP) server to send email. They should contact their ISP or email provider.


reject: header Subject: (Email Subject Line Here)


This subject line is on our "black list" of subject lines, or is on the black list of one of the

spam filter lists we use to lessen the amount of Spam our users receive. You should have

the sender of this email contact us if they have questions, or ask them to change the

subject line, which unfortunately appears to have been used as the subject of a "spam" message at some time.


Remote host said: 554 Service unavailable; [XXX.XXX.XXX.XXX] blocked using

We have recently received a large number of abuse complaints regarding your account (and/or your ISP). As a result, we have bounced your email message back to you.

This means your IP number (represented above by XXX.XXX.XXX.XXX) has been blocked due to spamming activity by you, or by someone using the same IP address or IP address range from your ISP. You may wish to let your ISP know of the situation. While these blocks are usually temporary, if we see repeated activity from this IP address or IP address range, it may be permanently blocked from sending email to our mail system.

We hope this answers your questions about bounced email to our mail system.

Standard SMTP Reply Codes:

211: System status, or system help reply.
214: Help message.
220: Domain service ready. / Ready to start TLS.
221: Domain service closing transmission channel.
250: OK, queuing for node <node> started. / Requested mail action okay, completed.
251: OK, no messages waiting for node <node>. / User not local, will forward to <forwardpath>.
252: OK, pending messages for node <node> started. / Cannot VRFY user (e.g., info is not local), but will take message for this user and attempt delivery.
253: OK, <n> pending messages for node <node> started.
354: Start mail input; end with <CRLF>.<CRLF>.
355: Octet-offset is the transaction offset.
421: Domain service not available, closing transmission channel.
432: A password transition is needed.
450: Requested mail action not taken: mailbox unavailable. / ATRN request refused.
451: Requested action aborted: local error in processing. / Unable to process ATRN request now.
452: Requested action not taken: insufficient system storage.
453: You have no mail.
454: TLS not available due to temporary reason. / Encryption required for requested authentication mechanism.
458: Unable to queue messages for node <node>.
459: Node <node> not allowed: <reason>.
500: Command not recognized: <command>. / Syntax error.
501: Syntax error, no parameters allowed.
502: Command not implemented.
503: Bad sequence of commands.
504: Command parameter not implemented.
521: Machine does not accept mail.
530: Must issue a STARTTLS command first. / Encryption required for requested authentication mechanism.
534: Authentication mechanism is too weak.
538: Encryption required for requested authentication mechanism.
550: Requested action not taken: mailbox unavailable.
551: User not local; please try <forwardpath>.
552: Requested mail action aborted: exceeded storage allocation.
553: Requested action not taken: mailbox name not allowed.
554: Transaction failed.
NDR Code Explanation of Non-Delivery Report error codes from Email Servers
4.2.2 The recipient has exceeded their mailbox limit.  It could also be that the delivery directory on the Virtual server has exceeded its limit.
4.3.1 Not enough disk space on the delivery server.  Microsoft says this NDR may be reported as an out-of-memory error.
4.3.2 Classic temporary problem, the Administrator has frozen the queue.
4.4.1 Intermittent network connection.  The server has not yet responded.  Classic temporary problem.  If it persists, you will also see a 5.4.x status code error.
4.4.2 The server started to deliver the message but then the connection was broken.
4.4.6 Too many hops.  Most likely, the message is looping.
4.4.7 Problem with a timeout.  Check receiving server connectors.
4.4.9 A DNS problem.  Check your smart host setting on the SMTP connector.  For example, check correct SMTP format. Also, use square brackets in the IP address []  You can get this same NDR error if you have been deleting routing groups.
4.6.5 Multi-language situation.  Your server does not have the correct language code page installed.
5.0.0 SMTP 500 reply code means an unrecognised command.  You get this NDR when you make a typing mistake when you manually try to send email via telnet.
More likely, a routing group error, no routing connector, or no suitable address space in the connector.  (Try adding * in the address space)
This status code is a general error message in Exchange 2000.  In fact Microsoft introduced a service pack to make sure you now get a more specific code.
5.1.x Problem with email address.
5.1.0 Often seen with contacts. Check the recipient address.
5.1.1 Another problem with the recipient address.  Possibly the user was moved to another server in Active Directory.
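As a quick illustrative sketch (my own helper, not part of any mail server), the reply codes above group by their first digit, which a bounce-handling script can use to decide whether a retry is worthwhile:

```python
# Hypothetical helper: classify a 3-digit SMTP reply code by its
# first digit, following the convention in the table above.
def classify_smtp_code(code: int) -> str:
    first = code // 100
    if first == 2:
        return "success"            # e.g. 250 OK
    if first == 3:
        return "intermediate"       # e.g. 354 Start mail input
    if first == 4:
        return "temporary failure"  # sender may retry later
    if first == 5:
        return "permanent failure"  # do not retry without changes
    return "unknown"

print(classify_smtp_code(250))  # success
print(classify_smtp_code(421))  # temporary failure
print(classify_smtp_code(554))  # permanent failure
```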


To download the complete pdf:

Windows 7: File Share, Nonpaged Pool Srv Error 2017

SkyHi @ Friday, January 15, 2010

I’m using my Windows 7 machine as a file server in addition to it being my Media Center. I’m mounting a Samba (smb) share using CIFS from my Linux server so I can synchronize files using rsync. However, I ran into a problem after using the mounted share for a small amount of time. I found a simple solution after a bit of research.
After running rsync for a short amount of time, I discovered that I was getting memory allocation errors related to the Windows share. After unmounting, I attempted to remount the share and received the error:

mount error(12): Cannot allocate memory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

After checking the Event Viewer System log, I found the following error:

Source: srv
Event ID: 2017
Level: Error
The server was unable to allocate from the system nonpaged pool because the server reached the configured limit for nonpaged pool allocations.

Some research led me to find this Google Groups discussion about the problem and this Microsoft Technet article discussing the solution (look at the bottom of the page). Apparently you need to tell Windows that you want to use the machine as a file server and that it should allocate resources accordingly. Set the following registry key to 1:

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache

and set the following registry key to 3:


After making these changes and restarting, I haven’t seen this issue arise again. Fixed!
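For convenience, the first change above can also be applied as a .reg file (a sketch; the name of the second value is not shown in the post, so only LargeSystemCache appears here):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"LargeSystemCache"=dword:00000001
```

A restart is still needed after importing the file.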


Thursday, January 14, 2010

7 reasons I switched back to PHP after 2 years on Rails

SkyHi @ Thursday, January 14, 2010

SUMMARY: I spent two years trying to make Rails do something it wasn’t meant to do, then realized my old abandoned language (PHP, in my case) would do just fine if approached with my new Rails-gained wisdom.


Back in January 2005, I announced on the O’Reilly blog that I
was going to completely scrap over 100,000 lines of messy PHP code in
my existing CD Baby ( website, and rewrite the entire thing
in Rails, from scratch.

I hired one of the best Rails programmers in the world (Jeremy
Kemper aka bitsweat), and we set off on this huge task with intensity.
The first few months showed good progress, and Jeremy could not have
been more amazing, twisting the deep inner guts of Rails to make it do
things it was never intended to do.

But at every step, it seemed our needs clashed with Rails’
preferences. (Like trying to turn a train into a boat. It’s
do-able with a lot of glue. But it’s damn hard. And certainly
makes you ask why you’re really doing this.)

Two years (!) later, after various setbacks, we were less than halfway done.*
(To be fair to Jeremy’s mad skillz: many setbacks were because of
tech emergencies that pulled our attention to other internal projects
that were not the rewrite itself.) The entire music distribution world
had changed, and we were still working on the same goddamn rewrite. I
said fuckit, and we abandoned the Rails rewrite. Jeremy took a job with
37 Signals, and that was that.

I didn’t abandon the rewrite IDEA, though. I just asked myself one important question:

“Is there anything Rails can do, that PHP CAN’T do?”

The answer is no.

I threw away 2 years of Rails code, and opened a new empty Subversion repository.

Then in a mere TWO MONTHS, by myself, not even telling
anyone I was doing this, using nothing but vi, and no frameworks, I
rewrote CD Baby from scratch in PHP. Done! Launched! And it works
amazingly well.

It’s the most beautiful PHP I’ve ever written, all wonderfully MVC and DRY, and I owe it all to Rails.

Inspired by Rails:

*- all logic is coming from the models, one per database table, like Martin Fowler’s Active Record pattern.

*- no requires or includes needed, thanks to __autoload.

*- real MVC separation: controllers have no HTML or business-logic,
and only use REST-approved HTTP. (GET is only get. Any destructive
actions require POST.)

*- all HTML coming from a cute and powerful templating system I
whipped up in 80 lines, all multi-lingual and caching and everything

*- … and much more. In only 12,000 lines of code, including HTML templates. (Down from 90,000, before.)

Though I’m not saying other people should do what I’ve
done, I thought I should share my reasons and lessons-learned, here:



For 2 years, I thought Rails is genius, PHP is shit. Rails is powerful, PHP is crap.

I was nearly killing my company in the name of blindly insisting Rails was the answer to all questions, timeframes be damned.

But when I took a real emotionless non-prejudiced look at it, I realized the language didn’t matter that much.

Ruby is prettier. Rails has nice shortcuts. But no big shortcuts I can’t code-up myself in a day if needed.

Looked at from a real practical point of view, I could do anything in PHP, and there were many business reasons to do so.


By the old plan (ditching all PHP and doing it all in Rails), there was
going to be this One Big Day, where our entire Intranet, Storefront,
Members’ Login Area, and dozens of cron shell scripts were ALL
going to have to change. 85 employees re-trained. All customers and
clients calling up furious that One Big Day, with questions about the
new system.

Instead, I was able to slowly gut the ugly PHP and replace it with beautiful PHP. Launch in stages. No big re-training.


I admire the hell out of the Rails core gang that actually understand
every line inside Rails itself. But I don’t. And I’m sure I
will never use 90% of it.

With my little self-made system, every line is only what’s
absolutely necessary. That makes me extremely happy and comfortable.


One little 2U LAMP server is serving up a ton of traffic damn fast with hardly any load.


I don’t need to adapt my ways to Rails. I tell PHP exactly what I
want to do, the way I want to do it, and it doesn’t complain.

I was having to hack-up Rails with all kinds of plugins and mods to get
it to be the multi-lingual integration to our existing 95-table database.

My new code was made just for me. The most efficient possible code to work with our exact needs.


Speaking of tastes: tiny but important thing : I love SQL. I dream in queries. I think in tables.

I was always fighting against Rails and its migrations hiding my beloved SQL from me.


Rails was an amazing teacher. I loved its “do exactly as I
say” paint-by-numbers framework that taught me some great

I love Ruby for making me really understand OOP. God, Ruby is so beautiful. I love you, Ruby.

But the main reason that any programmer learning any new language
thinks the new language is SO much better than the old one is because
he’s a better programmer now! You look back at your old ugly PHP
code, compared to your new beautiful Ruby code, and think, “God
that PHP is ugly!” But don’t forget you wrote that PHP
years ago and are unfairly discriminating against it now.

It’s not the language (entirely). It’s you, dude. You’re better now. Give yourself some credit.

Ok. All that being said, I’m looking forward to using Rails
some day when I start a brand new project from scratch, with Rails in
mind from the beginning.

But I hope that this reaches someone somewhere thinking, “God
our old code is ugly. If we only threw it all away and did it all over
in Rails, it’d be so much easier!”


php stdin mail help

SkyHi @ Thursday, January 14, 2010
hi all,

need some help with a php script I have.
The php script uses php://stdin to read email and then it processes that email. This script is on a linux server.
I have the server set up so that when email is sent to a certain alias it pipes the email to my php script; this bit works, as the mail report shows me it does ...
The Postfix program

(expanded from ): delivery via local: delivered to
command: /usr/bin/php /var/www/html/thepostoffice.php
So that is great, it works.. that took me a while of fiddling to even get that to work. Anyway, so that works; however, the script does not seem to do anything with the email. I have error logging in the php script, but it only logs errors on the mail processing, if php://stdin actually received an email. To test that php://stdin is picking up email, I put an IF statement in that writes a simple confirmation message to a text file if it has got an email. As per below...

$email = "";
while (!feof($fd)) {
    $email .= fread($fd, 1024);
}

$data = "mail received.\n";
$fp = fopen("result.txt", "w");
fwrite($fp, $data);
fclose($fp);

However, no confirmation is ever written to the file. It just seems like nothing happens, as if it is not getting the mail at all. I will paste the rest of the script below, though it is pretty much just processing of the mail, which it can't do without the actual mail.

If any of you know what I'm doing wrong, or if there is some other way I need to tell the script it is email, then please say, as I do not really understand stdin. I mean, do I need to put another command in the aliases file? It is being delivered using ...
`/usr/bin/php /var/www/html/thepostoffice.php
BTW the ` is a pipe in the aliases file, I just can't type a pipe on my current keyboard.
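For reference, a pipe delivery of the kind described normally looks like this in /etc/aliases, with a real pipe character (the alias name thepostoffice is an assumption):

```
# /etc/aliases -- hypothetical alias name; run newaliases after editing
thepostoffice: "|/usr/bin/php /var/www/html/thepostoffice.php"
```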

The whole script ...

//***** database details ********

//*** this function will write some error details to log.log ***
function write_log($mesaj) {
    $handle = fopen("log.log", "a");
    fwrite($handle, date("Y-m-d h:i:s") . ": " . $mesaj . "\n");
    fclose($handle);
}

//*** read the mail that is forwarded to the script ***
$fd = fopen("php://stdin", "r");

$email = "";
while (!feof($fd)) {
    $email .= fread($fd, 1024);
}
fclose($fd);

//*** handle email ***
$lines = explode("\n", $email);

// empty vars
$from = "";
$date = "";
$subject = "";
$message = "";
$splittingheaders = true;

for ($i = 0; $i < count($lines); $i++) {
    if ($splittingheaders) {
        // look out for special headers
        if (preg_match("/^Subject: (.*)/", $lines[$i], $matches)) {
            $subject = $matches[1];
        }
        if (preg_match("/^From: (.*)/", $lines[$i], $matches)) {
            if (strpos($lines[$i], '<') !== false) {
                // a display name exists too in the From header;
                // keep only the address between < and >
                $data = explode('<', $lines[$i]);
                $from = substr(trim($data[1]), 0, -1);
            } else {
                // only the mail address
                $from = $matches[1];
            }
        }
        if (preg_match("/^Date: (.*)/", $lines[$i], $matches)) {
            $date = $matches[1];
        }
    } else {
        // not a header, but message body
        $message .= $lines[$i] . "\n";
    }

    if (trim($lines[$i]) == "") {
        // empty line, header section has ended
        $splittingheaders = false;
    }
}

//*** make a connection to the database ***
// note: the inserted values are not escaped here;
// mysql_real_escape_string() would be safer
$handle = mysql_connect($server, $username, $pass) or write_log("Can't connect to the database");
mysql_select_db($database) or write_log("Can't select database");
$when = date("Y-m-d G:i:s");
$data = explode('@', $from);
$username = $data[0];
mysql_query("insert into mails (`username`, `from`, `subject`, `date`, `message`) values ('$username', '$from', '$subject', '$when', '$message')") or write_log("Can't execute query");


Wednesday, January 13, 2010

20/20: Top 20 Programming Lessons I've Learned in 20 Years

SkyHi @ Wednesday, January 13, 2010

I've been programming since I was 11 and I've loved technology and programming ever since. There are some hard and easy lessons I've learned over time. As a fellow programmer, you may not have experienced these, but I'm offering them to individuals who are interested in learning more from my experiences.

I'll be updating this as time goes on. I may have more, but in my 20 year period, I don't think there are any additional rules that this list doesn't include. :-)

Here are my most memorable lessons so far.

  1. Set a duration of how long you think it should take to solve a problem - C'mon, admit it! I'm just as guilty as the next programmer. I've seen programmers sit in front of a monitor for eight hours at a time trying to solve a particular problem. Set a time table for yourself of 1 hour, 30 minutes, or even 15 minutes. If you can't figure out a solution to your problem within your time frame, ask for help or research your problem on the Internet instead of trying to be super-coder.
  2. A language is a language is a language - Over time, once you understand how one language works, you'll notice similarities between other languages. The language you choose should provide you with a suitable "comfort" level, the ability to produce efficient (and clean) code, and, above all, allow the language to suit the project and vice-versa.
  3. Don't over-"design pattern" applications - Sometimes it's just easier to write a simple algorithm than it is to incorporate a singleton or facade pattern. For the most part, it even allows for cleaner, understandable code. :-)
  4. Always backup your code - I've experienced a complete hard drive failure and lost a lot of code when I was younger and felt horrible because of what had happened. The one time you don't back up your data may be the one time where you have a strict deadline with a client and they need it tomorrow. Source code/version control applies here as well.
  5. You are not the best at programming. Live with it. - I always thought that I knew so much about programming, but there is always someone out there better than you. Always. Learn from them.
  6. Learn to learn more - With number five explained, I've always had a magazine or book in my hand about computers or programming (ask my friends, they'll confirm). True, there is a lot of technology out there and keeping up with it is a fulltime job, but if you have a smart way of receiving your news, you'll learn about new technology every single day.
  7. Change is constant - Your knowledge of technology and/or programming should be similar to how you treat stocks: Diversify. Don't get too comfortable with a particular technology. If there's not enough support for that language or technology, you might as well start updating your resume now and start your training period. My general rule of thumb that has kept me going? Know at least two or three languages, so if one dies off, you have another one to fall back on while you train for a new technology.
  8. Support Junior - Assist and train the junior/entry-level developers on good programming guidelines and techniques. You may never move up in rank, but you'll feel more confident having personally trained and prepared them for their next position.
  9. Simplify the algorithm - Code like a fiend, but once you're done, go back through your code and optimize it. A little code improvement here and there will make support happier in the long run.
  10. Document your code - Whether its documenting a Web Service API or documenting a simple class, document as you go. I've been accused of over-commenting my code and that's something I'm proud of. It only takes a second to add an additional comment line for each 3 lines of code. If it's a harder technique to grasp, don't be afraid to over-comment. This is one problem most architects, backup coders, and support groups don't complain about if you've done your job right.
  11. Test, Test, Test - I'm a fan of Black Box Testing. When your routine is finished, your "stamp of approval" period starts. If you have a Quality Assurance department, you may be talking more to them than your project manager regarding errors in your code. If you don't test your code thoroughly, you may develop more than code. Possibly a bad reputation.
  12. Celebrate every success - I've met a lot of programmers who have conquered headache-style problems with a great programming technique and celebrated with a fellow programmer by doing the "shake", the high-five, or even a "happy dance." Everyone has enlightening periods in their life, and even though that one happy coder asked you to come and see his extraordinary piece of code and you've seen that one piece of code over 100 times in your experiences, celebrate the success of a fellow developer for the 101st time.
  13. Have Code Reviews Frequently - On projects and personally. In the company, you will always have code reviews of how well you coded something. Don't look at it as people crucifying your coding style. Think of it as constructive criticism. On the personal front, review your code and always ask, "How could I have done it better?" This will accelerate your learning and make you a better programmer.
  14. Reminisce about your code - There are two ways to looking at old code: "I can't believe I wrote this code" and "I can't believe I wrote this code." The first statement is often of disgust and wondering how you can improve it. You'd be surprised at how old code can be resurrected into a possible and better routine, or maybe even an entire product. The second statement is of amazement and achievement. Developers have their one or two project code achievements that they completed and had everyone standing up and taking notice. Again, based on your excellent coding ability, you could take those past routines or projects and update them into a better product or idea.
  15. Humor is necessary - In my 20 years of development, I have never met a programmer who hasn't had a decent sense of humor. Actually, in this industry, it's a requirement.
  16. Beware the know-it-all, possessive coder, and the inexperienced coder - Humble yourself when you meet these types of coders. The know-it-all tries to upstage you instead of working as a team player, the defensive coder created code that he doesn't want to share with anyone, and the inexperienced coder constantly asks for assistance every ten minutes where the finished code developed is yours, not theirs.
  17. No project is ever simple - I've been asked by friends, family, and associates to just "whip something up for me." To "whip" up a program or web site, it takes planning from both parties to complete something that both sides can appreciate. If someone needs a 3-page web site with Microsoft Access from the start, it winds up becoming a 15-page web site with SQL Server, a forum, and a custom CMS (Content Management System).
  18. Never take anything for granted - If you take on a simple project, you may think that a certain section will be easy to complete. Don't think that even for a moment. Unless you have a class, component, or piece of code already coded...and has been tested thoroughly...and is in production from an existing project, don't think it will be easy.
  19. Software is never finished - A fellow programmer once told me that software is never finished, it's "temporarily completed." Sound advice. If the client is still using a program you wrote and has stood the test of time, chances are, you are still updating it, which isn't a bad thing. It keeps you working. :-)
  20. Patience is definitely a virtue - When clients, friends, or family members use a PC, they get frustrated and proceed to hit a component of the PC or storm off. I keep telling everyone, "you are controlling the computer, not the other way around." You need to have a certain level of patience for programming computers. As soon as programmers understand what they did wrong, they look at it from the computer's point of view and say, "Oh, that's why it was doing that."

I hope this list of lessons learned has either inspired or provided a chuckle for some people.


I’m Working 12 Hours a Day. Here are 5 ways I’m Getting Through It.

SkyHi @ Wednesday, January 13, 2010

I have reached the crux – the point between leaving my day job and starting my company full time. In order for the transition to be smooth, I have decided to jump into my business before I leave my company. This means that I am taking on clients and working several hours after work in the evenings to manage everything.

Unfortunately, it means sitting 11-12 hours a day in front of a computer. I am no stranger to hard work, and know exactly how to cope with this kind of situation. I am posting to share my advice with any of you who are in similar predicaments.

Firstly, I understand this is a purely temporary situation and that once I start my business I will spend less time in front of the PC and more time actually interacting with people. But for now, here is what I am doing:

  1. My diet has changed recently to a pure brain-food one. I have completely eliminated all unnatural foods and sugars from my diet for the time being, and primarily live on fresh vegetables and fruits. This means no coffee, no fast food and no processed food. I’m staying away from pretty much anything that’s not directly from the ground. I am also taking several brain helper pills to increase the uptake of serotonin, improve concentration and memory and keep me feeling alert 24/7. I’m also on flax seed oil – nature’s highest source of Omega 3’s, which are vital for brain function.
  2. I meditate daily. This calms me down and helps me focus. I use breathing techniques, and focus on being mindful and present. I sit in a quiet spot, close my eyes for 15 minutes and forget about the past or the future for a while. I direct my focus towards all the areas inside and part of my body, including my emotions and thoughts inside my head. All the while I take slow, deep belly breaths to stay relaxed.
  3. I take 2-3 minute breaks every 30 minutes, getting up to stretch and relax my eyes. I even do some Tai Chi moves now and then to zing my body up.
  4. I exercise whenever I can. Due to time restrictions, this usually means squeezing in a few laps in the pool during lunch, or doing 30 pushups next to my desk whenever I get the urge. This stimulates both my body and mind, and makes my brain much more alert from the influx of oxygen.
  5. I focus on the task at hand. This is key and I keep asking myself, what is the most important thing I can be doing right now? So whether its blogging or doing work for clients in the evenings, I make sure to stay away from Twitter and Facebook unless I have a business reason to do so.

I will let you guys know how I am progressing, but with these 5 steps that I’m taking I literally feel like I can do anything! What are some of your secrets for combating work overload?


Easy Firewall Generator for IPTables GUI

SkyHi @ Wednesday, January 13, 2010
This program generates an iptables firewall script for use with the 2.4 or later linux kernel. It is intended for use on a single system connected to the Internet or a gateway system for a private, internal network. It provides a range of options, but is not intended to cover every possible situation. Make sure you understand what each option in the generator does and take the time to read the comments in the resulting firewall. This generator will not, for example, generate a firewall suitable for use with a DMZ, but it can provide a starting point. For the most common uses the generator should produce a firewall ready for use.

Read here for more information on iptables firewalls.

Easy Firewall Generator implements several ideas presented in Oskar Andreasson's iptables-tutorial. The link to his tutorial is maintained on the resources page below.

Links to additional firewall resources.

Select the desired options and click the Generate Firewall! button. If your choices require additional input, the options will be redisplayed, perhaps with more options shown. When the options are in a completed state, the firewall will be returned as a text document. Save the result as iptables for Red Hat systems or rc.firewall for many others.



The .forward File

SkyHi @ Wednesday, January 13, 2010
The alias database should only be writable by root. However, individual users can also use aliasing by means of the .forward file.

When delivering email to a user, sendmail checks to see if the user has a .forward file in his home directory. If the file is present, the contents are read and treated as an alias for that person's email. So, for example, if I created a .forward file with the line, all my email would be forwarded to that address instead of delivered to my mailbox.

The .forward file can have anything in it that the right-hand side of an alias can. You can specify another user, a filename, or a program to pipe it to. You can also separate multiple entries with a comma or a newline. However, since alias expansion is done recursively until there's nothing left to expand, you have to do something different if you're sending a copy of the mail to yourself. For example, say user jsmith wants to send a copy of all her mail to user johndoe, but still keep a copy for herself. You'd think to use this .forward file:
jsmith
johndoe

The problem is that when sendmail goes to deliver a copy of the mail to jsmith, the .forward file will be expanded again, resulting in a mail loop! To avoid this, sendmail allows you to put a \ character before an entry in a .forward file which tells it not to expand that entry if it's an alias. So, the above file would be:
\jsmith
johndoe
Now, jsmith's copy will get put directly into her mailbox without expanding the alias again.
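A toy sketch (in Python, not sendmail itself) of this expansion rule shows why the backslash stops the loop; the data structures here are of course hypothetical:

```python
# Toy simulation of recursive .forward expansion. A leading backslash
# marks an entry as final: deliver to that mailbox without expanding
# it again, which is what prevents the mail loop described above.
def expand(entry, forwards):
    """Expand one address through .forward files, honoring \\ escapes."""
    if entry.startswith("\\"):
        return [entry[1:]]   # escaped: deliver directly
    if entry not in forwards:
        return [entry]       # no .forward file: deliver directly
    result = []
    for target in forwards[entry]:
        result.extend(expand(target, forwards))
    return result

# jsmith forwards to herself (escaped) and to johndoe
forwards = {"jsmith": ["\\jsmith", "johndoe"]}
print(expand("jsmith", forwards))  # ['jsmith', 'johndoe']
```

Without the backslash, `expand("jsmith", ...)` would recurse on itself forever, which is exactly the loop sendmail avoids.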