Saturday, December 11, 2010

Master Multiple Firefox Profiles To Make Yourself More Productive

SkyHi @ Saturday, December 11, 2010
One of the most powerful features in Firefox is one too few people know about: the ability to create and use more than one profile at the same time. Here’s how to make yourself more productive with multiple profiles.
Instead of installing every single extension for every task into the same Firefox profile, why not separate them into separate profiles, organised by task? Think of Firefox as an operating system for the web, and each profile as a separate application: one profile for web browsing, another for writing, another for web development, and so on.

Setting Up Separate Firefox Profiles

Creating a new profile is a lot easier than you might think, but there’s no menu item that allows you to open the profile manager easily — you’ll need to pop open a command prompt, switch to the Firefox directory, and then launch Firefox with a command line switch:
firefox -profilemanager -no-remote
If you don’t like using the command prompt, you can simply create a shortcut to Firefox.exe (or copy your current one), and then add the arguments to the end of the Target line.
The first argument opens the profile manager, while the second, -no-remote, tells Firefox to start a separate instance even if one is already running. This is the magic switch that allows you to run more than one profile at the same time, and it also lets you open the profile manager without closing your current browser window.
Once you’ve created a new profile, you can make a separate Firefox shortcut, and modify the Target line to include a few extra arguments, making it look something like this:
firefox.exe -P profilename -no-remote

1. Press Windows key + R and type: firefox -profilemanager -no-remote
2. Create a profile, then create a new Firefox shortcut
3. Right-click the shortcut and, under the Shortcut tab, set the Target field to something like:
"C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -P webdev -no-remote
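On Linux or Mac the same trick works without a shortcut; a tiny wrapper function can build the launch command for you. This is just a sketch (the profile name "webdev" is the example from above); it prints the command rather than running it, so drop the echo to actually launch the browser:

```shell
# Builds the command line for launching a named profile in its own process.
# The profile name passed in is whatever you created in the profile manager.
launch_profile() {
    # -no-remote lets this instance run alongside an already-open profile
    echo firefox -P "$1" -no-remote
}

launch_profile webdev    # prints: firefox -P webdev -no-remote
```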

Switching Profiles the Easy Way

If you’d rather not mess with command line arguments and new shortcuts, you can switch between profiles or launch the profile manager the easy way with the previously mentioned ProfileSwitcher extension.
Once you’ve installed the extension, you can use the File menu to access the new profile-switching options and open the profile manager directly from there. When you choose another profile, you’ll be asked whether you want to switch to it or keep the current profile open and launch the second one alongside it.

Sync Your Profile Passwords and Bookmarks

The biggest problem with maintaining multiple profiles is keeping the same passwords, bookmarks and settings across them, and the previously mentioned Weave can do just that. Once you’ve set up Weave in each profile, you can sync your bookmarks, passwords, history, preferences, and even your open tabs if you like. Weave isn’t the only game in town, though: Xmarks can sync your bookmarks and passwords across other browsers as well, but it comes with additional “discovery” services that you might want to turn off.

Create Profiles for Specific Tasks

Now that we’ve learned how to create profiles, switch between them, and sync your passwords across them all, it’s time to start creating useful profiles to separate out your tasks. Here are a few of the profiles I use on a regular basis, but you aren’t limited to these ideas; you can come up with a specialised profile for almost any task.
The Writing Profile
If you do any writing on the web, you’ve no doubt realised that it’s far too easy to get distracted by your other tabs, click on your bookmarks, or just type something into the search box. To keep myself focused, I’ve created a separate profile specifically for writing, with almost all of the chrome elements removed to maximise the writing area and keep distractions out of sight.
The Web Development Profile
If you spend any amount of time doing web design or development, Firebug is the single must-have extension you cannot live without. Not only does it let you debug JavaScript, it allows you to edit the page HTML and CSS on the fly, and it even has extensions like YSlow and Page Speed to help keep your website nice and speedy. Since you really wouldn’t want the hefty Firebug extension running all the time, it’s best to create a new profile specifically for web development.
The Social Media Profile
Let’s face it, social media can be a massive time suck, especially when you’re browsing around finding random stuff with StumbleUpon, so I create a separate profile for random browsing and social media. This keeps me from clicking that Stumble button while I’m supposed to be working, and it also helps me separate my time. Your social media needs may differ, but the point is to keep them apart from your primary profile so you stay focused on a single task.
The Extension Testing Profile
Rather than install every new extension and bloat up your primary Firefox profile, why not create a separate profile specifically for testing new extensions? This way you can make certain that you’ve fully vetted an extension before unleashing it on your main web browsing experience. Here at Lifehacker HQ, we make extensive use of test profiles when we’re checking out extensions to recommend.
Secure Banking Profile
Why expose yourself by using your primary profile to access your banking sites? What I do is set up a clean banking profile with every plugin disabled, the NoScript and Adblock Plus extensions installed to cut the risk of malware, and only my banking sites allowed to run JavaScript. I even remove the search box to make sure I’m not using this profile for quick Google lookups. I create bookmarks for each of my banking sites and never click a banking link in an email, since that’s a quick way to get scammed online.
What about you? Do you take advantage of multiple profiles? What type of tasks do you break out into their own profile? Share your thoughts in the comments.


Tuesday, December 7, 2010

Let's make the web faster: Tools and downloads

SkyHi @ Tuesday, December 07, 2010

From Google

Web page analysis

Page Speed -
Open source Firefox/Firebug Add-on that evaluates the performance of web pages and gives suggestions for improvement.

Chrome Developer Tools -
Tools included in Google Chrome that let you edit, debug, and monitor CSS, HTML, and JavaScript live in any web page. You can also use them to optimize web page performance by profiling CPU and memory usage.

Speed Tracer -
Google Chrome extension that helps you debug performance problems with AJAX applications.

Resource Optimization

Closure Compiler -
Optimize the speed and size of your JavaScript.

WebP -
Reduce image file sizes and download times.

Development tools

Closure Tools -
Use the Closure Compiler, Closure Library, and Closure Templates to build rich web applications with JavaScript that is faster, more powerful, and more optimized.

Google Web Toolkit -
Toolkit that allows you to build rich web applications in Java, and then compile into highly optimized JavaScript.

From other developers


Cuzillion -
Tool for quickly constructing web pages to see how components interact and how behavior differs across browsers, sometimes in unexpected ways. Also lets you share sample pages with others.

Hammerhead -
Firebug Add-on for measuring the load time of web pages.

Development environment and framework for creating fast, reusable CSS objects and modules.

Performance benchmarking

httperf -
Tool for generating HTTP workloads and measuring web server performance, and constructing micro- and macro-level benchmarks.

Provides a personalized Ajax dashboard interface, checks server performance and availability, generates uptime reports, tracks visitors, checks CPU, memory and other system resources, and alerts when it detects abnormalities.

Pylot -
Open source tool for testing the performance and scalability of web services. It runs HTTP load tests, verifies server responses, and produces reports with metrics.

Wbox -
Performs various tests, including page load benchmarking and web server and web application stress testing, and verifies correct configuration of virtual domains, redirects, and HTTP compression.

JavaScript profiling

dynaTrace AJAX -
Full tracing analysis of Internet Explorer 6-8 (including JavaScript, rendering, and network traffic). (Related blogpost)

PHP profiling

Xdebug -
Extension for PHP that provides profiling and code coverage analysis, as well as debugging information including stack and function traces, and memory allocation.

XHProf by Facebook -
Instrumentation-based hierarchical profiler for PHP.

Resource optimization

CSS Sprite Generator -
Generates a CSS sprite out of a number of images.

JSLint -
Tool that looks for code quality problems in JavaScript programs.

JSMin -
Filter which removes comments and unnecessary whitespace from JavaScript files.

Smush It -
Online tool that allows you to upload images for lossless compression and optimization. Provides a report of bytes saved and downloads a zip file containing the optimized versions of the files.

SpriteMe! -
Tool that determines background images to sprite, groups and sprites them, and generates resultant modified CSS.

YUI Compressor -
JavaScript minifier designed to yield a higher compression ratio than other tools.

Web debugging

Fiddler 2 -
Web debugging proxy which logs all HTTP/S traffic between your computer and the Internet. Inspect HTTP/S traffic, set breakpoints, and "fiddle" with incoming or outgoing data.

Firebug -
Firefox Add-on that lets you edit, debug, and monitor CSS, HTML, and JavaScript live in any web page.

HttpWatch -
HTTP viewer and debugger integrated with IE and Firefox to provide HTTP/S monitoring without leaving the browser window.

Web page analysis

AOL Page Test -
Open source tool for measuring and analyzing web page performance using Internet Explorer.

BrowserMob -
Tool for website performance monitoring and alerting.

IBM Page Detailer -
Graphical tool that assesses web page performance and provides details including the timing, size, and identity of each item in a page.

IntroSpectrum -
Web-based performance monitor which simulates users using actual web browsers.

Microsoft VRTA -
Tool that visualizes web page download, identifies areas for performance improvements, and recommends solutions.

MySpace Performance Tracker -
Internet Explorer browser plugin that helps improve web page performance by capturing and measuring possible bottlenecks.

WebPagetest -
Tool that provides a waterfall of your page load performance as well as a comparison against an optimization checklist.

Yahoo! YSlow -
Firefox/Firebug Add-on that analyzes web pages and suggests ways to improve their performance, based on a set of rules for high performance web pages.


Proftpd limit bandwidth to my PROFTPD clients

SkyHi @ Tuesday, December 07, 2010
For the benefit of others here is the solution which I worked out by experimenting.

Its very simple really.

I added the following 2 lines in the proftpd.config file:

TransferRate RETR 15.0

TransferRate STOR 55.0

This limits the download speed FROM my server to 15 Kbytes/sec

and the upload TO my server to 55 Kbytes/sec

That's it. It works.
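For reference, the same idea in a slightly fuller proftpd.conf fragment; this is a sketch, rates are in kilobytes per second, and the "ftp" user in the commented line is only an example of the optional scoping the directive supports:

```
TransferRate RETR 15.0            # downloads from the server
TransferRate STOR 55.0            # uploads to the server
# TransferRate also accepts a comma-separated command list and an
# optional scope, e.g. limiting only a particular user:
# TransferRate RETR,STOR 8.0 user ftp
```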

Stopping and Starting ProFTPD(compiled)


kill -TERM `cat /usr/local/var/`

To avoid killing the daemon with a broken configuration that would keep it from starting again, perform a syntax check of the config file before sending the signal:

proftpd -t -d5
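The two steps can be combined in a small wrapper that refuses to signal the daemon unless the configuration parses. This is a sketch: the check command and PID-file path are passed in as parameters (locations vary between compiled builds), and it prints the kill command instead of running it, for illustration:

```shell
# Only signal the daemon if the syntax check passes.
# usage: safe_restart "proftpd -t" /path/to/<pid-file>
safe_restart() {
    check_cmd="$1"
    pid_file="$2"
    if $check_cmd >/dev/null 2>&1; then
        # dry run: print rather than execute the kill
        echo "kill -TERM $(cat "$pid_file")"
    else
        echo "config has errors - daemon left running" >&2
        return 1
    fi
}
```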


Layer 4 vs Layer 7 DoS Attack

SkyHi @ Tuesday, December 07, 2010
Not all DoS (Denial of Service) attacks are the same. While the end result is to consume as much - hopefully all - of a server or site's resources such that legitimate users are denied service (hence the name) there is a subtle difference in how these attacks are perpetrated that makes one easier to stop than the other.

SYN Flood
A Layer 4 DoS attack is often referred to as a SYN flood. It works at the transport protocol (TCP) layer. A TCP connection is established in what is known as a 3-way handshake. The client sends a SYN packet, the server responds with a SYN ACK, and the client responds to that with an ACK. After the "three-way handshake" is complete, the TCP connection is considered established. It is as this point that applications begin sending data using a Layer 7 or application layer protocol, such as HTTP.
A SYN flood uses the inherent patience of the TCP stack to overwhelm a server by sending a flood of SYN packets and then ignoring the SYN ACKs returned by the server. This causes the server to use up resources waiting a configured amount of time for the anticipated ACK that should come from a legitimate client. Because web and application servers are limited in the number of concurrent TCP connections they can have open, if an attacker sends enough SYN packets to a server it can easily chew through the allowed number of TCP connections, thus preventing legitimate requests from being answered by the server.
SYN floods are fairly easy for proxy-based application delivery and security products to detect. Because they proxy connections for the servers, and are generally hardware-based with a much higher TCP connection limit, the proxy-based solution can handle the high volume of connections without becoming overwhelmed. Because the proxy-based solution is usually terminating the TCP connection (i.e. it is the "endpoint" of the connection) it will not pass the connection to the server until it has completed the 3-way handshake. Thus, a SYN flood is stopped at the proxy and legitimate connections are passed on to the server with alacrity.
Attackers are generally stopped from flooding the network through the use of SYN cookies. SYN cookies utilize cryptographic hashing and are therefore computationally expensive, making it desirable to let a proxy/delivery solution with hardware-accelerated cryptographic capabilities handle this type of security measure. Servers can implement SYN cookies themselves, but the additional burden placed on the server negates much of the gain achieved by preventing SYN floods, and often results in available but unacceptably slow servers and sites.
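The idea behind a SYN cookie can be sketched in a few lines. This is a conceptual illustration only, not the exact encoding real TCP stacks use: the server derives the initial sequence number from the connection details plus a coarse timestamp, so a later ACK can be validated without keeping per-connection state. The secret and addresses below are invented examples:

```python
# Conceptual SYN-cookie sketch: encode connection details and a coarse
# timestamp into a 32-bit value that can be verified statelessly later.
import hashlib

SECRET = b"server-secret"  # example value; a real server picks this at boot

def make_cookie(src_ip, src_port, dst_ip, dst_port, t):
    data = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{t}".encode()
    digest = hashlib.sha256(SECRET + data).digest()
    return int.from_bytes(digest[:4], "big")  # fits a 32-bit sequence number

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port, t, window=2):
    # accept cookies minted during the last `window` time slots
    return any(
        make_cookie(src_ip, src_port, dst_ip, dst_port, t - d) == cookie
        for d in range(window)
    )

cookie = make_cookie("", 50000, "", 80, t=100)
print(check_cookie(cookie, "", 50000, "", 80, t=101))  # True
```

A flood of bogus SYNs now costs the server a hash computation per packet rather than a held-open connection slot.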

A Layer 7 DoS attack is a different beast and it's more difficult to detect. A Layer 7 DoS attack is often perpetrated through the use of HTTP GET. This means that the 3-way TCP handshake has been completed, thus fooling devices and solutions which are only examining layer 4 and TCP communications. The attacker looks like a legitimate connection, and is therefore passed on to the web or application server.
At that point the attacker begins requesting large numbers of files/objects using HTTP GET. They are generally legitimate requests, there are just a lot of them. So many, in fact, that the server quickly becomes focused on responding to those requests and has a hard time responding to new, legitimate requests.
When rate-limiting was used to stop this type of attack, the bad guys moved to using a distributed system of bots (zombies) to ensure that the requests were coming from myriad IP addresses, making the attack not only more difficult to detect but also more difficult to stop. The attacker uses malware and trojans to deposit a bot on servers and clients, then remotely includes them in the attack by instructing the bots to request a list of objects from a specific site or server. The attacker might not use bots at all, but instead gather enough evil friends to launch an attack against a site that has annoyed them for some reason.
Layer 7 DoS attacks are more difficult to detect because the TCP connection is valid and so are the requests. The trick is to realize when there are multiple clients requesting large numbers of objects at the same time and to recognize that it is, in fact, an attack. This is tricky because there may very well be legitimate requests mixed in with the attack, which means a "deny all" philosophy will result in the very situation the attackers are trying to force: a denial of service.
Defending against Layer 7 DoS attacks usually involves some sort of rate-shaping algorithm that watches clients and ensures that they request no more than a configurable number of objects per time period, usually measured in seconds or minutes. If the client requests more than the configurable number, the client's IP address is blacklisted for a specified time period and subsequent requests are denied until the address has been freed from the blacklist.
Because this can still affect legitimate users, layer 7 firewall (application firewall) vendors are working on ways to get smarter about stopping layer 7 DoS attacks without affecting legitimate clients. It is a subtle dance and requires a bit more understanding of the application and its flow, but if implemented correctly it can improve the ability of such devices to detect and prevent layer 7 DoS attacks from reaching web and application servers and taking a site down.
The goal of deploying an application firewall or proxy-based application delivery solution is to ensure the fast and secure delivery of an application. By preventing both layer 4 and layer 7 DoS attacks, such solutions allow servers to continue serving up applications without a degradation in performance caused by dealing with layer 4 or layer 7 attacks.


Tips for Securely Using Temporary Files in Linux Scripts

SkyHi @ Tuesday, December 07, 2010
Over the years, I've written hundreds, if not thousands, of shell scripts. With the ease at which you can redirect input and output within a shell script, many sysadmins store data in temporary files for processing purposes. In some situations scripts become essential to the day-to-day operations of a system and as such, may end up running on a regular basis via crontab – never to be looked at again.
Unfortunately, some sysadmins who write scripts store sensitive data in temporary files, fail to restrict access to those files, and forget to remove them from the system when they are no longer needed. In many cases, they use them when it isn't even necessary. The beauty of Linux and UNIX is that there are hundreds of ways to accomplish the same task. I will keep my Bash examples simple so you can focus on grasping the general concepts.

Restrict access to temporary files

This is the most commonly forgotten step. If you are like most sysadmins who write temporary files to /tmp or /var/tmp, set your umask before file creation.
# cut -f1 -d: /etc/passwd > /tmp/test
# ls -l /tmp/test
-rw-r--r-- 1 root root 207 Dec  6 11:56 /tmp/test

# rm -f /tmp/test
Now, let's set the umask, create the file again, and check its access controls:
# umask 077
# cut -f1 -d: /etc/passwd > /tmp/test
# ls -l /tmp/test
-rw------- 1 root root 207 Dec  6 11:58 /tmp/test

# rm -f /tmp/test
As you can see, the more restrictive umask only grants the file owner read and write permission. Additionally, instead of writing temporary files to /tmp or /var/tmp, write the files to a dedicated, private area such as one under the user account's home directory. Limit access to this directory with permissions such as 0700.

Use a random string as the filename

To reduce the likelihood that someone can guess the exact name of the temporary file your script creates, avoid predictable file name prefixes and use random characters as the filename (mimencode comes from the metamail package; on systems without it, base64 is a drop-in replacement). For example:
# umask 077
# tempfile=$(head -c 12 /dev/urandom | mimencode | tr -d "/")
# echo $tempfile

# cut -f1 -d: /etc/passwd > /tmp/$tempfile
# ls -l /tmp/$tempfile
-rw------- 1 root root 207 Dec  6 12:10 /tmp/wwiboOPRHbozVuce

# rm -f /tmp/$tempfile
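A standard alternative to rolling your own random names is mktemp, which creates the file atomically with mode 0600 and an unpredictable name, closing the race between choosing a name and creating the file. The /tmp/myscript.XXXXXX template below is just an example; it also pairs well with a trap so the file is removed even if the script dies early:

```shell
# mktemp creates the file atomically with mode 0600 and a random suffix;
# the template name "myscript" is an example.
tmpfile="$(mktemp /tmp/myscript.XXXXXX)"
trap 'rm -f "$tmpfile"' EXIT    # remove the file even on early exit
cut -f1 -d: /etc/passwd > "$tmpfile"
ls -l "$tmpfile"
```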

Don't use temporary files at all

Of course, the safest method is to do in-line processing using pipes, a subshell environment, or a variable. For example, if you wanted an alphabetical listing of user accounts simply use a pipe:
# cut -f1 -d: /etc/passwd | sort -u
or even:
# sort -u -t: -k1 /etc/passwd |cut -f1 -d:
If you wanted to perform an action on each account on the system, invoke a subshell in a for loop such as:
for user in $(cut -f1 -d: /etc/passwd); do
    printf "some action on %s\n" "$user"
done


In summary, take a look at your /var/tmp and /tmp directories. Do they have a bunch of strange files which are open to the world? Take inventory of all of the scripts running on your system especially those which are executed regularly via crontab. Make sure you clearly understand what they are doing and if they are creating any temporary files. If they do, try some of the aforementioned tips to help secure them.


Proxy and reverse proxy servers

SkyHi @ Tuesday, December 07, 2010

Proxy servers

A proxy server is a machine which acts as an intermediary between the computers of a local area network (sometimes using protocols other than TCP/IP) and the Internet.
Most of the time the proxy server is used for the web, and when it is, it's an HTTP proxy. However, there can be proxy servers for every application protocol (FTP, etc.).
diagram of an intranet with a proxy server

The operating principle of a proxy server

The basic operating principle of a proxy server is quite simple: it is a server which acts as a "proxy" for an application by making a request on the Internet in its stead. This way, whenever a user connects to the Internet using a client application configured to use a proxy server, the application first connects to the proxy server and gives it its request. The proxy server then connects to the server which the client application wants to reach and sends that server the request. Next, the server gives its reply to the proxy, which finally passes it back to the client application.
how a proxy server works

Features of a proxy server

Nowadays, by using TCP/IP within local area networks, the relaying role that the proxy server plays is handled directly by gateways and routers. However, proxy servers are still being used, as they have some other features.


Most proxies have a cache, the ability to keep pages commonly visited by users in memory (or "in cache"), so they can provide them as quickly as possible. Indeed, the term "cache" is used often in computer science to refer to a temporary data storage space (also sometimes called a "buffer.")
A proxy server with the ability to cache information is generally called a "proxy-cache server".
This caching feature, implemented on some proxy servers, is used both to reduce Internet bandwidth use and to reduce document loading time for users.
Nevertheless, to achieve this, the proxy must compare the data it stores in cached memory with the remote data on a regular basis, in order to ensure that the cached data is still valid.
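That revalidation step can be sketched as a conditional request: the proxy asks the origin server whether the resource has changed since the cached copy was stored, and a 304 "not modified" reply means the cached copy is still good. In this sketch the fetch function is injected as a parameter (standing in for the real HTTP call) so the example stays self-contained:

```python
# Proxy cache revalidation sketch. `fetch(url, since)` stands in for the
# real HTTP request and returns (status, last_modified, body);
# status 304 means "not modified since `since`".
def fetch_via_cache(url, cache, fetch):
    if url in cache:
        since, body = cache[url]
        status, last_modified, new_body = fetch(url, since)
        if status == 304:
            return body                    # cached copy still valid
        cache[url] = (last_modified, new_body)
        return new_body
    status, last_modified, body = fetch(url, None)
    cache[url] = (last_modified, body)
    return body
```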


What's more, by using a proxy server, connections can be tracked by creating logs that systematically record user queries when they request connections to the Internet.
Because of this, Internet connections can be filtered, by analysing both client requests and server replies. When filtering is done by comparing a client's request to a list of authorised requests, this is called whitelisting, and when it's done with a list of forbidden sites, it's called blacklisting. Finally, analysing server replies that comply with a list of criteria (such as keywords) is called content filtering.


As a proxy is an indispensable intermediary tool for internal network users who want to access external resources, it can sometimes be used to authenticate users, meaning to ask them to identify themselves, such as with a username and password. It is also easy to grant access to external resources only to individuals authorised to do so, and to record each use of external resources in log files.
This type of mechanism, when implemented, obviously raises many issues related to individual liberties and personal rights.

Reverse-proxy servers

A reverse-proxy is a "backwards" proxy-cache server; it's a proxy server that, rather than allowing internal users to access the Internet, lets Internet users indirectly access certain internal servers.
reverse-proxy server diagram
The reverse-proxy server is used as an intermediary by Internet users who want to access an internal website, by sending it requests indirectly. With a reverse-proxy, the web server is protected from direct outside attacks, which strengthens the internal network. What's more, a reverse-proxy's cache function can lower the workload of the server it is assigned to, and for this reason it is sometimes called a server accelerator.
Finally, with perfected algorithms, the reverse-proxy can distribute the workload by redirecting requests to other, similar servers; this process is called load balancing.

Setting up a proxy server

The most widely used proxy, without a doubt, is Squid, a free software program available for several platforms, including Windows and Linux.
In Windows, there are several programs for setting up a local area network proxy server at a low cost:
  • Wingate is the most common solution (but isn't free of charge)
  • Configuring a proxy with Jana server is becoming more and more common
  • Windows 2000 includes Microsoft Proxy Server (MSP), which works with Microsoft Proxy Client 


how to read email headers

SkyHi @ Tuesday, December 07, 2010
As an email administrator you will be challenged with creating filters, blacklist, whitelist, and/or redirects that require an understanding of email headers. In their simplest form email headers are read by an email client to display the To, From, Date, and Subject of a message.

Message headers (From: and To:) can differ from the actual senders and recipients, and I'd like to outline those differences so that when you're creating filters you don't end up pulling your hair out trying to figure out why a simple filter does not work. :)
  1. Why doesn't my filter work?
  2. Headers and SMTP Envelope
  3. Read Email Headers
  4. Senders and Recipients
Why doesn't my filter work?

The most common problem I see when asked why a filter does not work lies in the difference between the From: header and the envelope sender. Here's what usually happens: a worker receives email they no longer want from a certain address and asks you to block it, so you create a filter to reject that address. The next day rolls around and, according to the worker, your filter is not working. Why? The email reader (Outlook, Thunderbird, etc.) displays the From: header, while the mail server filter you created likely triggers on the envelope sender address. Let's explain how this is possible.

Headers and SMTP Envelope.

A good analogy for what is happening is a written letter: the envelope is addressed to you (yes, I'm talking snail mail here), but the letter inside the envelope is addressed to and from someone else. When an email is received by your mail server, it takes the letter out of the envelope and puts it in your inbox. Your email address may not be visible in the To: header, and the original sender (or return address) may not be the same as the From: header. This is all perfectly legal in the email world.

The SMTP envelope of the message will always contain the actual sender and recipient(s) of a message and you can view this from the SMTP logs of your mail server.
The From: and To: headers are sent during the DATA command of the SMTP session and are viewed by the email reader (Outlook, Thunderbird, etc.) when the message is opened.

Since SMTP has this flexibility, it can do things like mailing lists and BCC. A mailing list will typically put the mailing list's address in the To: header, yet the message still arrives in your inbox without your address appearing in the To: header. Probably the best example is BCC: when you BCC someone, the BCC recipient is not included in the headers of the message, only as an envelope recipient. And this is all controlled by your email reader once you press the Send button.

Reading Email Headers.

After you press the Send button in your email client it has to create the message, and decide who the sender and recipients are. The creation of the message includes email headers, the body of the message, and any attachments. I'm not getting into how attachments are created as it involves explaining mime boundary headers and that's out of the scope of this article.

Email Header example:

From: Sender Name <>
To: Recipient Name <>
Subject: Weekly Report Update
Date: Fri, 01 May 2009 10:08:12 -0400
X-headers: Optional information - Such as, Thunderbird 2.x

Body of message
.

The above example is in its simplest form; you will see many other headers in an email message, most of which are self-explanatory.

Note the "." on a line by itself just below the body of the message. The "." on a line by itself is only needed during the SMTP session, to tell the mail server that it has received all the DATA and can save the message for delivery. When the mail server saves a message in your inbox it may not include the ".", as it's not required by an email client to read the message.

The above example only shows what an email client will create after sending a message. The mail server will also add headers to the message or can modify headers as needed. Here's an example after a message passed from an email client through two mail servers.

Received: From ( to
Received: From Email client ( to local Mail Server
Subject: Weekly Report Update
Date: Fri, 01 May 2009 10:08:12 -0400
MIME-Version: 1.0
X-headers: Optional information - Such as, Thunderbird 2.x

Body of message

Each time a message is passed from one mail server to another, a Received: header is added to the top of the message identifying the mail server that delivered it. Some email clients do not show Received: headers when you use the option to view headers; to see all headers, view the message file on the mail server with a text editor.

When creating filters you can parse Received: headers, but only if a Received: header exists, meaning the message has already passed through one mail server. I've seen cases where mail admins try to filter on the Received: header that the receiving server itself adds, and usually that's not possible.

Message Header Formats.

The format of the email headers is critical. The order of the headers is not, other than the Received: header, which is always placed at the top of the message by the last mail server that delivered it.

In order for a line to be considered an email header it must have a colon after the header name, and it must appear before the message body. The header section of the message has one header on each line, and a blank line starts the body of the message.

Senders and Recipients.

The SMTP protocol has some flexibility that is not always obvious on the surface. Always consider the sender, recipient, From:, and To: headers when creating filters and you should save yourself some troubleshooting steps later on.


Comparing the POP and IMAP Protocols

SkyHi @ Tuesday, December 07, 2010
In the old days of the Internet, there were a few large UNIX (or other multiuser time-sharing) machines that were always connected. Your e-mail got delivered to a mailbox on whichever machine you typically used, and when you logged in you could run e-mail client software directly on the local mailbox. Gradually, the old methods gave way to new upstarts: many PCs and Macs, for example, were not even running UNIX, and they represented remote users that were only infrequently connected. This required some rethinking of how e-mail retrieval worked, and new protocols were created, including the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP). POP and IMAP are protocols for retrieving mail from a server for reading. Sending outgoing mail still requires your e-mail client to use SMTP to talk to other mail servers. For more information on SMTP, refer to Linux related article 13.

Comparing the POP and IMAP Protocols

The Post Office Protocol, or POP for short, was the first widespread attempt to solve remote e-mail access, and it is still in common use today. For users on machines that either aren't capable of running a full Simple Mail Transfer Protocol (SMTP) server, or are not permanently connected, a “Post Office” machine is used. The Post Office machine is connected to the Internet full time and receives e-mail on behalf of its users via SMTP. E-mail is delivered to a local mailbox on the Post Office machine just as if it were the user's login machine under the old model. Sometime later the user connects from her workstation, and her e-mail client contacts the POP server on the Post Office machine and transfers any waiting messages to her workstation. The user can then read or otherwise process e-mail locally. This very simple system has served e-mail users well for many years.

The Internet Message Access Protocol, or IMAP for short, was designed to overcome some of the limitations of POP. Instead of transferring all e-mail to the client's workstation, IMAP retains the user's e-mail on the server. The method used by POP is sometimes referred to as “offline” because once you've transferred your waiting messages, you could theoretically be disconnected while you read through your e-mail. The method primarily used with IMAP is considered “online” because it expects that you're connected the whole time you're reading your e-mail. (This should not be confused with the sort of permanent connection expected from the old-style SMTP-only model. IMAP is still a “connect when you want to read mail” type of protocol.) When you connect to an IMAP server, the headers of your new e-mail are downloaded to your e-mail client. As you browse through your new mail and select a message to read, the body of that message is transferred to your workstation. Deletion, read/unread status, and other status flags are synced back to the server. If this all seems like a more complicated protocol, it is. But only the server and client programs need to worry about that, and it results in some definite advantages over using POP.
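POP's “offline” model can be sketched with Python's standard `poplib` module. The host name and credentials in the usage comment are placeholders, and the connection object is passed in so the download logic can be exercised without a live server:

```python
import poplib
from email import message_from_bytes

def fetch_all(pop, user, password):
    """Download every waiting message, then delete it from the server,
    mirroring POP's classic 'offline' model."""
    pop.user(user)
    pop.pass_(password)
    count, _size = pop.stat()            # number and total size of waiting mail
    messages = []
    for i in range(1, count + 1):        # POP numbers messages from 1
        _resp, lines, _octets = pop.retr(i)
        messages.append(message_from_bytes(b"\r\n".join(lines)))
        pop.dele(i)                      # actually removed when we QUIT
    pop.quit()
    return messages

# Typical use (placeholder host and credentials):
# inbox = fetch_all(poplib.POP3_SSL("pop.example.com"), "alice", "secret")
```

Once `fetch_all` returns, the mail lives only on the workstation; that is exactly why reading from a second machine doesn't work well with plain POP.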

Advantages of IMAP over POP

IMAP is a superset of POP, and it can do everything that POP can (though it doesn't always do it in quite the same way). In addition, IMAP introduces a number of new features. It's becoming increasingly common for people to use more than one computer: one at work, another at home, and perhaps a laptop when traveling. POP doesn't lend itself well to checking mail on multiple machines, and your e-mail ends up spread out across all the different client machines. Some POP clients try to partially account for this with a “leave mail on server” option, but POP's inability to indicate read messages means downloading the same message multiple times, which is an inelegant solution at best. IMAP was designed with multiple clients in mind. Since the status of which e-mails have been read and which haven't (among other things) is stored on the server, you don't have to wade through seen messages even if you're connecting from a client machine you've never used before. Conservation of bandwidth may not seem like a huge deal in this day of ubiquitous high-speed connections, until you're traveling with your laptop on a slow dial-up connection and that 10MB attachment isn't nearly as critical as the information in the messages immediately after it. Because IMAP only transfers the actual messages you request, you don't have to wait for (or pay for) downloading spam, large attachments, or other e-mails you're not immediately interested in. You can even download some MIME parts of a message and not others. Multiple mail folders allow for better organization of your saved mail.
While POP accounts for only a single INBOX, IMAP allows multiple mailbox folders to be manipulated directly from your mail client. You can create, delete, and rename folders, and transfer or copy messages between them. Depending on your server setup, you may even be able to have hierarchical mailbox folders that contain both messages and other mailbox subfolders within them. IMAP supports shared folders. Not only can more than one person access a shared folder, but IMAP will manage concurrent access. This is particularly useful for role-based accounts, such as a help desk support mailbox, which may be accessed by multiple administrators. Searching is built into the protocol. Searches are performed on the server side to again reduce the amount of data that needs to be transferred. Matching result sets are then returned to the client for selection. This can be a huge win for large archival mailboxes, and shouldn't be underestimated.
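Server-side search can be sketched with Python's standard `imaplib` module: only the matching message numbers come back over the wire, and only the headers of those matches need to be fetched afterwards. The host and credentials in the usage comment are placeholders, and the connection object is passed in so the logic can be tested against a fake server:

```python
import imaplib

def search_sender(imap, mailbox, sender):
    """Ask the server to search a mailbox; only matching message
    numbers travel back to the client."""
    imap.select(mailbox)
    typ, data = imap.search(None, "FROM", sender)   # runs on the server
    if typ != "OK":
        return []
    return data[0].split()                          # e.g. [b'3', b'7', b'12']

# Typical use (placeholder host and credentials):
# imap = imaplib.IMAP4_SSL("imap.example.com")
# imap.login("alice", "secret")
# for num in search_sender(imap, "INBOX", "boss@example.com"):
#     typ, hdrs = imap.fetch(num, "(BODY.PEEK[HEADER])")  # headers only
```

`BODY.PEEK[HEADER]` fetches just the headers without marking the messages as read, which keeps the bandwidth savings intact even while skimming a large archive.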
While a POP client that is kept online for long periods of time can be configured to poll the server occasionally for new mail, IMAP avoids this problem altogether: a client connected to an IMAP server can be notified directly of any new mail that arrives. Just in case the basic status indicators of read/unread, answered, important, and so on aren't sufficient for your needs, IMAP allows user-defined status flags. This means you can mark messages in ways that are uniquely meaningful to you and your needs. IMAP can even support non–e-mail applications. For example, you may set up a documentation repository as an anonymous read-only IMAP folder.
This could then be available to your entire company and accessed with any IMAP client. When the client performs its initial handshake with the server, it negotiates which capabilities are supported by both machines and which are not. This structure makes it easy to add optional features to IMAP gradually, without a major upheaval to the protocol. For example, during the initial handshake, clients and servers can negotiate support for STARTTLS-style encrypted connections, something that was not in the original IMAP specification. For those with pay-by-the-minute Internet connections, or other specialized circumstances that make POP's “offline” mode seem attractive, IMAP offers a “disconnected” mode. In this mode, all new mail is copied down to the client for local processing. Deletions and status changes are recorded by the client and synced back to the server the next time you connect. While this trades connection time for bandwidth usage, it still maintains many of the other advantages of using IMAP, such as a centralized mail store accessible by multiple clients, or custom status flags.
Even though IMAP often involves less bandwidth than blindly downloading everything, its download-as-you-go approach can make it seem more sluggish to former POP users who are used to a “hit the Get Mail button, go for coffee, then come back and read mail” approach. These users may be good candidates to start with “disconnected” mode.

What do Cc and Bcc mean in a mail message?


Cc is shorthand for Carbon copy. If you add a recipient's name to this box in an Outlook e-mail message, a copy of the message is sent to that recipient, and the recipient's name is visible to other recipients of the message.


Bcc is shorthand for Blind carbon copy. If you add a recipient's name to this box in a mail message, a copy of the message is sent to that recipient, and the recipient's name is not visible to other recipients of the message. If the Bcc box isn't visible when you create a new message, you can add it.
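The Bcc behaviour falls out of the split between the SMTP envelope and the message headers: a well-behaved client puts Bcc recipients in the envelope but leaves them out of the headers, so other recipients never see them. A minimal sketch with Python's standard `smtplib` and `email` modules (all addresses and the server name are placeholders):

```python
import smtplib
from email.message import EmailMessage

def build_message(from_addr, to_addrs, cc_addrs, bcc_addrs, subject, body):
    """Build a message whose headers omit Bcc; Bcc recipients go only
    into the SMTP envelope list returned alongside the message."""
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = ", ".join(to_addrs)
    if cc_addrs:
        msg["Cc"] = ", ".join(cc_addrs)
    # Deliberately no Bcc header: other recipients must not see these names.
    msg["Subject"] = subject
    msg.set_content(body)
    envelope = list(to_addrs) + list(cc_addrs) + list(bcc_addrs)
    return msg, envelope

# Sending (placeholder server and addresses):
# msg, rcpts = build_message("me@example.com", ["bob@example.com"],
#                            ["carol@example.com"], ["dave@example.com"],
#                            "Hello", "Hi all")
# with smtplib.SMTP("mail.example.com") as s:
#     s.send_message(msg, to_addrs=rcpts)
```

Because `dave@example.com` appears only in the envelope list, the server delivers a copy to him, but Bob and Carol see no trace of him in the headers.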