Friday, February 12, 2010

Postfix Virtual Domain Hosting Howto

SkyHi @ Friday, February 12, 2010

Purpose of this document

This document requires Postfix version 2.0 or later.

This document gives an overview of how Postfix can be used for hosting multiple Internet domains, both for final delivery on the machine itself and for the purpose of forwarding to destinations elsewhere.

The text not only describes delivery mechanisms that are built into Postfix, but also gives pointers for using non-Postfix mail delivery software.

The following topics are covered:

Canonical versus hosted versus other domains

Most Postfix systems are final destination for only a few domain names. These include the hostnames and [the IP addresses] of the machine that Postfix runs on, and sometimes also include the parent domain of the hostname. The remainder of this document will refer to these domains as the canonical domains. They are usually implemented with the Postfix local domain address class, as defined in the ADDRESS_CLASS_README file.

Besides the canonical domains, Postfix can be configured to be final destination for any number of additional domains. These domains are called hosted, because they are not directly associated with the name of the machine itself. Hosted domains are usually implemented with the virtual alias domain address class and/or with the virtual mailbox domain address class, as defined in the ADDRESS_CLASS_README file.

But wait! There is more. Postfix can be configured as a backup MX host for other domains. In this case Postfix is not the final destination for those domains. It merely queues the mail when the primary MX host is down, and forwards the mail when the primary MX host becomes available. This function is implemented with the relay domain address class, as defined in the ADDRESS_CLASS_README file.

Finally, Postfix can be configured as a transit host for sending mail across the internet. Obviously, Postfix is not final destination for such mail. This function is available only for authorized clients and/or users, and is implemented by the default domain address class, as defined in the ADDRESS_CLASS_README file.

Local files versus network databases

The examples in this text use table lookups from local files such as DBM or Berkeley DB. These are easy to debug with the postmap command:

Example: postmap -q info@example.com hash:/etc/postfix/virtual

See the documentation in LDAP_README, MYSQL_README and PGSQL_README for how to replace local files by databases. The reader is strongly advised to make the system work with local files before migrating to network databases, and to use the postmap command to verify that network database lookups produce the exact same results as local file lookup.

Example: postmap -q info@example.com ldap:/etc/postfix/virtual.cf
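For a rough feel of what `postmap -q` returns, you can approximate the lookup against the flat source file with awk. This is only a sanity-check sketch using hypothetical entries and a scratch path; it ignores Postfix details such as case folding and continuation lines, and is no substitute for running postmap itself:

```shell
# Build a small sample virtual(5) source file (hypothetical entries).
cat > /tmp/virtual-demo <<'EOF'
postmaster@example.com postmaster
info@example.com joe
EOF

# Approximate "postmap -q info@example.com hash:/tmp/virtual-demo":
# print the right-hand side of the first line whose key matches.
awk '$1 == "info@example.com" { print $2; exit }' /tmp/virtual-demo
```

This prints `joe`, the same result the hash: lookup would give once the file has been run through postmap.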

As simple as can be: shared domains, UNIX system accounts

The simplest method to host an additional domain is to add the domain name to the domains listed in the Postfix mydestination configuration parameter, and to add the user names to the UNIX password file.

This approach makes no distinction between canonical and hosted domains. Each username can receive mail in every domain.

In the examples we will use "example.com" as the domain that is being hosted on the local Postfix machine.

/etc/postfix/main.cf:
mydestination = $myhostname localhost.$mydomain ... example.com, /etc/postfix/local-host-names

The limitations of this approach are:

  • A total lack of separation: mail for info@my.host.name is delivered to the same UNIX system account as mail for info@example.com.
  • With users in the UNIX password file, administration of large numbers of users becomes inconvenient.

The examples that follow provide solutions for both limitations.


[root@postfixpam ~]# grep mydestination /etc/postfix/main.cf
# The mydestination parameter specifies the list of domains that this
#mydestination = $myhostname, localhost.$mydomain, localhost
#mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
#mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain,
#mydestination = $myhostname, localhost.$mydomain, localhost
#mydestination = $virtual_alias_domains
mydestination = $myhostname, localhost.$mydomain, localhost,/etc/postfix/local-host-names
# to $mydestination, $inet_interfaces or $proxy_interfaces.
# - You define $mydestination domain recipients in files other than
# For example, you define $mydestination domain recipients in
# response code when a recipient domain matches $mydestination or
# The default relay_domains value is $mydestination.
# - destinations that match $mydestination
#relay_domains = $mydestination
# for unknown recipients. By default, mail for unknown@$mydestination,
[root@postfixpam ~]# grep virtual_alias /etc/postfix/main.cf
#mydestination = $virtual_alias_domains
# /etc/passwd, /etc/aliases, or the $virtual_alias_maps files.
# - destinations that match $virtual_alias_domains,
#virtual_alias_domains = /etc/postfix/local-host-names
virtual_alias_maps = hash:/etc/postfix/virtusertable, hash:/etc/postfix/aliases, hash:/etc/postfix/virtual


Postfix virtual ALIAS example: separate domains, UNIX system accounts

With the approach described in this section, every hosted domain can have its own info etc. email address. However, it still uses UNIX system accounts for local mailbox deliveries.

With virtual alias domains, each hosted address is aliased to a local UNIX system account or to a remote address. The example below shows how to use this mechanism for the example.com domain.

 1 /etc/postfix/main.cf:
2 virtual_alias_domains = example.com ...other hosted domains...
3 virtual_alias_maps = hash:/etc/postfix/virtual
4
5 /etc/postfix/virtual:
6 postmaster@example.com postmaster
7 info@example.com joe
8 sales@example.com jane
9 # Uncomment entry below to implement a catch-all address
10 # @example.com jim
11 ...virtual aliases for more domains...

Notes:

  • Line 2: the virtual_alias_domains setting tells Postfix that example.com is a so-called virtual alias domain. If you omit this setting then Postfix will reject mail (relay access denied) or will not be able to deliver it (mail for example.com loops back to myself).

    NEVER list a virtual alias domain name as a mydestination domain!

  • Lines 3-8: the /etc/postfix/virtual file contains the virtual aliases. With the example above, mail for postmaster@example.com goes to the local postmaster, while mail for info@example.com goes to the UNIX account joe, and mail for sales@example.com goes to the UNIX account jane. Mail for all other addresses in example.com is rejected with the error message "User unknown".

  • Line 10: the commented out entry (text after #) shows how one would implement a catch-all virtual alias that receives mail for every example.com address not listed in the virtual alias file. This is not without risk. Spammers nowadays try to send mail from (or mail to) every possible name that they can think of. A catch-all mailbox is likely to receive many spam messages, and many bounces for spam messages that were sent in the name of anything@example.com.

Execute the command "postmap /etc/postfix/virtual" after changing the virtual file, and execute the command "postfix reload" after changing the main.cf file.
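As a virtual file grows, duplicate left-hand keys are an easy mistake to make; postmap warns about duplicates when building the table, but a pre-check is cheap. A sketch using awk on a hypothetical file at a scratch path:

```shell
# Hypothetical virtual file with an accidental duplicate key.
cat > /tmp/virtual-dup-demo <<'EOF'
info@example.com joe
sales@example.com jane
info@example.com jim
EOF

# Report any left-hand key that appears more than once
# (fires exactly once, on the second occurrence of a key).
awk 'seen[$1]++ == 1 { print "duplicate key: " $1 }' /tmp/virtual-dup-demo
```

Here the check prints `duplicate key: info@example.com`; only the first entry for a duplicated key takes effect in the built table.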

Note: virtual aliases can resolve to a local address or to a remote address, or both. They don't have to resolve to UNIX system accounts on your machine.

More details about the virtual alias file are given in the virtual(5) manual page, including multiple addresses on the right-hand side.

Virtual aliasing solves one problem: it allows each domain to have its own info mail address. But there still is one drawback: each virtual address is aliased to a UNIX system account. As you add more virtual addresses you also add more UNIX system accounts. The next section eliminates this problem.

Postfix virtual MAILBOX example: separate domains, non-UNIX accounts

As a system hosts more and more domains and users, it becomes less desirable to give every user their own UNIX system account.

With the Postfix virtual(8) mailbox delivery agent, every recipient address can have its own virtual mailbox. Unlike virtual alias domains, virtual mailbox domains do not need the clumsy translation of each recipient address into a different address, and owners of a virtual mailbox address do not need to have a UNIX system account.

The Postfix virtual(8) mailbox delivery agent looks up the user mailbox pathname, uid and gid via separate tables that are searched with the recipient's mail address. Maildir style delivery is turned on by terminating the mailbox pathname with "/".

If you find the idea of multiple tables bothersome, remember that you can migrate the information (once it works) to an SQL database. If you take that route, be sure to review the "local files versus databases" section at the top of this document.

Here is an example of a virtual mailbox domain "example.com":

 1 /etc/postfix/main.cf:
2 virtual_mailbox_domains = example.com ...more domains...
3 virtual_mailbox_base = /var/mail/vhosts
4 virtual_mailbox_maps = hash:/etc/postfix/vmailbox
5 virtual_minimum_uid = 100
6 virtual_uid_maps = static:5000
7 virtual_gid_maps = static:5000
8 virtual_alias_maps = hash:/etc/postfix/virtual
9
10 /etc/postfix/vmailbox:
11 info@example.com example.com/info
12 sales@example.com example.com/sales/
13 # Uncomment the entry below to implement a catch-all.
14 # @example.com example.com/catchall
15 ...virtual mailboxes for more domains...
16
17 /etc/postfix/virtual:
18 postmaster@example.com postmaster

Notes:

  • Line 2: The virtual_mailbox_domains setting tells Postfix that example.com is a so-called virtual mailbox domain. If you omit this setting then Postfix will reject mail (relay access denied) or will not be able to deliver it (mail for example.com loops back to myself).

    NEVER list a virtual MAILBOX domain name as a mydestination domain!

    NEVER list a virtual MAILBOX domain name as a virtual ALIAS domain!

  • Line 3: The virtual_mailbox_base parameter specifies a prefix for all virtual mailbox pathnames. This is a safety mechanism in case someone makes a mistake. It prevents mail from being delivered all over the file system.

  • Lines 4, 10-15: The virtual_mailbox_maps parameter specifies the lookup table with mailbox (or maildir) pathnames, indexed by the virtual mail address. In this example, mail for info@example.com goes to the mailbox at /var/mail/vhosts/example.com/info while mail for sales@example.com goes to the maildir located at /var/mail/vhosts/example.com/sales/.

  • Line 5: The virtual_minimum_uid specifies a lower bound on the mailbox or maildir owner's UID. This is a safety mechanism in case someone makes a mistake. It prevents mail from being written to sensitive files.

  • Lines 6, 7: The virtual_uid_maps and virtual_gid_maps parameters specify that all the virtual mailboxes are owned by a fixed uid and gid 5000. If this is not what you want, specify lookup tables that are searched by the recipient's mail address.

  • Line 14: The commented out entry (text after #) shows how one would implement a catch-all virtual mailbox address. Be prepared to receive a lot of spam, as well as bounced spam that was sent in the name of anything@example.com.

    NEVER put a virtual MAILBOX wild-card in the virtual ALIAS file!!

  • Lines 8, 17, 18: As you see, it is possible to mix virtual aliases with virtual mailboxes. We use this feature to redirect mail for example.com's postmaster address to the local postmaster. You can use the same mechanism to redirect an address to a remote address.

  • Line 18: This example assumes that in main.cf, $myorigin is listed under the mydestination parameter setting. If that is not the case, specify an explicit domain name on the right-hand side of the virtual alias table entries or else mail will go to the wrong domain.

Execute the command "postmap /etc/postfix/virtual" after changing the virtual file, execute "postmap /etc/postfix/vmailbox" after changing the vmailbox file, and execute the command "postfix reload" after changing the main.cf file.

Note: mail delivery happens with the recipient's UID/GID privileges specified with virtual_uid_maps and virtual_gid_maps. Postfix 2.0 and earlier will not create mailDIRs in world-writable parent directories; you must create them in advance before you can use them. Postfix may be able to create mailBOX files by itself, depending on parent directory write permissions, but it is safer to create mailBOX files ahead of time.
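Since the maildirs must exist before delivery, they are typically created in a provisioning step. A minimal sketch follows; BASE stands in for the virtual_mailbox_base value (it defaults to a scratch directory here so the sketch can run unprivileged), and the chown to the example's uid/gid 5000 runs only when you are root:

```shell
# Pre-create the maildir for sales@example.com under the mailbox base.
BASE=${BASE:-/tmp/vhosts-demo}           # real value: /var/mail/vhosts
mkdir -p "$BASE/example.com/sales/cur" \
         "$BASE/example.com/sales/new" \
         "$BASE/example.com/sales/tmp"
# Ownership must match virtual_uid_maps/virtual_gid_maps (5000:5000 above);
# chown needs root, so skip it when running unprivileged.
if [ "$(id -u)" -eq 0 ]; then
    chown -R 5000:5000 "$BASE"
fi
ls "$BASE/example.com/sales"
```

The trailing cur/new/tmp subdirectories are what makes it a maildir; a plain mailbox file (no trailing "/" in the vmailbox entry) needs no pre-created directory tree.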

More details about the virtual mailbox delivery agent are given in the virtual(8) manual page.

Non-Postfix mailbox store: separate domains, non-UNIX accounts

This is a variation on the Postfix virtual mailbox example. Again, every hosted address can have its own mailbox.

While non-Postfix software is being used for final delivery, some Postfix concepts are still needed in order to glue everything together. For additional background on this glue you may want to take a look at the virtual mailbox domain class as defined in the ADDRESS_CLASS_README file.

The text in this section describes what things should look like from Postfix's point of view. See CYRUS_README or MAILDROP_README for specific information about Cyrus or about Courier maildrop.

Here is an example for a hosted domain example.com that delivers to a non-Postfix delivery agent:

 1 /etc/postfix/main.cf:
2 virtual_transport = ...see below...
3 virtual_mailbox_domains = example.com ...more domains...
4 virtual_mailbox_maps = hash:/etc/postfix/vmailbox
5 virtual_alias_maps = hash:/etc/postfix/virtual
6
7 /etc/postfix/vmailbox:
8 info@example.com whatever
9 sales@example.com whatever
10 # Uncomment the entry below to implement a catch-all.
11 # Configure the mailbox store to accept all addresses.
12 # @example.com whatever
13 ...virtual mailboxes for more domains...
14
15 /etc/postfix/virtual:
16 postmaster@example.com postmaster

Notes:

  • Line 2: With delivery to a non-Postfix mailbox store for hosted domains, the virtual_transport parameter usually specifies the Postfix LMTP client, or the name of a master.cf entry that executes non-Postfix software via the pipe delivery agent. Typical examples (use only one):

    virtual_transport = lmtp:unix:/path/name (uses UNIX-domain socket)
    virtual_transport = lmtp:hostname:port (uses TCP socket)
    virtual_transport = maildrop: (uses pipe(8) to command)

    Postfix comes ready with support for LMTP. And an example maildrop delivery method is already defined in the default Postfix master.cf file. See the MAILDROP_README document for more details.

  • Line 3: The virtual_mailbox_domains setting tells Postfix that example.com is delivered via the virtual_transport that was discussed in the previous paragraph. If you omit this virtual_mailbox_domains setting then Postfix will either reject mail (relay access denied) or will not be able to deliver it (mail for example.com loops back to myself).

    NEVER list a virtual MAILBOX domain name as a mydestination domain!

    NEVER list a virtual MAILBOX domain name as a virtual ALIAS domain!

  • Lines 4, 7-13: The virtual_mailbox_maps parameter specifies the lookup table with all valid recipient addresses. The lookup result value is ignored by Postfix. In the above example, info@example.com and sales@example.com are listed as valid addresses; other mail for example.com is rejected with "User unknown" by the Postfix SMTP server. It's left up to the non-Postfix delivery agent to reject non-existent recipients from local submission or from local alias expansion. If you intend to use LDAP, MySQL or PgSQL instead of local files, be sure to review the "local files versus databases" section at the top of this document!

  • Line 12: The commented out entry (text after #) shows how one would inform Postfix of the existence of a catch-all address. Again, the lookup result is ignored by Postfix.

    NEVER put a virtual MAILBOX wild-card in the virtual ALIAS file!!

    Note: if you specify a wildcard in virtual_mailbox_maps, then you still need to configure the non-Postfix mailbox store to receive mail for any address in that domain.

  • Lines 5, 15, 16: As you see above, it is possible to mix virtual aliases with virtual mailboxes. We use this feature to redirect mail for example.com's postmaster address to the local postmaster. You can use the same mechanism to redirect any addresses to a local or remote address.

  • Line 16: This example assumes that in main.cf, $myorigin is listed under the mydestination parameter setting. If that is not the case, specify an explicit domain name on the right-hand side of the virtual alias table entries or else mail will go to the wrong domain.

Execute the command "postmap /etc/postfix/virtual" after changing the virtual file, execute "postmap /etc/postfix/vmailbox" after changing the vmailbox file, and execute the command "postfix reload" after changing the main.cf file.

Mail forwarding domains

Some providers host domains that have no (or only a few) local mailboxes. The main purpose of these domains is to forward mail elsewhere. The following example shows how to set up example.com as a mail forwarding domain:

 1 /etc/postfix/main.cf:
2 virtual_alias_domains = example.com ...other hosted domains...
3 virtual_alias_maps = hash:/etc/postfix/virtual
4
5 /etc/postfix/virtual:
6 postmaster@example.com postmaster
7 joe@example.com joe@somewhere
8 jane@example.com jane@somewhere-else
9 # Uncomment entry below to implement a catch-all address
10 # @example.com jim@yet-another-site
11 ...virtual aliases for more domains...

Notes:

  • Line 2: The virtual_alias_domains setting tells Postfix that example.com is a so-called virtual alias domain. If you omit this setting then Postfix will reject mail (relay access denied) or will not be able to deliver it (mail for example.com loops back to myself).

    NEVER list a virtual alias domain name as a mydestination domain!

  • Lines 3-11: The /etc/postfix/virtual file contains the virtual aliases. With the example above, mail for postmaster@example.com goes to the local postmaster, while mail for joe@example.com goes to the remote address joe@somewhere, and mail for jane@example.com goes to the remote address jane@somewhere-else. Mail for all other addresses in example.com is rejected with the error message "User unknown".

  • Line 10: The commented out entry (text after #) shows how one would implement a catch-all virtual alias that receives mail for every example.com address not listed in the virtual alias file. This is not without risk. Spammers nowadays try to send mail from (or mail to) every possible name that they can think of. A catch-all mailbox is likely to receive many spam messages, and many bounces for spam messages that were sent in the name of anything@example.com.

Execute the command "postmap /etc/postfix/virtual" after changing the virtual file, and execute the command "postfix reload" after changing the main.cf file.

More details about the virtual alias file are given in the virtual(5) manual page, including multiple addresses on the right-hand side.

Mailing lists

The examples that were given above already show how to direct mail for virtual postmaster addresses to a local postmaster. You can use the same method to direct mail for any address to a local or remote address.

There is one major limitation: virtual aliases and virtual mailboxes can't directly deliver to mailing list managers such as majordomo. The solution is to set up virtual aliases that direct virtual addresses to the local delivery agent:

/etc/postfix/main.cf:
virtual_alias_maps = hash:/etc/postfix/virtual

/etc/postfix/virtual:
listname-request@example.com listname-request
listname@example.com listname
owner-listname@example.com owner-listname

/etc/aliases:
listname: "|/some/where/majordomo/wrapper ..."
owner-listname: ...
listname-request: ...

This example assumes that in main.cf, $myorigin is listed under the mydestination parameter setting. If that is not the case, specify an explicit domain name on the right-hand side of the virtual alias table entries or else mail will go to the wrong domain.

More information about the Postfix local delivery agent can be found in the local(8) manual page.

Why does this example use a clumsy virtual alias instead of a more elegant transport mapping? The reason is that mail for the virtual mailing list would be rejected with "User unknown". In order to make the transport mapping work one would still need a bunch of virtual alias or virtual mailbox table entries.

  • In case of a virtual alias domain, there would need to be one identity mapping from each mailing list address to itself.
  • In case of a virtual mailbox domain, there would need to be a dummy mailbox for each mailing list address.

Autoreplies

In order to set up an autoreply for virtual recipients while still delivering mail as normal, set up a rule in a virtual alias table:

/etc/postfix/main.cf:
virtual_alias_maps = hash:/etc/postfix/virtual

/etc/postfix/virtual:
user@domain.tld user@domain.tld, user@domain.tld@autoreply.mydomain.tld

This delivers mail to the recipient, and sends a copy of the mail to the address that produces automatic replies. The address can be serviced on a different machine, or it can be serviced locally by setting up a transport map entry that pipes all mail for autoreply.mydomain.tld into some script that sends an automatic reply back to the sender.

DO NOT list autoreply.mydomain.tld in mydestination!

/etc/postfix/main.cf:
transport_maps = hash:/etc/postfix/transport

/etc/postfix/transport:
autoreply.mydomain.tld autoreply:

/etc/postfix/master.cf:
# =============================================================
# service type private unpriv chroot wakeup maxproc command
# (yes) (yes) (yes) (never) (100)
# =============================================================
autoreply unix - n n - - pipe
    flags= user=nobody argv=/path/to/autoreply $sender $mailbox

This invokes /path/to/autoreply with the sender address and the user@domain.tld recipient address on the command line.
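What /path/to/autoreply actually does is up to you; the path and script are placeholders. Below is a minimal sketch of such a script, written as a function so the generated message is printed rather than handed to a mail transport. A real script would pipe the output into sendmail(1) with an empty envelope sender, and the null-sender guard shown is important to avoid replying to bounces:

```shell
# Hypothetical autoreply generator. pipe(8) invokes the real script as:
#   /path/to/autoreply $sender $mailbox
autoreply() {
    local sender=$1      # envelope sender ($sender from pipe(8))
    local recipient=$2   # original recipient ($mailbox from pipe(8))
    # Never auto-reply to the null sender (bounces), or mail loops ensue.
    [ -z "$sender" ] && return 0
    cat <<EOF
From: $recipient
To: $sender
Subject: Automatic reply from $recipient

Your message has been received and will be read shortly.
EOF
}

# Demo invocation with hypothetical addresses; a real deployment would
# instead pipe the output into: sendmail -f '' "$sender"
autoreply alice@somewhere.example user@domain.tld
```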

For more information, see the pipe(8) manual page, and the comments in the Postfix master.cf file.


REFERENCE

http://www.postfix.org/VIRTUAL_README.html

Instructions to Import GeoNetMap in MySql

SkyHi @ Friday, February 12, 2010

The following article assumes you have already created a database or are planning to import it into an existing database.

1. Copy all the map files into the database's data directory. For example: c:\mysql\data\db_name

2. Download the MySqlGeoNetMapImport.sql script file and save the file to the current directory.

3. Open a MySql command window and select the target database, e.g. USE db_name

4. Run the following command

source MySqlGeoNetMapImport.sql;

5. You can run the following query to test whether the import succeeded:

select city
From Cities c
inner Join Subnets s on s.CityId=c.CityId
Where SubnetAddress='207.71.204';

If the tables are created successfully but contain no data, try running the import script again.

Instructions on performing a database update

1. Copy all the updated map files into the database's data directory. For example: c:\mysql\data\db_name

2. Open a MySql command window and select the target database, e.g. USE db_name

3. Download the MySqlGeoNetMapUpdate.sql script file and save the file to the current directory.

4. Run the following command:

source MySqlGeoNetMapUpdate.sql;

Reference
http://epicenter.geobytes.com/MySqlInstructions.htm


4.5.1.4. Executing SQL Statements from a Text File

The mysql client typically is used interactively, like this:

shell> mysql db_name

However, it is also possible to put your SQL statements in a file and then tell mysql to read its input from that file. To do so, create a text file text_file that contains the statements you wish to execute. Then invoke mysql as shown here:

shell> mysql db_name < text_file

If you place a USE db_name statement as the first statement in the file, it is unnecessary to specify the database name on the command line:

shell> mysql < text_file

If you are already running mysql, you can execute an SQL script file using the source command or \. command:

mysql> source file_name
mysql> \. file_name

Sometimes you may want your script to display progress information to the user. For this you can insert statements like this:

SELECT '<info_to_display>' AS ' ';

The statement shown outputs <info_to_display>.

You can also invoke mysql with the --verbose option, which causes each statement to be displayed before the result that it produces.

As of MySQL 5.0.54, mysql ignores Unicode byte order mark (BOM) characters at the beginning of input files. Previously, it read them and sent them to the server, resulting in a syntax error. Presence of a BOM does not cause mysql to change its default character set. To do that, invoke mysql with an option such as --default-character-set=utf8.
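For older clients, or any other tool that chokes on a BOM, the mark is easy to detect and strip yourself. A sketch using a demo file at a scratch path (plain head/tail byte offsets, nothing mysql-specific):

```shell
# Create a demo SQL file that starts with a UTF-8 BOM (bytes EF BB BF).
printf '\xef\xbb\xbfSELECT 1;\n' > /tmp/bom-demo.sql

# Detect the BOM by comparing the first three bytes.
if [ "$(head -c 3 /tmp/bom-demo.sql)" = "$(printf '\xef\xbb\xbf')" ]; then
    # Strip it: tail -c +4 outputs from the fourth byte onward.
    tail -c +4 /tmp/bom-demo.sql > /tmp/nobom-demo.sql
    echo "BOM stripped"
fi
```

The stripped copy starts directly with the SQL text and is safe to feed to any client version.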

For more information about batch mode, see Section 3.5, “Using mysql in Batch Mode”.


User Comments

Posted by Sanjay Ichalkaranje on October 14 2003 4:21am[Delete] [Edit]

To run two sql scripts at a time you can use cat command available in Linux.

cat file1.sql file2.sql | mysql -u USERNAME -p
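This works because mysql simply reads statements from standard input, and cat emits the files in the order given, so file1's statements run before file2's. A mysql-free illustration of the ordering, using two hypothetical script files:

```shell
# Two hypothetical script files.
printf 'CREATE TABLE t1 (id INT);\n'  > /tmp/file1-demo.sql
printf 'INSERT INTO t1 VALUES (1);\n' > /tmp/file2-demo.sql

# This is exactly what the mysql client would receive on stdin, in order:
cat /tmp/file1-demo.sql /tmp/file2-demo.sql
```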

Posted by Jeff Zohrab on December 4 2005 6:14pm[Delete] [Edit]

For windows users, use forward slashes for the path delimiters. You also don't need to enclose the path to the file in quotes. E.g., the following works:

mysql> source C:/Documents and Settings/My name here/My Documents/spike_loadingMySQLDB/createTables.sql;

Posted by Dan Zaner on April 12 2006 9:43pm[Delete] [Edit]

If you are attempting to use a batch file that is UTF8-encoded (which will handle all your accented latin characters as well as chinese, japanese, etc.), make sure that you start 'mysql' with the '--default-character-set=utf8' option or you will end up with whatever the server default is. If the server default is not utf8, your batch file will most likely produce undesirable results.

Posted by theo theunissen on October 21 2009 4:20pm[Delete] [Edit]

We use subversion for both code and MySql database changes (script and data).

1. We have created a file /path/to/script/database.sql that contains the database changes. This file is committed
2. We have a bash script to update both code and executes MySQL changes

The script looks like:
#!/bin/bash
svn up
mysql -u -p -h < /path/to/updated_script/database.sql
--
If the file database.sql is empty, then nothing is changed.

Reference
http://dev.mysql.com/doc/refman/5.0/en/batch-commands.html



Load text into mysql command line:

1. create a database.

>create database test;

2. Copy the extracted zip files (the files you want to import) into /var/lib/mysql/test/

3. # cat MySqlGeoNetMapImport.sql    (review the script; comment out what you do not want to load)

4. a. mysql -uroot -pPassword
   b. > use test;
   c. > source /var/lib/mysql/test/MySqlGeoNetMapImport.sql

5. done.



Thursday, February 11, 2010

How to Add Google Analytics to Your Blogger Blog

SkyHi @ Thursday, February 11, 2010

Grab Your Google Analytics Code Block

  1. Login to Google Analytics at http://google.com/analytics/. The main Settings page loads.
  2. Click on Add Website Profile. A form displays.
  3. Select Add a Profile for a New Domain.
  4. Enter the URL of your site or blog.
  5. Select your country and time zone. Click Finish.
  6. Analytics provides you with a code block – a swatch of HTML – to add to your site’s pages.
  7. Highlight the code block and then copy it by selecting Edit > Copy or Ctrl-C or Command-C.

Add the Google Analytics Code Block to Your Blogger Blog

  1. Login to http://www.blogger.com/. The Dashboard loads.
  2. Under the blog you want to add Analytics tracking to, click on Layout or Template.
  3. Click on Edit HTML. An editing screen for your blog template’s HTML displays. Don’t freak out. Just scroll to the bottom.
  4. Look for the end of the template. It’ll look like:

</div> </div>
<!-- end outer-wrapper -->
(Google Analytics Code Block is going to go here!!!)
</body>
</html>

  1. Put your cursor right before that </body> tag.
  2. Paste the Google Analytics Code Block by selecting Edit > Paste, Ctrl -V or Command-V.
  3. Click Save Changes.

You have now added the Google Analytics Code Block to Your Blogger Blog.

Check Your Work

  1. To ensure that you have successfully added the Google Analytics Code Block to your Blogger blog, go back to http://google.com/analytics/.
  2. Next to your blog’s URL it will say either Receiving Data (you were successful) or Tracking Not Installed (something is amiss).
  3. If it said Tracking Not Installed, click on Check Status. Google then checks your blog for the Analytics Code Block and reports back whether it found it.
  4. If not, try re-pasting the Code Block in.

REFERENCE
http://www.andywibbels.com/how-to-add-google-analytics-to-your-blogger-blog/

“How to” Dual boot Ubuntu and Windows 7 (Ubuntu installed first)

SkyHi @ Thursday, February 11, 2010
“How to” Dual boot Ubuntu and Windows 7 (Ubuntu installed first)

##This guide does not work for Karmic or future releases. For that, see page 14 and follow the link provided by presence1960 on how to dual boot with GRUB 2 (requires some reading). I will update this guide for GRUB 2 when I have more time##

I have recently seen many posts from people trying to dual boot Ubuntu and Windows 7 beta, but not succeeding. So I tried it out myself and found a solution.
Index
1. Obtain a copy of Windows7.
2. Partition your disk with gparted.
3. Install Windows7.
4. Re-install Grub.
5. Edit Grub to List Windows 7.
6. Have Fun.
__________________________________________________ ________________________________

1. Obtain a copy of windows 7.

*You can also find a torrent of this but for legal reasons I cannot provide a link. *


2. Partition your disk

**This does go wrong in some cases, if in doubt back up your valuable data.**

Boot from a Ubuntu live cd or a gparted live cd.
Start up gparted. If Ubuntu occupies the whole disk, you need to resize its partition to free at least 8 GB for Windows 7. (Make sure Windows 7 ends up on the second partition, to make things easier for GRUB.) You will be left with some unallocated space on your hard disk; you can format it as NTFS now, or do it later during the Windows install.

3. Install Windows 7

Follow the on-screen instructions. Select the unpartitioned space to format and install Windows on, or, if you already made it NTFS, choose your NTFS partition.

**It will ask for a product key but you have 30 days to do that. Note: Beta keys will work with the RC**


4. Re-install GRUB

Now you have Windows 7, but it has completely overwritten your boot loader, so you need to re-install GRUB.
Boot from the ubuntu live cd and go to terminal.
Type in terminal:

"sudo grub"
"grub> find /boot/grub/stage1"

That should return your Ubuntu partition in the form of (hdX,Y), use that:

grub> root (hdX,Y)
grub> setup (hd0)
grub> quit

(you don’t need to type the grub> bit)

That re-installs GRUB, but Windows 7 will not yet appear in the boot menu.

5. Edit grub.
Go to terminal from normal ubuntu and type :

“sudo gedit /boot/grub/menu.lst”

A large text file will open; at the bottom, leave a blank line and add this:

title windows 7 beta (Loader)
root (hd0,1)
savedefault
makeactive
chainloader +1

(Do not type this line. If the entry does not work on re-boot, try “hd0,0” or “hd0,2” and so on until it works.)

Now that that is done, you can happily re-boot into Windows 7 or Ubuntu.

******************Edit***********************
Hi
I have remembered that if you also have Vista installed on your machine when you install 7, Windows 7 will add itself to the Vista bootloader.

So you will need to point GRUB at the Vista partition so it loads the Vista loader, which will then give you the option of 7 or Vista.

Also, to work out which partition your Windows 7 partition is, use gparted; it will report something like "Windows 7 sda2". Remember that legacy GRUB counts from zero, so sda2 corresponds to (hd0,1). If you have two internal hard drives, change the drop-down in the top right to the appropriate disk; a partition on the second drive (sdb) uses hd1 instead of hd0, so sdb2 would be (hd1,1). And so on.
****************Edit*************************

REFERENCE
http://ubuntuforums.org/showthread.php?t=1035999

Proactive Security Challenge

SkyHi @ Thursday, February 11, 2010

Linux / UNIX Tar Full and Incremental Tape Backup Shell Script

SkyHi @ Thursday, February 11, 2010
#!/bin/bash
# A UNIX / Linux shell script to back up dirs to a tape device like /dev/st0 (Linux).
# This script makes both full and incremental backups.
# You need at least two sets of five tapes. Label each tape Mon, Tue, Wed, Thu and Fri.
# You can run the script at midnight or early morning each day using cron jobs.
# The operator or sysadmin can replace the tape every day after the script has finished.
# The script must run as root, or configure permissions via sudo.
# -------------------------------------------------------------------------
# Copyright (c) 1999 Vivek Gite
# This script is licensed under GNU GPL version 2.0 or above
# -------------------------------------------------------------------------
# This script is part of nixCraft shell script collection (NSSC)
# Visit http://bash.cyberciti.biz/ for more information.
# -------------------------------------------------------------------------
# Last updated on : March-2003 - Added log file support.
# Last updated on : Feb-2007 - Added support for excluding files / dirs.
# -------------------------------------------------------------------------
LOGBASE=/root/backup/log

# Backup dirs; do not prefix /
BACKUP_ROOT_DIR="home sales"

# Get today's day, like Mon, Tue and so on
NOW=$(date +"%a")

# Tape device name
TAPE="/dev/st0"

# Exclude file
TAR_ARGS=""
EXCLUDE_CONF=/root/.backup.exclude.conf

# Backup log file
LOGFILE=$LOGBASE/$NOW.backup.log

# Path to binaries
TAR=/bin/tar
MT=/bin/mt
MKDIR=/bin/mkdir

# ------------------------------------------------------------------------
# Excluding files when using tar
# Create a file called $EXCLUDE_CONF using a text editor
# Add files matching patterns such as the following (shell wildcard patterns):
# home/vivek/iso
# home/vivek/*.cpp~
# ------------------------------------------------------------------------
[ -f $EXCLUDE_CONF ] && TAR_ARGS="-X $EXCLUDE_CONF"

#### Custom functions #####
# Make a full backup
full_backup(){
    local old=$(pwd)
    cd /
    $TAR $TAR_ARGS -cvpf $TAPE $BACKUP_ROOT_DIR
    $MT -f $TAPE rewind
    $MT -f $TAPE offline
    cd $old
}

# Make a partial backup
partial_backup(){
    local old=$(pwd)
    cd /
    $TAR $TAR_ARGS -cvpf $TAPE -N "$(date -d '1 day ago')" $BACKUP_ROOT_DIR
    $MT -f $TAPE rewind
    $MT -f $TAPE offline
    cd $old
}

# Make sure all dirs exist
verify_backup_dirs(){
    local s=0
    for d in $BACKUP_ROOT_DIR
    do
        if [ ! -d /$d ];
        then
            echo "Error : /$d directory does not exist!"
            s=1
        fi
    done
    # if not; just die
    [ $s -eq 1 ] && exit 1
}

#### Main logic ####

# Make sure log dir exists
[ ! -d $LOGBASE ] && $MKDIR -p $LOGBASE

# Verify dirs
verify_backup_dirs

# Okay, let us start the backup procedure.
# If it is Monday, make a full backup;
# for Tue to Fri, make a partial backup;
# no backups on weekends.
case $NOW in
Mon) full_backup;;
Tue|Wed|Thu|Fri) partial_backup;;
*) ;;
esac > $LOGFILE 2>&1

Install this script using a cron job.
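For example, a crontab entry (installed with crontab -e as root; the script path here is an assumption) to run the backup at 00:30 on weekdays could look like:

```shell
# m h dom mon dow  command
# Run the tape backup at 00:30 Mon-Fri; adjust the path to wherever
# you saved this script.
30 0 * * 1-5 /root/bin/tape-backup.sh
```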

See how to use the tar and mt commands. To restore files / data from tar archives:

List the files:
# tar tvf /dev/st0
Extract the entire archive into current directory:
# tar xvpf /dev/st0
Extract only certain files or dirs into the current directory. For example, to extract only the home/vivek directory:
# tar xvpf /dev/st0 home/vivek
You can also restore one file:
# tar xvpf /dev/st0 home/vivek/app/src/main.c

REFERENCE
http://bash.cyberciti.biz/backup/tar-full-incremental-tape-backup-script/

HowTo: Backup MySQL Databases, Web server Files to a FTP Server Automatically

SkyHi @ Thursday, February 11, 2010
This is a simple backup solution for people who run their own web server and MySQL database server on a dedicated or VPS server. Most dedicated hosting providers offer a backup service using NAS or FTP servers, hooking you up to their redundant centralized storage array over a private VLAN. Since I manage a couple of boxes, here is my own automated solution. If you just want a shell script, go here (you just need to provide appropriate input and it will generate an FTP backup script for you on the fly; you can also grab my PHP script generator code).
Making Incremental Backups With tar

You can make tape backups. However, sometimes tape is not an option. GNU tar allows you to make incremental backups with the -g option. In this example, the tar command will make an incremental backup of the /var/www/html, /home, and /etc directories; run:
# tar -g /var/log/tar-incremental.log -zcvf /backup/today.tar.gz /var/www/html /home /etc

Where,

* -g: Create/list/extract a new GNU-format incremental backup, storing snapshot information in the /var/log/tar-incremental.log file.
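The snapshot behaviour of -g can be seen in a scratch directory (throwaway paths, not the article's real ones): the first run is a full backup, the second picks up only what is new.

```shell
# Throwaway paths, not the article's real directories. The first run is
# a full (level 0) backup; the second archives only what is new, based
# on the snapshot file given to -g.
mkdir -p incdemo/site
echo one > incdemo/site/a.html
tar -g incdemo/snap.log -zcf incdemo/level0.tgz incdemo/site   # full
echo two > incdemo/site/b.html                                 # new file
tar -g incdemo/snap.log -zcf incdemo/level1.tgz incdemo/site   # incremental
tar -tzf incdemo/level1.tgz   # lists the directory and only b.html
```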

Making MySQL Databases Backup

mysqldump is a client program for dumping or backing up MySQL databases, tables and data. For example, the following command displays the list of databases:
$ mysql -u root -h localhost -p -Bse 'show databases'

Output:

Enter password:
brutelog
cake
faqs
mysql
phpads
snews
test
tmp
van
wp

Next, you can back up each database with the mysqldump command:
$ mysqldump -u root -h localhost -pmypassword faqs | gzip -9 > faqs-db.sql.gz
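To restore such a dump you essentially reverse the pipeline: decompress and feed the SQL back to mysql. A sketch with a stand-in file (the real archive would come from mysqldump as above; 'faqs' is the example database name):

```shell
# Stand-in dump to demonstrate the round trip (in practice the .gz file
# comes from the mysqldump pipeline above).
printf 'CREATE TABLE faq_demo (id INT);\n' > faqs-db.sql
gzip -9 -f faqs-db.sql            # produces faqs-db.sql.gz, as above
gzip -t faqs-db.sql.gz            # verify archive integrity first
# The real restore would then be:
#   gunzip < faqs-db.sql.gz | mysql -u root -h localhost -p faqs
gunzip -c faqs-db.sql.gz          # emits the SQL that mysql would receive
```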
Creating A Simple Backup System For Your Installation

The main advantage of using FTP or NAS backup is protection from data loss. You can use various protocols to back up data:

1. FTP
2. SSH
3. RSYNC
4. Other Commercial solutions

However, I am going to write about the FTP backup solution here. The idea is as follows:

* Make a full backup every Sunday night, i.e. back up everything every Sunday.
* On other nights, back up only those files that have been modified since the full backup (incremental backup).
* This gives a seven-day backup cycle.

Our Sample Setup

Your-server ===> ftp/nas server
IP:202.54.1.10 ===> 208.111.2.5

Let us assume that your ftp login details are as follows:

* FTP server IP: 208.111.2.5
* FTP Username: nixcraft
* FTP Password: somepassword
* FTP Directory: /home/nixcraft (or /)

You store all data as follows:
=> /home/nixcraft/full/mm-dd-yy/files - Full backup
=> /home/nixcraft/incremental/mm-dd-yy/files - Incremental backup
Automating Backup With tar

Now you know how to back up files and MySQL databases using the tar and mysqldump commands. It is time to write a shell script that will automate the entire procedure:

1. First, the script will collect all data from both the MySQL database server and the file system into a temporary directory called /backup, using the tar command.
2. Next, the script will log in to your FTP server and create the directory structure discussed above.
3. The script will upload all files from /backup to the FTP server.
4. The script will remove the temporary backup from the /backup directory.
5. The script will send you an email notification if the FTP backup failed for any reason.

You must have the following commands installed (use the yum or apt-get package manager to install the ftp client called ncftp):

* ncftp ftp client
* mysqldump command
* GNU tar command

Here is the sample script:

#!/bin/sh
# System + MySQL backup script
# Full backup day - Sun (rest of the day do incremental backup)
# Copyright (c) 2005-2006 nixCraft
# This script is licensed under GNU GPL version 2.0 or above
# Automatically generated by http://bash.cyberciti.biz/backup/wizard-ftp-script.php
# ---------------------------------------------------------------------
### System Setup ###
DIRS="/home /etc /var/www"
BACKUP=/tmp/backup.$$
NOW=$(date +"%d-%m-%Y")
INCFILE="/root/tar-inc-backup.dat"
DAY=$(date +"%a")
FULLBACKUP="Sun"
### MySQL Setup ###
MUSER="admin"
MPASS="mysqladminpassword"
MHOST="localhost"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
GZIP="$(which gzip)"
### FTP server Setup ###
FTPD="/home/vivek/incremental"
FTPU="vivek"
FTPP="ftppassword"
FTPS="208.111.11.2"
NCFTP="$(which ncftpput)"
### Other stuff ###
EMAILID="admin@theos.in"
### Start Backup for file system ###
[ ! -d $BACKUP ] && mkdir -p $BACKUP || :
### See if we want to make a full backup ###
if [ "$DAY" == "$FULLBACKUP" ]; then
FTPD="/home/vivek/full"
FILE="fs-full-$NOW.tar.gz"
tar -zcvf $BACKUP/$FILE $DIRS
else
i=$(date +"%Hh%Mm%Ss")
FILE="fs-i-$NOW-$i.tar.gz"
tar -g $INCFILE -zcvf $BACKUP/$FILE $DIRS
fi
### Start MySQL Backup ###
# Get all databases name
DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')"
for db in $DBS
do
FILE=$BACKUP/mysql-$db.$NOW-$(date +"%T").gz
$MYSQLDUMP -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE
done
### Dump backup using FTP ###
#Start FTP backup using ncftp
ncftp -u "$FTPU" -p "$FTPP" $FTPS <<EOF
mkdir $FTPD
mkdir $FTPD/$NOW
cd $FTPD/$NOW
lcd $BACKUP
mput *
quit
EOF
### Find out if ftp backup failed or not ###
if [ "$?" == "0" ]; then
rm -f $BACKUP/*
else
T=/tmp/backup.fail
echo "Date: $(date)">$T
echo "Hostname: $(hostname)" >>$T
echo "Backup failed" >>$T
mail -s "BACKUP FAILED" "$EMAILID" <$T
rm -f $T
fi


How Do I Setup a Cron Job To Backup Data Automatically?

Just add cron job as per your requirements:
13 0 * * * /home/admin/bin/ftpbackup.sh >/dev/null 2>&1
Generate FTP backup script

Since I set up many Linux boxes, here is my own FTP backup script generator. You just need to provide appropriate input and it will generate an FTP backup script for you on the fly.

REFERENCE
http://www.cyberciti.biz/tips/how-to-backup-mysql-databases-web-server-files-to-a-ftp-server-automatically.html

Automating backups with tar

SkyHi @ Thursday, February 11, 2010

Automating backups with tar

It is always worthwhile to automate backup tasks. Automation offers enormous opportunities for using your Linux server to achieve the goals you set. The example below is our backup script, called backup.cron. This script is designed to run on any computer by changing only the four variables:
  1. COMPUTER
  2. DIRECTORIES
  3. BACKUPDIR
  4. TIMEDIR
We suggest that you set this script up and run it at the beginning of the month for the first time, and then let it run for a month before making major changes. In our example below we back up to a directory on the local server (BACKUPDIR), but you could modify this script to back up to a tape on the local server or via an NFS-mounted file system.
  1. Create the backup script backup.cron file, touch /etc/cron.daily/backup.cron and add the following lines to this backup file:
    #!/bin/sh
    # full and incremental backup script
    # created 07 February 2000
    # Based on a script by Daniel O'Callaghan 
    # and modified by Gerhard Mourani 
    
    #Change the 5 variables below to fit your computer/backup
    
    COMPUTER=deep                            # name of this computer
    DIRECTORIES="/home"                      # directories to backup
    BACKUPDIR=/backups                       # where to store the backups
    TIMEDIR=/backups/last-full               # where to store time of full backup
    TAR=/bin/tar                              # name and location of tar
    
    #You should not have to change anything below here
    
    PATH=/usr/local/bin:/usr/bin:/bin
    DOW=`date +%a`                # Day of the week e.g. Mon
    DOM=`date +%d`                # Date of the Month e.g. 27
    DM=`date +%d%b`              # Date and Month e.g. 27Sep
    
    # On the 1st of the month a permanent full backup is made
    # Every Sunday a full backup is made - overwriting last Sunday's backup
    # The rest of the time an incremental backup is made. Each incremental
    # backup overwrites last week's incremental backup of the same name.
    #
    # if NEWER = "", then tar backs up all files in the directories
    # otherwise it backs up files newer than the NEWER date. NEWER
    # gets its date from the file written every Sunday.
    
    
    # Monthly full backup
    if [ $DOM = "01" ]; then
            NEWER=""
            $TAR $NEWER -cf $BACKUPDIR/$COMPUTER-$DM.tar $DIRECTORIES
    fi
    
    # Weekly full backup
    if [ $DOW = "Sun" ]; then
            NEWER=""
            NOW=`date +%d-%b`
    
            # Update full backup date
            echo $NOW > $TIMEDIR/$COMPUTER-full-date
            $TAR $NEWER -cf $BACKUPDIR/$COMPUTER-$DOW.tar $DIRECTORIES
    
    # Make incremental backup - overwrite last weeks
    else
    
            # Get date of last full backup
            NEWER="--newer `cat $TIMEDIR/$COMPUTER-full-date`"
            $TAR $NEWER -cf $BACKUPDIR/$COMPUTER-$DOW.tar $DIRECTORIES
    fi
    Example 33-1. Backup directory of a week
    Here is an abbreviated look at the backup directory after one week:
    [root@deep] /# ls -l /backups/
    
    total 22217
    -rw-r--r--   1 root     root     10731288 Feb  7 11:24 deep-01Feb.tar
    -rw-r--r--   1 root     root         6879 Feb  7 11:24 deep-Fri.tar
    -rw-r--r--   1 root     root         2831 Feb  7 11:24 deep-Mon.tar
    -rw-r--r--   1 root     root         7924 Feb  7 11:25 deep-Sat.tar
    -rw-r--r--   1 root     root     11923013 Feb  7 11:24 deep-Sun.tar
    -rw-r--r--   1 root     root         5643 Feb  7 11:25 deep-Thu.tar
    -rw-r--r--   1 root     root         3152 Feb  7 11:25 deep-Tue.tar
    -rw-r--r--   1 root     root         4567 Feb  7 11:25 deep-Wed.tar
    drwxr-xr-x   2 root     root         1024 Feb  7 11:20 last-full
    Important: The directory where the backups are stored (BACKUPDIR) and the directory where the time of the full backup is stored (TIMEDIR) must exist or be created before using the backup script, or you will receive an error message.
  2. If you are not running this backup script from the beginning of the month (01-month-year), the incremental backups will need the time of the Sunday backup to be able to work properly. If you start in the middle of the week, you will need to create the time file in the TIMEDIR directory, using the following command:
    [root@deep] /# date +%d%b > /backups/last-full/deep-full-date
    Where /backups/last-full is our TIMEDIR variable, in which we store the time of the full backup, deep is the name of our server (the file is named $COMPUTER-full-date), and the time file consists of a single line with the present date, e.g. 15Feb.
  3. Make this script executable and change its default permissions so that it is writable only by the super-user root (mode 755):
    [root@deep] /# chmod 755 /etc/cron.daily/backup.cron
Note: Because this script is in the /etc/cron.daily directory, it will be automatically run as a cron job at one o'clock in the morning every day.
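The restore order this scheme implies is: extract the last full (Sunday or monthly) archive first, then the newer daily archives on top. A self-contained sketch with sample data standing in for the real /backups files (in production you would extract relative to /):

```shell
# Self-contained sketch of restore order; sample data stands in for the
# real /backups archives.
mkdir -p restdemo/home && cd restdemo
echo v1 > home/file.txt
tar -cf deep-Sun.tar home          # weekly "full" backup
echo v2 > home/file.txt            # the file changes during the week
tar -cf deep-Mon.tar home          # daily archive holds the newer copy
rm -rf home                        # simulate data loss
tar -xf deep-Sun.tar               # restore the full backup first...
tar -xf deep-Mon.tar               # ...then the newer dailies on top
cat home/file.txt                  # -> v2
cd ..
```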


Backup script for Linux using tar and find

SkyHi @ Thursday, February 11, 2010

Backup script for Linux using tar and find

Every Linux distribution provides a range of utilities that you can use to make backups of your files. Here is how I get the job done with crontab and a shell script using tar and find.

Goal

I wanted to secure all the data files on my system on a regular basis. Regular, for me, implies automated. I've tried doing a manual backup every week or so, but that caused too much hassle, so effectively I stopped making backups...
Further, I wanted an up-to-date backup of the most essential configuration settings on my system. This would help me in case I accidentally lost some data files or settings. In case of losing everything, I would need to reinstall my Linux distribution (plus extra installed software) and restore data files and settings. I decided that a full backup of the whole system wouldn't be worth the effort (and resources!).

Choice of hardware

Say "backup" and most Unix people think "tape drive". However, nowadays hard drives are so cheap that I chose to add an extra hard drive to my AMD 400 machine. This cheap option has the advantage that a hard drive can be mounted automatically, with no need to manually insert tapes. A disadvantage is that the backup resides in the same physical unit as the very data it is supposed to secure. However, since I do have a CD writer on my local network, I still have the option to copy a backup to a CD once in a while.
My main HD is 6 GB. The backup HD is 10 GB.

Script

After adding the drive to my machine I wrote a little shell script (for bash) that basically does the following:
  • it mounts my backupdrive
  • it checks the date
  • every Sunday it makes a full backup of some data files and some configuration settings, and removes older incremental backups; on other days it backs up files changed in the last day
  • it dumps all the contents of a mysql database to the backup drive and zips the file
  • it unmounts the backup drive
This script (I've stored it in /root/scripts) is called every night at 3:01 AM by cron. The crontab entry looks like:
1 3 * * * /root/scripts/daily_backup
Add this line using crontab -e as root.

Code

Here's the actual code:
#!/bin/bash
#
# creates backups of essential files
#
DATA="/home /root /usr/local/httpd"
CONFIG="/etc /var/lib /var/named"
LIST="/tmp/backlist_$$.txt"
#
mount /mnt/backup
set $(date)
#
if test "$1" = "Sun" ; then
        # weekly a full backup of all data and config. settings:
        #
        tar cfz "/mnt/backup/data/data_full_$6-$2-$3.tgz" $DATA
        rm -f /mnt/backup/data/data_diff*
        #
        tar cfz "/mnt/backup/config/config_full_$6-$2-$3.tgz" $CONFIG
        rm -f /mnt/backup/config/config_diff*
else
        # incremental backup:
        #
        find $DATA -depth -type f \( -ctime -1 -o -mtime -1 \) -print > $LIST
        tar cfzT "/mnt/backup/data/data_diff_$6-$2-$3.tgz" "$LIST"
        rm -f "$LIST"
        #
        find $CONFIG -depth -type f  \( -ctime -1 -o -mtime -1 \) -print > $LIST
        tar cfzT "/mnt/backup/config/config_diff_$6-$2-$3.tgz" "$LIST"
        rm -f "$LIST"
fi
#
# create sql dump of databases:
mysqldump -u root --password=mypass --opt mydb > "/mnt/backup/database/mydb_$6-$2-$3.sql"
gzip "/mnt/backup/database/mydb_$6-$2-$3.sql"
#
umount /mnt/backup

Discussion

data files:
All my data files are in /root, /home or /usr/local/httpd.

settings:
I chose to back up all the settings in /etc (where most essential settings are stored), /var/named (nameserver settings) and /var/lib (not sure about the importance of this one...). I might need to add more to the list, but I am still far from being a Unix guru ;-). All suggestions are welcome!

tar versus cpio
The first version of this script used cpio instead of tar to create backups. However, I found the cpio format not very handy for restoring single files, so I changed it to tar. A disadvantage of using tar is that you can't (as far as I know) simply pipe the output of a find to it.
Using a construct like tar cvfz archive.tgz `find /home -ctime -1 -depth -print` caused errors for files that contained a space (" ") character. This problem was solved by writing the output of find to a file first (and using tar with the -T option).
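With GNU find and tar there is also a null-delimited variant that sidesteps the whitespace problem without a temporary file list (a sketch in a scratch directory):

```shell
# Create files, one with a space in its name, to show the pipe copes:
mkdir -p nuldemo/dir
touch nuldemo/dir/plain.txt "nuldemo/dir/with space.txt"
find nuldemo/dir -type f -print0 | tar --null -czf nuldemo/backup.tgz -T -
tar -tzf nuldemo/backup.tgz        # lists both files, space intact
```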

REFERENCE


Example Full/Incremental Script

SkyHi @ Thursday, February 11, 2010
  • Typical list of directories you WANT to backup regularly...
    • DIRS is typically: /root /etc /home/

  • Using Month/Date for Backup Filenames
  • Simple Full Backup
    • tar zcvf /Backup_Dir/Month_Date.Full.tgz $DIRS

  • Simple Incremental Backups
    • Change the Month_Date.tgz file
    • Backup the files to a Different Server:/Disks [ reasons ]

    • Simple Daily Incremental Backup
      find $DIRS -mtime -1 -type f -print | tar zcvf /BackupDir_1/Month_Date.tgz -T -

    • Simple Weekly Incremental Backup
      find $DIRS -mtime -7 -type f -print | tar zcvf /BackupDir_7/Month_Date.7.tgz -T -
        use -mtime -32 to cover for unnoticed failed incremental backups from last week

    • Simple Monthly Incremental Backup
      find $DIRS -mtime -31 -type f -print | tar zcvf /BackupDir_30/Year_Month.30.tgz -T -
        use -mtime -93 to cover for unnoticed failed incremental backups from last month

  • The MAJOR problem with the "daily" incremental backups methodology
    • If one of the (today's) daily backup files is bad, for any reason,
      • then you don't have a backup for "today"
      • all subsequent (tomorrow, next day) backups are basically unusable
      • you cannot restore your system from backups -- you are missing files from "today"

    • If one of the weekly backup files is bad, for any reason,
      then all subsequent backups are basically unusable, until the next 30-day incremental backup

  • The solution is to always perform incremental backups since the last full backup
    • in the option -mtime $Cnt, the counter would increment daily

    • incremental backups should always start from the last "FULL" backup and include changes up to today
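One way to implement "incremental since the last full backup" is to touch a marker file after each full run and use find -newer against it instead of a daily counter. A scratch-directory sketch:

```shell
# A 'full' run records its time in a stamp file; the incremental pass
# then picks up everything modified after it, however many days elapse.
mkdir -p stampdemo/data
echo old > stampdemo/data/a.txt
tar -zcf stampdemo/full.tgz stampdemo/data && touch stampdemo/stamp
sleep 1                                  # ensure a later mtime below
echo new > stampdemo/data/b.txt          # modified after the full run
find stampdemo/data -type f -newer stampdemo/stamp -print \
    | tar -zcf stampdemo/inc.tgz -T -
tar -tzf stampdemo/inc.tgz               # contains only b.txt
```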


  • REFERENCE
    http://www.linux-backup.net/Full_Inc/

    Wednesday, February 10, 2010

    America's 10 most wanted botnets

    SkyHi @ Wednesday, February 10, 2010
    Botnet attacks are increasing, as cybercrime gangs use compromised computers to send spam, steal personal data, perpetrate click fraud and clobber Web sites in denial-of-service attacks. Here's a list of America's 10 most wanted botnets, based on an estimate by security firm Damballa of botnet size and activity in the United States.

    Slideshow: 11 security companies to watch

    No. 1: Zeus

    Compromised U.S. computers: 3.6 million

    Main crime use: The Zeus Trojan uses key-logging techniques to steal sensitive data such as user names, passwords, account numbers and credit card numbers. It injects fake HTML forms into online banking login pages to steal user data.

    No. 2: Koobface

    Compromised U.S. computers: 2.9 million

    Main crime use: This malware spreads via social networking sites MySpace and Facebook with faked messages or comments from "friends." When a user is enticed into clicking on a provided link to view a video, the user is prompted to obtain a necessary update, like a codec -- but it's really malware that can take control over the computer.

    No. 3: TidServ

    Compromised U.S. computers: 1.5 million

    Main crime use: This downloader Trojan spreads through spam e-mail, arriving as an attachment. It uses rootkit techniques to run inside common Windows services (sometimes bundled with fake antivirus software) or in Windows safe mode, and it can hide most of its files and registry entries.

    No. 4: Trojan.Fakeavalert

    Compromised U.S. computers: 1.4 million

    Main crime use: Formerly used for spamming, this botnet has shifted to downloading other malware, with its main focus on fake alerts and rogue antivirus software.

    No. 5: TR/Dldr.Agent.JKH

    Compromised U.S. computers: 1.2 million

    Main crime use: This remote Trojan posts encrypted data back to its command-and-control domains and periodically receives instruction. Often loaded by other malware, TR/Dldr.Agent.JKH currently is used as a clickbot, generating ad revenue for the botmaster through constant ad-specific activity.

    No. 6: Monkif

    Compromised U.S. computers: 520,000

    Main crime use: This crimeware's current focus is downloading an adware BHO (browser helper object) onto a compromised system.

    No. 7: Hamweq

    Compromised U.S. computers: 480,000

    Main crime use: Also known as IRCBrute, or an autorun worm, this backdoor worm makes copies of itself on the system and any removable drive it finds -- and anytime the removable drives are accessed, it executes automatically. An effective spreading mechanism, Hamweq creates registry entries to enable its automatic execution at every startup and injects itself into Explorer.exe. The botmaster using it can execute commands on and receive information from the compromised system.

    No. 8: Swizzor

    Compromised U.S. computers: 370,000

    Main crime use: A variant of the Lop malware, this Trojan dropper can download and launch files from the Internet on the victim's machine without the user's knowledge, installing an adware program and other Trojans.

    No. 9: Gammima

    Main crime use: Also known as Gamina, Gamania, Frethog, Vaklik and Krap, this crimeware focuses on stealing online game logins, passwords and account information. It uses rootkit techniques to load into the address space of other common processes, such as Explorer.exe, and will spread through removable media such as USB keys. It's also known to be the worm that got into the International Space Station in the summer of 2008.

    No. 10: Conficker

    Compromised U.S. computers: 210,000

    Main crime use: Also called Downadup, this downloader worm has spread significantly throughout the world, though not so much in the U.S. It's a complex downloader used to propagate other malware. Though it has been used to sell fake antivirus software, this crimeware currently seems to have no real purpose other than to spread. Industry watchers fear a more dangerous purpose will emerge.


    REFERENCE

    http://www.networkworld.com/news/2009/072209-botnets.html?page=2



    KVM VPS vs OpenVZ/Virtuozzo vs XEN VPS comparative chart.

    SkyHi @ Wednesday, February 10, 2010

    PerfoHost Virtual Private Servers powered by Kernel Based Virtual Machine

    Quote from linux-kvm.org: "KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU although work is underway to get the required changes upstream.

    Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, etc."

     

    Main features of KVM VPS are:

    • RAM and disk are not shared with other virtual machines on the main server. This makes your KVM VPS even more stable and excludes a possibility of overselling.

    • KVM VPS allows you to choose custom kernels as well as custom kernel modules for your KVM VPS guest OS.

    • You may set up a VPN server in KVM VPS (PPTP, OpenVPN, IPSec).

    • Full guest OS support. Allows you to install Windows, Linux, BSD, OpenSolaris, etc.

    • With KVM VPS you may run a window manager such as GNOME or KDE and interact with it using VNC.


    KVM VPS vs OpenVZ/Virtuozzo vs Xen

      KVM VPS OpenVZ / Virtuozzo Xen
    Dedicated filesystem + limited support +
    Dedicated RAM + +
    Dedicated server like isolation + +
    PPTP VPN + +
    OpenVPN + +
    IPSec VPN + limited support
    Firewall Configuration + limited support +
    NFS +
    Independent kernel + limited support
    Independent kernel modules + limited support
    Unlimited sockets and processes +
    Full guest OS support (Windows, Linux, BSD, OpenSolaris, etc.) + limited support
    Full window manager (GNOME, KDE, etc.) + limited support
    VNC connection + limited support



    KVM VPS with PerfoHost

    Performance & Flexibility & Security & Scalability

    Selecting the PerfoHost KVM VPS for your hosting platform gives you more of everything. As a scalable, managed solution, the KVM VPS decreases your time to market, allows you to focus on your core competencies, and frees up limited resources.

    With the KVM VPS you get a hosting environment with your own Unix or Windows virtual machine. Each VPS is a private and protected area that operates as an independent server.

    A VPS accommodates the present hosting needs of your business, but also provides an easy upgrade path to meet future requirements. Even if you already have a Web site, moving to a VPS can improve the online presence of your business by hosting your company e-mail or intranet.


    Unix KVM VPS plan Deposit Box KVM VPS Developer KVM VPS Enterprise KVM VPS Master KVM VPS
    Type virtual server virtual server virtual server virtual server
    Disk space, Mb 10000Mb 250000Mb 500000Mb 750000Mb
    Guaranteed RAM 256+ MB 512+ MB 768+ MB 1024+ MB
    Web sites/ Domains (yourname.com) 1 Unlimited Unlimited Unlimited
    SSH root access + + + +
    Static IP address (IPv4) + + + +
    Unlimited static IPv6 IP addresses + + + +
    Free incremental (5-day) backups (one free restore per month) + + + +
    Monthly data transfer, Gb Unlimited Unlimited Unlimited Unlimited
    Apache processes per day 150000 300000 450000 600000
    Web Visitors Unlimited Unlimited Unlimited Unlimited
    Cost per month $25.55 $45.55 $65.55 $85.55
    Cost per year (2 months free) $255.55 $455.55 $655.55 $855.55
      Order now! Order now! Order now! Order now!

    REFERENCE
    http://perfohost.com/kvm.vps.html