Wednesday, March 9, 2011

How to exclude a set of files from disk space usage calculation using du

SkyHi @ Wednesday, March 09, 2011
We have already discussed the most common options used with du, a tool to check/calculate file space usage. In its most basic form, du reports the size of all the files in a given directory. With the -s option it sums up the total size of a directory. But what if we don't want to include a set of files in the calculation? Using the --exclude and --exclude-from options, we can easily specify the files to be excluded.
Let's say we want to exclude all the *.png files while calculating the file space usage.
[shredder12]$ du -h --exclude='*.png'
399M
[shredder12]$ du -sh
513M
[shredder12]$ du -ch *.png

115M total
The --exclude option takes a shell-style wildcard pattern as input; quote it so the shell doesn't expand it before du sees it.
The -h option shows sizes in a human-readable format rather than a huge figure in bytes.
The -c option adds an extra line showing the grand total of all the files.
If the files to be excluded don't match any single pattern, you can create a list of patterns or filenames, one per line, and pass that file as the argument to du's --exclude-from option.
Say we have the list in the file named “list”.
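Putting it together, here is a small self-contained sketch; the directory, file names, and sizes are made up for the demo:

```shell
# Build a scratch directory so the numbers are reproducible
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/photo.png" bs=1024 count=100 2>/dev/null
dd if=/dev/zero of="$demo/notes.txt" bs=1024 count=50  2>/dev/null

# "list" holds one exclude pattern per line
printf '*.png\n' > "$demo/list"

# Quote the pattern so the shell doesn't expand it before du sees it
du -sh --exclude='*.png' "$demo"

# Same result via a pattern file
du -sh --exclude-from="$demo/list" "$demo"
```

Both commands should report the same (smaller) total, since they apply the same pattern.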

REFERENCES
http://linuxers.org/article/how-exclude-set-files-disk-space-usage-calculation-using-du

Tuesday, March 8, 2011

Ubuntu unlock secured pdf

SkyHi @ Tuesday, March 08, 2011

How Do I Use evince To Remove The Password?

Open a pdf file using evince itself, enter:
evince input.pdf
Enter your password. Once the file is open, click File > Print, select "Print to file", choose "PDF" as the output format, and click Print.
Fig.01: PDF file remove password with evince print option
Writing a shell script is left as an exercise for the reader.

REFERENCES
http://www.cyberciti.biz/faq/removing-password-from-pdf-on-linux/

Monday, March 7, 2011

An Nginx load balancing, caching, reverse proxy

SkyHi @ Monday, March 07, 2011
Continuing the evaluation of clustering our main website on Linux KVM virtual machines, below is our test nginx reverse proxy cache config.

The back end is 2 Apache servers, one on the local host and one on a remote host.

The main site files live on NFS. We set up the cache file system in tmpfs. Our site is dynamic, using PHP for most pages. The caching appears to eliminate most of the PHP overhead and keeps the number of Apache processes down, reducing the overall RAM needed on the VMs. We think this configuration will let us use 2 VMs and still end up needing fewer resources than our old dual-CPU, 3GB-RAM physical web server.

Our nginx disk cache file system was set to 50 MB for testing.

/etc/fstab:

tmpfs /var/lib/nginx tmpfs size=50M,uid=33 0 0

We used the main nginx config file to define default caching parameters.

/etc/nginx/nginx.conf:

# Two processes work well for a single CPU
  user www-data;
  worker_processes  2;

  error_log  /var/log/nginx/error.log;
  pid        /var/run/nginx.pid;

  events {
     worker_connections  1024;
     use epoll;
  }

  http {
    include       /etc/nginx/mime.types;

    # Nginx does the logging
    access_log /var/log/nginx/access.log;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    server_names_hash_bucket_size 64;

    # Have nginx do the compression, turn off Apache's mod_deflate
    gzip                on;
    gzip_comp_level     1;
    gzip_disable        msie6;
    gzip_proxied        any;

    # text/html mime type is automatically included for gzip, have to add the rest
    gzip_types          text/plain text/css application/x-javascript text/xml application/xml application/rss+xml text/javascript;

    # Default cache parameters for use by virtual hosts
    # Set the cache path to tmpfs mounted disk, and the zone name
    # Set the maximum size of the on disk cache to less than the tmpfs file system size
    proxy_cache_path  /var/lib/nginx/cache  levels=1:2  keys_zone=adams:10m max_size=45m;
    proxy_temp_path   /var/lib/nginx/proxy;

    # Putting the host name in the cache key allows different virtual hosts to share the same cache zone
    proxy_cache_key "$scheme://$host$request_uri";
    proxy_redirect off;

    # Pass some client identification headers back to Apache  
    proxy_set_header        Host            $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;

    # Cache different return codes for different lengths of time 
    # We cached normal pages for 10 minutes
    proxy_cache_valid 200 302  10m;
    proxy_cache_valid 404      1m;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
  }


Following is an nginx virtual host config with the reverse proxy cache enabled. The host name of the server is megalon-test; the website virtual host name is adams-bal. The proxy also handles SSL and can selectively decide what to cache and for how long.

/etc/nginx/sites-enabled/adams-bal:
# The Apache load balancing proxy targets for port 80 traffic
  upstream adams-bal {
    server 127.0.0.1;
    server 192.156.134.101;
  }
  # Proxies for port 443 traffic
  upstream sadams-bal {
    server 127.0.0.1:443;
    server 192.156.134.101:443;
  }

  # Virtual host definition 
  server {
    listen megalon-test.adams.edu:80;
    server_name adams-bal.adams.edu;
    error_page 404 = /about/searchasc/notfound.php;

    # The default location definition
    location / {
      # We do some rewrites via nginx as well
      include adams-rewrite;

      # Do caching using the adams zone, with the settings defined in /etc/nginx.conf   
      proxy_cache adams;

      # If it's not in the cache pass back to the adams-bal load balanced targets defined above 
      proxy_pass  http://adams-bal$request_uri;
    }

    # Serve static files directly via nginx, set an expires header for the browser 
    location ~* \.(pdf|css|js|png|gif|jpg|ico|swf|mov|doc|xls|ppt|docx|pptx|xlsx)$ {
      root /home/www/adams;
      expires max;
    }
  }

  # The SSL virtual host definition, setting up the SSL proxy end to middle to end.
  # client -> SSL -> nginx -> SSL -> Apache
  server {
    listen megalon-test.adams.edu:443;
    server_name adams-bal.adams.edu;

    # We use a domain wild card cert, under nginx the intermediate lives
    # in the same file as the domain cert
    ssl  on;
    ssl_certificate  adams.edu_wildcard_chain.crt;
    ssl_certificate_key  adams.edu_wildcard.key;
    ssl_session_timeout  5m;
    ssl_protocols  SSLv2 SSLv3 TLSv1;
    ssl_ciphers  ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers   on;

    # Don't cache ssl pages, just proxy them
    location / {
      proxy_pass  https://sadams-bal$request_uri;
    }
  }


The Apache back end configuration on Ubuntu uses a couple of files; here are the configs for the localhost server, running on the same VM as nginx.

/etc/apache2/ports.conf:
# Set up name virtual host for localhost port 80
  NameVirtualHost 127.0.0.1:80
  Listen 127.0.0.1:80

  # Set up server for SSL

  # SSL name based virtual hosts are not yet supported, therefore no
  # NameVirtualHost statement here
  <IfModule mod_ssl.c>
    # SSL for adams-bal
    Listen 127.0.0.1:443

    # SSL for another virtual host through nginx
    Listen 127.0.0.2:443
  </IfModule>


The Apache virtual host definition.

/etc/apache2/sites-enabled/adams-bal:
# For port 80, server name the same as nginx
  <VirtualHost 127.0.0.1:80>
    DocumentRoot /home/www/adams
    ServerName adams-bal.adams.edu
  </VirtualHost>

  <Directory /home/www/adams>
    Options -Indexes FollowSymLinks
    AllowOverride AuthConfig Limit
    Order allow,deny
    Allow from all
  </Directory>

  # For SSL, notice Apache requires the intermediate cert in a separate file
  <VirtualHost 127.0.0.1:443>
    DocumentRoot /home/www/adams
    ServerName adams-bal.adams.edu
    SSLEngine on
    SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP
    SSLCertificateFile /etc/nginx/adams.edu_wildcard.crt
    SSLCertificateKeyFile /etc/nginx/adams.edu_wildcard.key
    SSLCertificateChainFile /etc/nginx/GoDaddy_Intermediate.crt
  </VirtualHost>


The Apache configs on the remote host are almost identical, using the actual server IP address rather than localhost. One can envision a swarm of VMs talking to the proxy through a virtual network on private IPs...

The performance and resource usage are very good thanks to the RAM disk caching. We did notice a slowdown when the nginx cache filled during a site crawl by a search engine, so make sure there is enough cache to cover the site. Some sites may need a mix of on-disk and RAM disk caches.
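One way to confirm the cache is actually serving hits is to expose nginx's $upstream_cache_status variable in a response header. This fragment is a sketch, not part of our production config above; it would go in the cached location block, and the status shows up as HIT, MISS, EXPIRED, etc. when you request a page with curl -I:

```nginx
# Sketch: surface the cache status so you can watch hit rates from the client side
location / {
  proxy_cache adams;
  proxy_pass  http://adams-bal$request_uri;

  # Adds e.g. "X-Cache-Status: HIT" to every proxied response
  add_header  X-Cache-Status $upstream_cache_status;
}
```

A second request for the same URL within the proxy_cache_valid window should flip the header from MISS to HIT.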


REFERENCES:
http://staff.adams.edu/~cdmiller/posts/nginx-reverse-proxy-cache/

load balancer http and https with Nginx

SkyHi @ Monday, March 07, 2011
So let's say that you want to make your own simple load balancer to round-robin requests between multiple servers; those servers will run their own web server software, which will do a reverse proxy to load balance Rails requests among mongrel, thin, ebb, etc. (if it's a Rails app).
The requirements are that the main load balancer needs to handle HTTP and HTTPS requests and it should automatically add or remove cluster nodes from the server pool as they become available, or unavailable. We can accomplish this by using the Nginx webserver.
We set up Nginx as you normally would, perhaps using 6 or 7 workers, and you can set up the load balancing in the vhost config file. Instead of creating server pools of backend mongrels, as you may have seen before, just make the items in the pool each point to a different server. In this example we actually store the IP addresses in our /etc/hosts file.


upstream backend {
  server web1:80;
  server web2:80;
  server web3:80;
  server web4:80;
  server web5:80;
}

upstream secure {
  server web1:443;
  server web2:443;
  server web3:443;
  server web4:443;
  server web5:443;
}



server {

  listen 80;

  server_name www.domain.com domain.com;

  location / {

    # needed to forward user's IP address to rails
    proxy_set_header  X-Real-IP  $remote_addr;

    # needed for HTTPS
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_max_temp_file_size 0;

    proxy_pass http://backend;

  } #end location

} #end server



server {

  listen 443;

  ssl on;
  ssl_certificate /etc/ssl/ssl.pem/www.domain.com.pem;
  ssl_certificate_key /etc/ssl/ssl.key/www.domain.com.key;

  server_name www.domain.com domain.com;

  location / {

    # needed to forward user's IP address to rails
    proxy_set_header  X-Real-IP  $remote_addr;

    # needed for HTTPS
    #proxy_set_header X_FORWARDED_PROTO https;

    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_max_temp_file_size 0;

    proxy_pass https://secure;

  } #end location


} #end server

That's all there is to it. Now this box will act as a front-end load balancer in front of a number of other servers; adding a server is just one entry in each of the pools.

Note: be sure your log levels for Nginx are set to error only; since this Nginx instance will be handling a lot of traffic, the logs can fill up really fast.
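A minimal sketch of what that looks like in nginx.conf; the paths are the usual Debian/Ubuntu defaults, so adjust for your distro:

```nginx
http {
  # Log only errors; drop the per-request access log entirely
  error_log  /var/log/nginx/error.log  error;
  access_log off;
}
```

If you still need request logs for billing or analytics, consider logging on the backend servers instead, where the volume is split across the pool.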


REFERENCES
http://parkersmithsoftware.com/blog/post/14-create-a-simple-load-balancer-with-nginx
http://serverfault.com/questions/10854/nginx-https-serving-with-same-config-as-http

Using Nginx as a load balancer

SkyHi @ Monday, March 07, 2011
Here’s a look at how nginx does basic load balancing :
upstream  yoursite  {
   server   yoursite1.yoursite.com;
   server   yoursite2.yoursite.com;
}

server {
   server_name www.yoursite.com;
   location / {
      proxy_pass  http://yoursite;
   }
}
This configuration will send 50% of the requests for www.yoursite.com to yoursite1.yoursite.com and the other 50% to yoursite2.yoursite.com.

ip_hash

You can specify the ip_hash directive, which guarantees that requests from the same client will always be transferred to the same server.
If that server is considered inoperative, the client's requests will be transferred to another server.
upstream  yoursite  {
   ip_hash;
   server   yoursite1.yoursite.com;
   server   yoursite2.yoursite.com;
}

down

If one of the servers must be removed for some time, you must mark that server as down.
upstream  yoursite  {
   ip_hash;
   server   yoursite1.yoursite.com down;
   server   yoursite2.yoursite.com;
}

weight

If you add a weight parameter onto the end of a server definition, you can modify the proportion of requests sent to each server.
When no weight is set, the weight is equal to one.
upstream  yoursite  {
   server   yoursite1.yoursite.com weight=4;
   server   yoursite2.yoursite.com;
}
This configuration will send 80% of the requests to yoursite1.yoursite.com and the other 20% to yoursite2.yoursite.com.
note: It’s not possible to combine ip_hash and weight directives.
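In general, a server's share of traffic is its weight divided by the sum of all weights in the pool; for the example above:

```latex
\text{share}(s_i) = \frac{w_i}{\sum_j w_j},
\qquad
\text{share}(\text{yoursite1}) = \frac{4}{4+1} = 80\%,
\qquad
\text{share}(\text{yoursite2}) = \frac{1}{4+1} = 20\%
```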

max_fails and fail_timeout

max_fails is a directive defining the number of unsuccessful attempts in the time period defined by fail_timeout before the server is considered inoperative. If not set, the number of attempts is one. A value of 0 turns off this check.
If fail_timeout is not set the time is 10 seconds.
upstream  yoursite  {
   server   yoursite1.yoursite.com;
   server   yoursite2.yoursite.com max_fails=3  fail_timeout=30s;
}
In this configuration, nginx will consider yoursite2.yoursite.com inoperative if 3 requests to it fail within the 30-second fail_timeout window.

backup

If the non-backup servers are all down or busy, the server(s) with the backup directive will be used.
upstream  yoursite  {
   server   yoursite1.yoursite.com max_fails=3;
   server   yoursite2.yoursite.com max_fails=3;
   server   yoursite3.yoursite.com backup;
}
This configuration will send 50% of the requests for www.yoursite.com to yoursite1.yoursite.com and the other 50% to yoursite2.yoursite.com.
If yoursite1.yoursite.com and yoursite2.yoursite.com both fail 3 times, the requests will be sent to yoursite3.yoursite.com.

REFERENCES
http://mickeyben.com/2009/12/30/using-nginx-as-a-load-balancer.html

CentOS / Redhat Linux: Install Keepalived To Provide IP Failover For Web Cluster

SkyHi @ Monday, March 07, 2011
Keepalived provides strong and robust health checking for LVS clusters. It implements a framework of health checking on multiple layers for server failover, and a VRRPv2 stack to handle director failover. How do I install and configure Keepalived for a reverse proxy server such as nginx or lighttpd?

If you are using an LVS director to load balance a server pool in a production environment, you may want a robust solution for health checks and failover. This will also work with a reverse proxy server such as nginx.

Our Sample Setup

Internet--
         |
    =============
    | ISP Router|
    =============
         |
         |
         |      |eth0 -> 192.168.1.11 (connected to lan)
         |-lb0==|
         |      |eth1 -> 202.54.1.1 (vip master)
         |
         |      |eth0 -> 192.168.1.10 (connected to lan)
         |-lb1==|
                |eth1 -> 202.54.1.1 (vip backup)
Where,
  • lb0 - Linux box directly connected to the Internet via eth1. This is the master load balancer.
  • lb1 - Linux box directly connected to the Internet via eth1. This is the backup load balancer, which becomes active if the master's networking fails.
  • 202.54.1.1 - This IP moves between the lb0 and lb1 servers. It is called the virtual IP address (VIP) and is managed by keepalived.
  • eth0 is connected to the LAN and all other backend software such as Apache, MySQL and so on.
You need to install the following softwares on both lb0 and lb1:
  • keepalived for IP failover.
  • iptables to filter traffic
  • nginx or lighttpd reverse proxy server.
DNS settings should be as follows:
  1. nixcraft.in - Our sample domain name.
  2. lb0.nixcraft.in - 202.54.1.11 (real ip assigned to eth1)
  3. lb1.nixcraft.in - 202.54.1.12 (real ip assigned to eth1)
  4. www.nixcraft.in - 202.54.1.1 (VIP for web server) do not assign this IP to any interface.

Install Keepalived

Visit keepalived.org to grab the latest source code. You can use the wget command to download it (you need to install keepalived on both lb0 and lb1):
# cd /opt
# wget http://www.keepalived.org/software/keepalived-1.1.19.tar.gz
# tar -zxvf keepalived-1.1.19.tar.gz
# cd keepalived-1.1.19

Install Kernel Headers

You need to install the following packages:
  1. Kernel-headers - includes the C header files that specify the interface between the Linux kernel and userspace libraries and programs. The header files define structures and constants that are needed for building most standard programs and are also needed for rebuilding the glibc package.
  2. kernel-devel - this package provides kernel headers and makefiles sufficient to build modules against the kernel package.
Make sure the kernel-headers and kernel-devel packages are installed. If not, type the following to install them:
# yum -y install kernel-headers kernel-devel

Compile keepalived

Type the following command:
# ./configure --with-kernel-dir=/lib/modules/$(uname -r)/build
Sample outputs:
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
...
.....
..
config.status: creating keepalived/check/Makefile
config.status: creating keepalived/libipvs-2.6/Makefile

Keepalived configuration
------------------------
Keepalived version       : 1.1.19
Compiler                 : gcc
Compiler flags           : -g -O2
Extra Lib                : -lpopt -lssl -lcrypto
Use IPVS Framework       : Yes
IPVS sync daemon support : Yes
Use VRRP Framework       : Yes
Use Debug flags          : No
Compile and install the same:
# make && make install

Create Required Softlinks

Type the following commands to create the service and run it at RHEL / CentOS run level 3:
# cd /etc/sysconfig
# ln -s /usr/local/etc/sysconfig/keepalived .
# cd /etc/rc3.d/
# ln -s /usr/local/etc/rc.d/init.d/keepalived S100keepalived
# cd /etc/init.d/
# ln -s /usr/local/etc/rc.d/init.d/keepalived .

Configuration

Your main configuration directory is /usr/local/etc/keepalived and the configuration file is keepalived.conf. First, make a backup of the existing configuration:
# cd /usr/local/etc/keepalived
# cp keepalived.conf keepalived.conf.bak

Edit keepalived.conf as follows on lb0:
vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 101
        authentication {
            auth_type PASS
            auth_pass Add-Your-Password-Here
        }
        virtual_ipaddress {
                202.54.1.1/29 dev eth1
        }
}
Edit keepalived.conf as follows on lb1 (note the priority is set to 100, i.e. this is the backup load balancer):
vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 51
        priority 100
        authentication {
            auth_type PASS
            auth_pass Add-Your-Password-Here
        }
        virtual_ipaddress {
                202.54.1.1/29 dev eth1
        }
}
Save and close the file. Finally start keepalived on both lb0 and lb1 as follows:
# /etc/init.d/keepalived start

Verify: Keepalived Working Or Not

/var/log/messages will keep track of VIP:
# tail -f /var/log/messages
Sample outputs:
Feb 21 04:06:15 lb0 Keepalived_vrrp: Netlink reflector reports IP 202.54.1.1 added
Feb 21 04:06:20 lb0 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth1 for 202.54.1.1
Verify that the VIP is assigned to eth1:
# ip addr show eth1
Sample outputs:
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 10000
    link/ether 00:30:48:30:30:a3 brd ff:ff:ff:ff:ff:ff
    inet 202.54.1.11/29 brd 202.54.1.254 scope global eth1
    inet 202.54.1.1/29 scope global secondary eth1

ping failover test

Open a UNIX / Linux / OS X desktop terminal and type the following command to ping the VIP:
# ping 202.54.1.1
Log in to lb0 and halt the server or take down its networking:
# halt
Within seconds the VIP should move from lb0 to lb1, and you should not see any dropped pings. On lb1 you should get the following in /var/log/messages:
Feb 21 04:10:07 lb1 Keepalived_vrrp: VRRP_Instance(VI_1) forcing a new MASTER election
Feb 21 04:10:08 lb1 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 21 04:10:09 lb1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 21 04:10:09 lb1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 21 04:10:09 lb1 Keepalived_healthcheckers: Netlink reflector reports IP 202.54.1.1 added
Feb 21 04:10:09 lb1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth1 for 202.54.1.1

Conclusion

Your server is now configured with IP failover. However, you still need to install and configure the following software to set up the web server and security:
  1. nginx or lighttpd
  2. iptables
Stay tuned for more information on the above configuration.
This FAQ entry is 1 of 7 in the "CentOS / RHEL nginx Reverse Proxy Tutorial" series.

REFERENCES
http://www.cyberciti.biz/faq/rhel-centos-fedora-keepalived-lvs-cluster-configuration/

Attachments not showing up correctly

SkyHi @ Monday, March 07, 2011
ERROR:

Now when I add attachments to new emails that I'm creating (using Word as email editor), the attachments show up as icons in the
body of the message. When I send these to my wife at her office (she is not using Outlook--some other non-Microsoft email program),
she does not see the attachment in the message body. But she does see one of those "winmail.dat" attachments (I'm guessing
"winmail.dat" is because of my AutoArchive is turned on) in the message header that she can not open.

If I reply to one of her messages, then when I attach files they attach normally in the message header and my wife can open
attachments that way.

This is something new. It used to be every email I wrote added attachments in the message header and not in the body of the
message.

Is this a setting that I've changed? How can I get Outlook/Word to add attachments to the message header and not place them in the
message body?



SOLUTION:
You've changed your email format to Rich Text (whether intentionally or unintentionally). Change it back to either Plain Text or HTML:

Tools-Options-Mail Format


 This article describes how either an Exchange Server administrator or end users can prevent the Winmail.dat attachment from being sent to Internet users when using the Microsoft Exchange Internet Mail Connector (IMC).

When an end user sends mail to the Internet from an Exchange Windows or Outlook client, a file attachment called Winmail.dat may be automatically added to the end of the message if the recipient's client cannot receive messages in Rich Text Format (RTF). The Winmail.dat file contains Exchange Server RTF information for the message, and may appear to the recipient as a binary file. It is not useful to non-Exchange Server recipients.

=======================================================================

How to Prevent Winmail.dat Attachments from Being Sent in Outlook

Do recipients of your emails, seemingly out of the blue, complain about a mysterious attachment called "winmail.dat" (of the even more mysterious content type "application/ms-tnef"), which they cannot open no matter what they try? Do files you attach disappear into that winmail.dat moloch? Does winmail.dat show up for some but not all recipients of your messages?

When, How and Why Winmail.dat / Application/MS-TNEF Is Created

It's Outlook's fault, in a way. Or the recipient's email client's. If Outlook sends a message using the RTF format (which is not very common outside Outlook) for bold text and other text enhancements, it includes the formatting commands in the winmail.dat file. Receiving email clients that do not understand the code therein display it as an opaque attachment. To make matters worse, Outlook may also pack other, regular file attachments into the winmail.dat file.
Fortunately, you can get rid of winmail.dat altogether by making sure Outlook does not even try to send mail using RTF.

Prevent Winmail.dat Attachments from Being Sent in Outlook

To prevent Outlook from attaching winmail.dat when you send an email:
  • Select Tools | Options... from the menu.
  • Go to the Mail Format tab.
  • Under Compose in this message format:, make sure either HTML or Plain Text is selected.
  • Click Internet Format.
  • Make sure either Convert to Plain Text format or Convert to HTML format is selected under When sending Outlook Rich Text messages to Internet recipients, use this format:
  • Click OK.
  • Click OK again.

Disable Winmail.dat Stubbornly Going to Particular Recipients No Matter the Default

The standard settings for outgoing mail formats in Outlook can be overridden per email address. So, on a case-by-case basis, when somebody complains about an inexplicable "winmail.dat" attachment after you have made all the right settings changes, you may have to reset the format for individual addresses:
  • Search for the desired contact in your Outlook Contacts.
  • Double-click the contact's email address.
    • Alternatively, click on the desired email address with the right mouse button and select Outlook Properties... from the menu.
  • Make sure either Let Outlook decide the best sending format or Send Plain Text only is selected under Internet format:.
  • Click OK.

Extract Files from Winmail.dat without Outlook

If you receive winmail.dat attachments with embedded files, you can extract them using a winmail.dat decoder on Windows or Mac OS X.


REFERENCES
http://www.office-outlook.com/outlook-forum/index.php/m/43551/
http://support.microsoft.com/kb/138053 
http://email.about.com/od/outlooktips/qt/Prevent_Winmail_dat_Attachments_from_Being_Sent_in_Outlook.htm