Friday, September 3, 2010

SSL in Plain English

SkyHi @ Friday, September 03, 2010

I recently purchased and installed a new SSL certificate from GoDaddy for Marrily. During the process, I came to learn more about SSL and the different steps needed to set everything up from scratch. There is an abundance of articles and tutorials on how to get started, but surprisingly few explain why you have to follow those steps. The truth is I was pretty confused when I first started. There were a bunch of different steps and different key, pem, crt, and csr files that needed to be generated. As a result, I got lost and made mistakes along the way. I then added insult to injury by accidentally revoking my certificate instead of re-keying it and ended up having to call GoDaddy to revert the deletion. Since any entrepreneur with a SaaS website will eventually need to implement SSL to protect their customers, having a better understanding of SSL is greatly beneficial. This is my explanation of the entire process in plain English, in the hope that I can help clear up the confusion.


Why SSL?


To protect the communication between your web server and the client’s browser, you need an encrypted channel so that all data transferred back and forth can only be read by your server and the browser. Anyone who eavesdrops in between will see only gibberish. Only your web server and the client’s browser know the right “secrets” to unlock the encrypted messages. This communication protocol is called https, with the s standing for “secure”.


When a user requests a page via https, your server needs to prove its identity and encrypt the content using keys the user’s browser can verify against a well-known identity. If the content is secured with an unknown identity, the browser will be very hesitant to accept it, and it will ask the user to make the hard decision of whether to proceed or not.
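As an aside, you can see exactly which identity a server presents during the https handshake with openssl’s built-in client. This is just an illustrative check using the site from this post; any https-enabled host would do:

$ echo | openssl s_client -connect marrily.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates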


Why Purchase a SSL Certificate?


To purchase a SSL certificate is to obtain a publicly verifiable identity for your domain that is accepted in all browsers. Most modern browsers include a list of well-known root Certificate Authority (CA) public keys, and any certificate that chains up to one of these CAs will be accepted by the browser. It is also possible for you to generate your own root Certificate Authority key pair; technically speaking, you become your own Certificate Authority. However, since your identity is unknown and not verifiable, the browser will not trust your keys, and it will pop up an alert to notify the user. Nonetheless, once you add your certificate to your browser’s list of accepted certificates, the browser will know about your identity and won’t bother popping up the warning anymore.


Since you can’t ask everyone to manually install your public key into their browser’s list of accepted certificates, you will need to buy the certificate from an established vendor whose public key already comes bundled in the browser by default. I read somewhere that this is one way browser vendors can make some money, e.g. the SSL vendors need to pay to have their identity (the public key) included in the browser. In exchange, these SSL vendors can turn around and certify (or “sign”) anyone who wants to get a SSL certificate, for a fee.


If you think about becoming a SSL vendor, you will need to convince all the browser makers that you’re completely trustworthy, and you must protect the private key used to sign SSL certificates with your life, since whoever gets their hands on that private key will be able to sign any SSL request, compromising your identity as a reputable Certificate Authority. SSL vendors offer a warranty on their SSL certificate service, from $1,000 to $10,000 to a lot more, specifically as a statement that they keep their secret hidden really well to protect the identity of their customers’ SSL certificates.


Obtaining a SSL Certificate


Step 1: Generate your private key


To handle https requests, your web server will need to encrypt the data. Hence the first step is to generate a private key that will be used for the encryption. You can use different encryption algorithms, but a SSL vendor may ask you to use a specific method and key length. The longer the key, the better the encryption strength. If the key is too short, a bad guy can quickly run through all the possibilities, find out your private key, and then pretend to be you. In my case, GoDaddy wanted 2048 bits (256 bytes) for the private key. For personal use, a key strength of 1024 bits (128 bytes) would be sufficient.


$ openssl genrsa -out private.key 2048
Generating RSA private key, 2048 bit long modulus
..............................+++
.+++
e is 65537 (0x10001)
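If you want to double-check what was just generated, openssl can read the key back to you. This is only a sanity check, not a required step:

$ openssl rsa -in private.key -noout -text | head -1
Private-Key: (2048 bit)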

Step 2: Generate a new SSL Request .csr file


The next step is to generate a “request” for a new SSL certificate using your private key. This request file has the extension .csr, which stands for Certificate Signing Request. It contains identity information about you (or your company) and, most importantly, what the SSL certificate will be valid for: a single domain (cheapest) or any sub-domain (a.k.a. wildcard, and a bit more pricey). All this information is signed with your private key and saved to a file. The SSL vendor will then take this file and sign it to produce a valid SSL certificate that can be applied to your server.


EV SSL

If you pay more money, you can also get the identity in your SSL certificate confirmed as a legitimate business entity. This type of SSL certificate is called EV SSL (Extended Validation certificate). Essentially the SSL vendor verifies the identity of your company by asking you to submit your business registration paperwork, bank account details, a letter from an attorney or accountant, etc., for an additional fee ($400 to $1,000). In return, you get a green-bar status with your company’s name next to the browser’s address bar. The theory is that users can identify your company’s name and thus feel more secure, knowing that the website is the correct one and not a phishing site that just pretends to be your website. Most (if not all) banks and prominent businesses have this type of EV certificate to protect their identity.


To generate a new CSR from your private key, use the command:


$ openssl req -new -key private.key -out marrily.com.csr


As I mentioned, the most important bit of the CSR file is what the SSL cert will be valid for, which is defined in the “Common Name” attribute. For a single domain (https://marrily.com or https://www.marrily.com), you can use either “domain.com” or “www.domain.com”, since the “www” subdomain is so commonly used that it can be omitted. Check out line 14 below for more details:


  1. $ openssl req -new -key private.key -out marrily.com.csr  
  2. You are about to be asked to enter information that will be incorporated  
  3. into your certificate request.  
  4. What you are about to enter is what is called a Distinguished Name or a DN.  
  5. There are quite a few fields but you can leave some blank  
  6. For some fields there will be a default value,  
  7. If you enter '.', the field will be left blank.  
  8. -----  
  9. Country Name (2 letter code) [AU]:US  
  10. State or Province Name (full name) [Some-State]:  
  11. Locality Name (eg, city) []:  
  12. Organization Name (eg, company) [Internet Widgits Pty Ltd]:Marrily  
  13. Organizational Unit Name (eg, section) []:  
  14. Common Name (eg, YOUR name) []:marrily.com  
  15. Email Address []:alexle@marrily.com  
  16.   
  17. Please enter the following 'extra' attributes  
  18. to be sent with your certificate request  
  19. A challenge password []:  
  20. An optional company name []:  

I did not specify any challenge password in this case to keep everything simple.
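Before sending the CSR off to the vendor, it doesn’t hurt to verify its signature and confirm the Common Name you entered. Again, this is just an optional check:

$ openssl req -in marrily.com.csr -noout -verify -subject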


Step 3: Submit your CSR to get a SSL Cert


Now that you have the CSR file containing your identity and the domain the SSL certificate will be valid for, you can submit this CSR file to the SSL vendor (of course you will have to pay them first). They will take your CSR file and generate a new .crt (certificate) file using their own private key. Essentially they “sign” your CSR file with their carefully guarded secret file. You will then get back the .crt file corresponding to your CSR, and another .crt file that belongs to the SSL vendor.


Chances are that the SSL vendor’s crt file actually contains a list of different certificates (public keys). The reason is that your SSL vendor is more or less a re-seller of another Certificate Authority, which can itself be a reseller of an even higher-level CA. So the first certificate belongs to your immediate SSL vendor, the one after that belongs to the higher-level CA that signed your vendor’s cert, and the cert listed after that belongs to an even higher CA that signed that CA’s cert, and so on. Essentially it’s a chain of certificates that leads all the way up to the highest-level CA, a root certificate that is included in browsers by default. For GoDaddy, the root CA is www.valicert.com, and for VeriSign, it is VeriSign’s own Class 3 Public Primary Certification Authority - G5.
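You can confirm that the certificate you got back actually chains up to the vendor’s bundle before deploying it. A minimal sketch, assuming GoDaddy returned your certificate as www.marrily.com.crt and their bundle as gd_bundle.crt:

$ openssl verify -CAfile gd_bundle.crt www.marrily.com.crt
www.marrily.com.crt: OK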




(Screenshot: notice the green bar; that’s the EV SSL, which costs some more money to obtain.)


Step 4: Configure Your Web Server


Now you should have in your possession these files:


1) your private key

2) your .csr file (not used anymore)

3) your new SSL certificate provided by your vendor as a .crt file, which is valid for your domain.

4) your SSL vendor’s crt file, containing a list of different certificates.


You are now ready to configure the web server to use your private key and your new SSL certificate (which is technically a public key) for the https-enabled website. The specific configuration for each web server is different, but the process is the same. Also, .crt files sometimes come with a “.pem” extension instead; for simplicity’s sake, the two can be treated as interchangeable here.
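One quick sanity check before touching the web server configuration: make sure the certificate you received actually matches your private key. Comparing the modulus of each is a common way to do that (file names here follow the examples above):

$ openssl x509 -noout -modulus -in www.marrily.com.crt | openssl md5
$ openssl rsa -noout -modulus -in private.key | openssl md5

The two hashes must be identical; if they differ, the certificate was issued for a different key.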


Nginx and GoDaddy SSL


In my case, I use nginx to serve my Rails application. I originally installed this nginx instance from source using Passenger’s installer, but SSL was not enabled by default (you can check this by running “nginx -V” and looking for --with-http_ssl_module). I re-ran Passenger’s installer and added the --with-http_ssl_module switch to the optional parameters, and everything was good to go.
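To check whether an existing nginx binary already has SSL support, the quick test is to grep the compile flags (nginx -V prints them to stderr, hence the redirect):

$ nginx -V 2>&1 | grep -o with-http_ssl_module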


One gotcha for nginx is that you have to combine the two certs that GoDaddy gives you into one .crt file, with your SSL certificate coming first, followed by GoDaddy’s crt file (gd_bundle.crt). The browser understands this as: your SSL certificate was signed by the CA whose public key is the next cert entry, that one was signed by the one after it, and so on all the way up to the root CA.




$ cat www.marrily.com.crt gd_bundle.crt > marrily_combined.crt
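A quick way to confirm the combined file contains everything it should is to count the certificate blocks inside it; the total should be one (yours) plus however many certs are in gd_bundle.crt, with yours appearing first:

$ grep -c 'BEGIN CERTIFICATE' marrily_combined.crt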


I then added a new server{} block to listen for ssl requests on port 443. After restarting Nginx, Marrily is now ssl-protected with a green padlock.


server {
    listen          443;
    server_name     marrily.com;
    # passenger stuff

    ssl on;
    ssl_certificate         /your/ssl/folder/marrily_combined.crt;
    ssl_certificate_key     /your/ssl/folder/private.key;
}



Self-Signing your Certificate and Testing SSL Locally


Now that Marrily is https-enabled and some of the actions require SSL, I wanted to develop the site locally over SSL as well, to make sure all the logic worked correctly. I’d need to self-sign a new SSL certificate and install it locally.


Preparation

In my environment (Mac OS X Snow Leopard), I also have nginx installed, via Homebrew. Homebrew builds nginx with SSL support by default, so no recompilation was needed. I also added a new entry to my hosts file so that I can use a fake domain to access my local site, and I’d use this fake domain in the CSR as well.


# /etc/hosts
127.0.0.1 marrilydev.com

Self-Signing a New Certificate

I generated a new private key using openssl:


$ openssl genrsa -out privatekey.pem 2048


Then I generated a CA cert using this private key:


$ openssl req -new -x509 -key privatekey.pem -out cacert.pem -days 3650
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, YOUR name) []:marrilydev.com
Email Address []:

I didn’t care about any of the details except for the Common Name field, in which I specified the fake domain.


Since the cacert.pem file was generated (a.k.a. self-signed) using the same privatekey.pem file, we can use it as the SSL certificate directly. All we need to do is point the ssl_certificate_key setting in the configuration at the privatekey.pem file:


upstream rails { server 127.0.0.1:3000; }

server {
   listen       443;
   server_name  marrilydev.com;

   ssl                  on;
   ssl_certificate      /Users/sr3d/projects/misc/ssl/cacert.pem;
   ssl_certificate_key  /Users/sr3d/projects/misc/ssl/privatekey.pem;
   ssl_session_timeout  5m;

   access_log    /Users/sr3d/projects/marrily/svn/marrily_marrily/m3/app/log/access.log;
   error_log     /Users/sr3d/projects/marrily/svn/marrily_marrily/m3/app/log/error.log;
   root          /Users/sr3d/projects/marrily/svn/marrily_marrily/m3/app/public/;

   location / {
     proxy_set_header  X-Real-IP  $remote_addr;
     proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header  Host $http_host;
     proxy_connect_timeout 74; # max is 75s
     proxy_redirect off;

     # Proxy to Backend
     if (!-f $request_filename) {
        proxy_pass http://rails;
        break;
     }
   }
}

(Note: locally I have nginx proxy all traffic to the development server running on port 3000.)


Also, since Mac OS X restricts binding to privileged ports like 80 and 443, nginx must be run with sudo to listen on port 443; otherwise it will silently fail and you won’t be able to hit the site via https.
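In practice that just means starting nginx with sudo and then checking that the https endpoint answers; the -k flag tells curl to accept the self-signed certificate for now:

$ sudo nginx
$ curl -k -I https://marrilydev.com/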


Getting Rid of SSL Warnings by Installing the Self-Signed Cert

With nginx configured to listen for secured requests, I opened up the site in Chrome and saw a huge red error message complaining about the validity of the certificate, since Chrome did not recognize the identity of cacert.pem. Obviously I could just ignore the warning and proceed to the https site for the current session, but there’s a better solution: add cacert.pem to the list of approved certificates.


To install the self-signed certificate, just double-click the cacert.pem file in Finder. The cert will be added to Keychain Access automatically.



With the cert added to the Keychain, all browsers installed on the system will gladly accept an https connection to https://marrilydev.com.
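If you prefer the command line over Finder, macOS also ships a security tool that can mark the certificate as trusted system-wide. A sketch of the equivalent step (use with care, since it affects every user on the machine):

$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain cacert.pem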


Summary


  • SSL certificates are not all that confusing once you understand the gist of the process and why each file is needed
  • The process in simple steps:
    • generate a new private key for encryption
    • using this private key, generate a CSR containing the domain information for the SSL certificate
    • submit the CSR file to the SSL vendor to obtain a new CRT certificate file
    • configure your web server to listen for https traffic on port 443 using the private key from step 1 and the CRT obtained from the vendor

  • GoDaddy has different pricing on their SSL offerings, so search around and don’t pay full price.
  • SSL is cheap; implement it to protect your customers and gain their trust
  • If you’re gzipping your site, you should add this line to your nginx conf file:

    gzip_buffers 16 8k; to make sure nginx doesn’t lose large gzipped JS or CSS files


Thursday, September 2, 2010

CentOS - point-to-point VPN tunneling with OpenVPN

SkyHi @ Thursday, September 02, 2010
This tutorial will walk you through setting up a point-to-point VPN tunnel between your Cloud Servers. This type of connection will use the internal network interface (eth1) so you will not be charged for bandwidth. This walk-through is designed for CentOS.
The following items are assumed with this tutorial:
  • You have set up your server according to the setup guide
  • This server is brand new with no software installed
  • You are logged in as a non-privileged user with sudo privileges



The Plan

Our initial design will consist of two different servers -- we will call them ServerA and ServerB. The IP addresses for each server are defined below:
ServerA: 10.100.1.20
ServerB: 10.100.1.50
(Note that we are using the internal interface only)

The plan is to create a point-to-point VPN between ServerA and ServerB so they can communicate on their own private network. The following processes will walk you through creating three different types of VPN connections:
  • Simple VPN (no security or encryption)
  • Static Key VPN (simple 128-bit security)
  • Full TLS VPN (revolving-key encryption)
We will build each type of VPN tunnel and then build on the previous one. For instance, if you would like a full TLS-enabled VPN, please run through all of the examples shown below.

Simple VPN

The first VPN link that we will create is a simple point-to-point link with no encryption or security. This will literally form a virtual link between two servers for communication. This is the simplest form of VPN communication and is generally not recommended. The process will be the same for each server with server specific changes being noted.

Update Your System

First we need to make sure that our system is up to date. Run the following command to update your system:
# sudo yum -y update

Add the DAG repository

By default OpenVPN does not come as a pre-compiled binary; however, there are places where people have pre-compiled it for us. We will use the DAG repository which houses one of those pre-compiled versions but first we need to tell our server where it is located.
Let's add the repository by adding an entry into YUM, the default package manager for CentOS.
# sudo nano -w /etc/yum.repos.d/dag.repo
You will need to add the following lines into this file.
[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el$releasever/en/$basearch/dag
gpgcheck=1
enabled=1
Once you have pasted the lines press CTRL-X on your keyboard to exit the application. You will be asked if you would like to save the file, simply press Y and then press Enter to accept the default file name. The program will now exit.
Next we need to add the GPG key that signs each of the packages in the DAG repository but first we need to download it.
# wget http://dag.wieers.com/packages/RPM-GPG-KEY.dag.txt
In this file there are a few lines that need to be removed before we can import it otherwise an error will result. Type the following line to open the editor.
# nano RPM-GPG-KEY.dag.txt
In this screen you will see several lines and a bunch of random letters and numbers. Delete the following lines:
The following public key can be used to verify RPM packages
downloaded from  http://dag.wieers.com/apt/  using 'rpm -K'
if you have the GNU GPG package.
Questions about this key should be sent to:
Dag Wieers 
Once you have those lines deleted simply press CTRL-X to exit the program. It will prompt you to save the file, press Y and Enter.
Now we need to import the GPG key that we just modified otherwise the installation will fail. Type the following command:
# sudo rpm --import RPM-GPG-KEY.dag.txt

Install OpenVPN

We are now ready to install OpenVPN on our server. Type the following command to install OpenVPN:
# sudo yum -y install openvpn

Remove DAG repository

Now that we have added our software we need to remove the DAG repository to protect the integrity of your updates. Run the following command below:
# sudo rm /etc/yum.repos.d/dag.repo

Create Client Server

At this point please proceed with performing the above actions on your second server. In our example we will perform the above actions on ServerB.

Create VPN Link

Now we are ready to create our VPN link between ServerA and ServerB.

ServerA Commands

To create the link on ServerA run the following command:
# sudo /usr/sbin/openvpn --remote 10.100.1.50 --dev tun1 --ifconfig 172.16.1.1 172.16.1.2
This command will create a VPN link with ServerB (10.100.1.50). It will also prepare a virtual interface called tun1 and will assign the IP 172.16.1.1 to it. The associated routes for this will be created as well.

ServerB Commands

To create the link on ServerB run the following command:
# sudo /usr/sbin/openvpn --remote 10.100.1.20 --dev tun1 --ifconfig 172.16.1.2 172.16.1.1
This command will create a VPN link with ServerA (10.100.1.20). It will also prepare a virtual interface called tun1 and will assign the IP 172.16.1.2 to it. The associated routes for this will be created as well.
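Before running the ping test below, you can confirm on either server that the tunnel interface and its route actually came up. Run these in a second SSH session, since openvpn is holding the first one:

# /sbin/ifconfig tun1
# netstat -rn | grep 172.16.1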

Test VPN Link

Once you have executed the commands above on each server then the VPN link will be setup. Keep in mind that this is a clear text link and all traffic can be seen. You will see the following warning as the VPN link is established:
******* WARNING *******: all encryption and authentication features 
disabled -- all data will be tunnelled as cleartext
If the link has been established successfully you will see the following on each server:
Wed Aug  5 16:59:59 2009 Peer Connection Initiated with 10.100.1.50:1194
Wed Aug  5 17:00:01 2009 Initialization Sequence Completed
Note that the IP will vary depending on your setup
Open up two more connections to your servers via SSH and perform a ping test from each. In our test environment we will perform a test on ServerA and we will ping ServerB but we will use the VPN tunnel instead. To force traffic over the VPN tunnel simply ping the VPN IP for ServerB which is 172.16.1.2.
# ping 172.16.1.2
PING 172.16.1.2 (172.16.1.2) 56(84) bytes of data.
64 bytes from 172.16.1.2: icmp_seq=1 ttl=64 time=4.00 ms
64 bytes from 172.16.1.2: icmp_seq=2 ttl=64 time=0.000 ms
64 bytes from 172.16.1.2: icmp_seq=3 ttl=64 time=0.000 ms
Do the same thing on ServerB. You should see similar results. If you see Request Timed Out then your VPN link might not be established. Please check your IP addresses and attempt to establish the link again.
To finish your testing and close the link simply press Control-C on each server to close the VPN link.

Static Key VPN

Now that we have an established VPN link it is time to secure it a little bit. In this step we will create a 128-bit security key that will be stored on each server and used to encrypt our traffic over the VPN tunnel.

Creating the Key

Creating the VPN key is surprisingly simple. You will need to create the key on one server and then copy it to the other server. In our example we will use ServerA to create the key and use SCP to copy it to ServerB.
For this part of the setup we will need to change to super user mode. Type the following command:
# su
Enter your root password when prompted.
Now we need to go to the directory where we will store our static key.
# cd /usr/share/doc/openvpn-2.0.9/
Once there we will need to produce our static key. To produce it type the command below to create a static key file called key.
# openvpn --genkey --secret key
If you perform a directory listing (ls) you will see a file called key in the directory. We will use this when starting our VPN connection.
For now we will remain as the super user for the remainder of this article.

Copy the Key

We need to copy our static key over to ServerB so they are using the same credentials. If you do not perform this step then your VPN link will fail to establish. To copy the key we will use the SCP (Secure Copy) command to copy the file over SSH. Run the following command below to copy the key file over. Note that you will need to change the IP address in the example below to match your second server.
# scp key root@10.100.1.20:/usr/share/doc/openvpn-2.0.9/
The first prompt you will receive is asking you to accept the SSH fingerprint key... simply type yes and press Enter. You will then be prompted for your root password -- enter it here.
If the copy was successful you should see something like this:
key                                           100%  636     0.6KB/s   00:00
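Since the whole point is that both servers share exactly the same secret, it is worth verifying that the copy arrived intact. A simple check is to compare checksums on both servers; the hashes must match:

# md5sum /usr/share/doc/openvpn-2.0.9/key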

Creating the VPN link

Now that we have created the key and copied it to ServerB, it is time to set up our link again. Type the command below to set up the VPN link. Note that the command is essentially the same as before for each server, but with the --secret key option appended. Also note that since we are now in super user mode, sudo is no longer needed.

ServerA Commands

To create the link on ServerA run the following command:
# /usr/sbin/openvpn --remote 10.100.1.50 --dev tun1 --ifconfig 172.16.1.1 172.16.1.2 --secret key
This command will create a VPN link with ServerB (10.100.1.50). It will also prepare a virtual interface called tun1 and will assign the IP 172.16.1.1 to it. The associated routes for this will be created as well.

ServerB Commands

Before we can enter the command we will need to go to the correct directory and enter super user mode as well as create the link. Run the following commands:
# su
# cd /usr/share/doc/openvpn-2.0.9
# /usr/sbin/openvpn --remote 10.100.1.20 --dev tun1 --ifconfig 172.16.1.2 172.16.1.1 --secret key
This command will create a VPN link with ServerA (10.100.1.20). It will also prepare a virtual interface called tun1 and will assign the IP 172.16.1.2 to it. The associated routes for this will be created as well.

Test the VPN link

As with the previous setup go ahead and test the link by pinging each side of your VPN tunnel.

TLS-enabled VPN

Now that we have a functioning VPN connection and have proved that we can use 128-bit static keys, it is time to beef up our security a bit. The following steps will walk you through setting up TLS-based security with keys that are renegotiated at timed intervals. This process involves creating server and client certificates along with a certificate authority to authenticate those certificates.
We will go ahead and tear down the existing VPN tunnel that we set up by pressing Control-C on each server. You should be returned to the command prompt. You'll notice that we are still logged in as the super user -- this is okay.

Easy-RSA

To create our keys and certificates we will use three programs (build-ca, build-key, and build-key-server) that ship with OpenVPN. Follow the steps below to create the necessary items.

Setup

First we need to perform some additional setup functions for Easy-RSA on ServerA. The first thing we need to do is make sure that we are in the correct directory:
# cd /etc/openvpn
You'll notice that there is nothing in this directory if you 'ls' it. To prepare all of the files run the following commands below:
# mkdir easy-rsa
# cp -R /usr/share/doc/openvpn-2.0.9/easy-rsa/2.0/* easy-rsa/
# chmod -R 777 easy-rsa/
# cd easy-rsa/
Now we need to setup the correct environment variables. Run the following command: (Note the double periods)
# . ./vars
Now clean up everything:
# ./clean-all

Create Certificate Authority (CA)

We are now ready to build our certificate authority (CA). To build it, simply run the following command:
# ./build-ca
You will be asked a series of questions. You may choose to answer none or all of them. Keep in mind that these will show up on your certificate if it is inquired upon. The values we are using in our example are being shown:
  • Country Name: US
  • State or province: TX
  • Locality Name: San Antonio
  • Organization Name: Rackspace
  • Organizational Unit Name:
  • Common Name: OpenVPN-CA (you can choose what you'd like here)
  • Email Address: support@rackspacecloud.com
Once you complete these items you will be taken back to your command prompt. Your ca.key and ca.crt files will be stored in the keys directory.

Create Server Certificate

Now we are ready to generate the certificate for the server's side of the VPN tunnel. Run the following command:
# ./build-key-server ServerA
You'll note that we used the server name of ServerA for the key. This will help us better identify that this is for ServerA which is the master VPN server.
You will be asked the same questions again and a few additional questions. The answers we have used for our demonstration are listed here:
  • Country Name: US
  • State or province: TX
  • Locality Name: San Antonio
  • Organization Name: Rackspace
  • Organizational Unit Name:
  • Common Name: ServerA (note that we used our server name)
  • Email Address: support@rackspacecloud.com
  • A challenge password:
  • An optional company name:
  • Sign the certificate: Y
  • Commit: Y
You will notice the two review questions at the end... simply press Y to those questions.

Create Client Certificate

Now we are ready to build the client certificate for ServerB. Run the following command:
# ./build-key ServerB
Notice that we used ServerB for the certificate name. You will be presented with the same questions as above with the client certificate. The only difference is that we will use ServerB for the Common Name.
Once the certificate has been saved you will see them in /etc/openvpn/easy-rsa/keys.

Create Diffie Hellman Keys

The final step in creating your TLS keys is producing the Diffie Hellman, or DH, keys. Run the following command to produce them:
# ./build-dh
You will see a series of characters run across the screen. This process may take up to 30 seconds or more to complete. Upon completion you will be returned to the command prompt.
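At this point all of the generated material lives in the easy-rsa keys directory. Assuming the default easy-rsa 2.0 layout used above, a quick listing should show the CA pair, the two certificate/key pairs, and the DH parameters (plus a few bookkeeping files such as index.txt and serial):

# ls /etc/openvpn/easy-rsa/keys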

Copy Keys

Now that we have the keys and certificates created it is time to put them in an appropriate spot.

Server A (TLS Server)

We will be storing our keys in /etc/openvpn/keys on both servers; however, we would like to keep them in the original directory on ServerA for regeneration purposes. To do this we will create a symbolic link:
# ln -s /etc/openvpn/easy-rsa/keys /etc/openvpn/keys

Server B (TLS Client)

ServerB doesn't currently have any of the keys installed so we will need to copy the keys from ServerA to ServerB. Run the following commands on ServerA to copy them over:
# scp -r /etc/openvpn/keys root@10.100.1.50:/etc/openvpn/keys
Now we have the keys we need on each server. However, we need to remove some files from ServerB as they really shouldn't be there. To fix this we'll log into ServerB through SSH and run the following commands:
# rm -f /etc/openvpn/keys/*.pem
# rm -f /etc/openvpn/keys/ServerA*
# rm -f /etc/openvpn/keys/index*
# rm -f /etc/openvpn/keys/serial*
You should be left with five (5) files remaining.

Create VPN Link

Now we are ready to establish our TLS-enabled VPN link between ServerA and ServerB.

Server A (TLS Server)

Run the following command in super user mode to establish the VPN tunnel:
# /usr/sbin/openvpn --remote 10.100.1.50 --dev tun1 --ifconfig 172.16.1.1 172.16.1.2 --tls-server \
     --dh /etc/openvpn/keys/dh1024.pem --ca /etc/openvpn/keys/ca.crt \
     --cert /etc/openvpn/keys/ServerA.crt --key /etc/openvpn/keys/ServerA.key \
     --reneg-sec 60 --verb 5

Server B (TLS Client)

Run the following command in super user mode to establish the VPN tunnel:
# /usr/sbin/openvpn --remote 10.100.1.20 --dev tun1 --ifconfig 172.16.1.2 172.16.1.1 --tls-client \
     --ca /etc/openvpn/keys/ca.crt --cert /etc/openvpn/keys/ServerB.crt --key /etc/openvpn/keys/ServerB.key \
     --reneg-sec 60 --verb 5

Once you run the appropriate line on each server you will see a page or two of text scroll across the terminal. There are a few lines that we need to pay attention to:
Wed Aug  5 23:11:18 2009 us=378185 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
This line above means that we are now using TLSv1 to encrypt our data channel. Great!


Wed Aug  5 23:11:18 2009 us=378185 [ServerA] Peer Connection Initiated with 10.100.1.20:1194
This line above means that our VPN tunnel has been established.

Test Your Link

With the VPN tunnel established you may open up new SSH connections to your server and perform connection tests using the 172.16.1.1 and 172.16.1.2 IP addresses. All traffic using these addresses will flow over the VPN tunnel. Once you are done testing you may bring down the tunnel by pressing Control-C on each server.

Logging

One thing to note is that on each of the openvpn commands we executed we used the command line argument --verb 5. This will raise the verbosity level of the application, in other words, the application logs more information. You will see quite a bit of information with this level including read and write activities across the VPN, key generation, and more. If you would like to turn off verbosity simply leave the --verb 5 off the command.

Startup Script

We've tested and proved that our VPN tunnel is working, but setting up the tunnel manually is simply not practical. To have the tunnel come up automatically, we will create a configuration file in /etc/openvpn/ that the OpenVPN startup script will load. The server and client configuration files will be different, so please be sure to use the correct configuration.

Server Script

To create the configuration file you will need to open your favorite text editor. For our example we will use nano:
# nano -w /etc/openvpn/server.conf
For the sake of simplicity we are only going to give the startup configuration for a TLS-enabled VPN tunnel. This particular configuration sets up an encrypted VPN tunnel (using the 1024-bit DH parameters generated earlier), and the VPN network will be 172.16.1.0/24.
local 10.100.1.20                    # Replace with your internal (eth1) IP
port 1194
proto udp
dev tun
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/ServerA.crt   # Replace with the key/certificate pair you created
key /etc/openvpn/keys/ServerA.key    # Replace with the key/certificate pair you created
dh /etc/openvpn/keys/dh1024.pem
server 172.16.1.0 255.255.255.0      # This is the network range that your server will give out.  These MUST be non-routeable.
ifconfig-pool-persist ipp.txt
keepalive 10 120
comp-lzo
user nobody
group nobody
status openvpn-status.log
verb 3
client-to-client
After you have entered this information into the text editor simply press Control-X to exit, then Y to save followed by the Enter key.

Client Script

To create the configuration file you will need to open your favorite text editor. For our example we will use nano:
# nano -w /etc/openvpn/client.conf
For the sake of simplicity we are only going to give the startup configuration for a TLS-enabled VPN tunnel. This particular configuration will pull an IP address from the VPN server's pool.
client
dev tun
local 10.100.1.50                   # Replace with your internal (eth1) IP
port 1194
proto udp
remote 10.100.1.20 1194             # Replace with your VPN server's IP
nobind
persist-key
persist-tun
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/ServerB.crt  # Replace with the key/certificate pair you created
key /etc/openvpn/keys/ServerB.key   # Replace with the key/certificate pair you created
comp-lzo
verb 3
After you have entered this information into the text editor simply press Control-X to exit, then Y to save followed by the Enter key.

Final Steps

Once you have the files saved it is time to enable the OpenVPN service. Type the following command on both the client and the server:
# /sbin/chkconfig openvpn on
You may also start the service by typing either of the following commands on the server and client:
# /etc/init.d/openvpn start

-OR-

# service openvpn start
You can verify that the service is running by typing:
# service openvpn status
You can see the interface information by typing the following:
# /sbin/ifconfig tun0

Summary

Hopefully this has given you some insight into how to set up VPN tunneling with OpenVPN. These examples just skim the surface of the types of VPN configurations that are possible with OpenVPN.
--Kelly Koehn 00:02, 6 August 2009 (CDT)

REFERENCES

http://cloudservers.rackspacecloud.com/index.php/CentOS_-_VPN_tunneling_with_OpenVPN

HOWTO Disable printing in Samba

SkyHi @ Thursday, September 02, 2010
Samba has printing support enabled by default. To disable printing support, use the following configuration settings in /etc/samba/smb.conf:
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
Now restart Samba, and printing support will be gone.
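A quick way to do that restart and verify the result; command names assume a stock CentOS/RHEL Samba install:

# service smb restart
# testparm -s 2>/dev/null | grep -E 'load printers|disable spoolss'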

REFERENCES
http://consultancy.edvoncken.net/index.php/HOWTO_Disable_printing_in_Samba

HOWTO Design a fault-tolerant DHCP + DNS solution

SkyHi @ Thursday, September 02, 2010
In this article, we will describe a design for a fault-tolerant (redundant) DHCP + DNS solution on Linux.
Design criteria:
  • Failure of one DHCP server should not prevent Clients from obtaining a valid IP address.
  • Failure of one DNS server should not prevent Clients from executing DNS queries.
  • The design should allow for Dynamic DNS updates.


Design Overview

The design involves a DNS Master / Slave pair as well as DHCP Primary / Secondary servers.
The sequence of events is as follows:
  1. The Client initiates a DORA (Discover, Offer, Request, Acknowledge) communications sequence with the DHCP servers.
  2. Depending on the Client MAC address, one of the DHCP servers will respond with a DHCP_OFFER.
  3. The Client obtains an IP address as well as additional network settings.
  4. The DHCP server communicates the new lease to its partner.
  5. The DHCP server sends a DNS update to the DNS Master server.
  6. The DNS Slave server(s) are kept in sync using DNS Zone Transfers.

Fault-Tolerant DHCP Service

In this design, we will use the ISC DHCP daemon. One of the features includes a DHCP Primary / Secondary failover configuration, consisting of exactly 2 DHCP servers.
The DHCP servers share a pool of IP addresses, and keep the lease database for this pool in sync between them. If one DHCP server fails, the remaining server will continue to issue IP addresses. This guarantees uninterrupted service to the Clients.

Fault-Tolerant DNS Service

In this design, we will use the ISC DNS daemon. There are two ways to achieve fault tolerance:
  1. Master / Slave configuration
  2. Multi-Master configuration

DNS Master / Slave

DNS Master / Slave configuration is fairly straightforward. All zone data is kept on the DNS Master. The Master is configured to allow zone transfers from the DNS Slaves. Each DNS Slave performs zone transfers to obtain the most recent DNS information from the DNS Master.
Clients obtain a list of DNS servers through DHCP. If a DNS server fails, the Client will attempt to contact one of the remaining DNS servers. This guarantees uninterrupted DNS resolution service to the clients.
If the DNS Master fails, the situation becomes more severe. The DHCP servers communicate their updates only to the DNS Master server. In case of an outage, Dynamic DNS updates could be lost. One possible solution is the use of a DNS Multi-Master configuration.

DNS Multi-Master

DNS Multi-Master configuration is more complicated to configure and maintain than a Master / Slave configuration.
In case of a DNS server failure, both DNS Query and DNS Update services remain operational.

Design Decision

For our design, it is sufficient if DNS Queries from Clients remain operational. We will choose the DNS Master / Slave approach here as it is less complex, and still satisfies the design criteria.

Operational Issues

Managing combined Static and Dynamic DNS

Dynamic DNS is an all-or-nothing feature. If DDNS is enabled for a specific DNS zone, it should no longer be edited manually - your changes may be corrupted or overwritten. This often conflicts with existing systems management practices.
There are two approaches to avoid potential DNS corruption issues when managing a combined Static / Dynamic DNS environment:
Create a separate DNS zone for Dynamic DNS records
Static DNS records can still be managed in their own zone (example.local) as usual. Dynamic DNS records will be managed automatically in a separate DNS (sub-)domain, like "ddns.example.local".
Keep all DNS records in one Dynamic zone
All entries, both static and dynamic, are in the same DNS zone (example.local). Static entries should be managed using the "nsupdate" utility - this means that system management practices and tooling may need to be changed.
In this design, we will use a single Dynamic zone. Static entries will be managed using the "nsupdate" utility.




HOWTO Configure DHCP failover

The ISC DHCP server currently supports failover using a maximum of 2 servers: primary and secondary. This is an active/active setup; a simple form of load balancing is used to spread the load across both servers.
In this example, we'll be setting up failover DHCP on two servers, 192.168.123.1 and 192.168.123.2. These servers also run DNS and NTP. Dynamic clients will get an address in the range 192.168.123.100-199. Static leases are defined for several networked devices.
Since the ISC DHCP server allows the use of "include-files" in the configuration, we will use them to help keep the configurations simple and in sync across servers.


Installation

Install the following package, for example using yum:
dhcp
This example is based on version dhcp-3.0.5-18.el5.

Configuration

The configuration consists of several sections, each stored in a separate file to make maintenance easier.

Failover parameters

For the Primary, define the following failover parameters in /etc/dhcpd.conf_primary:
##########################
 # DHCP Failover, Primary #
 ##########################
 
 failover peer "example" {                   # Failover configuration
        primary;                             # I am the primary
        address 192.168.123.1;               # My IP address
        port 647;
        peer address 192.168.123.2;          # Peer's IP address
        peer port 647;
        max-response-delay 60;
        max-unacked-updates 10;
        mclt 3600;
        split 128;                           # Leave this at 128, only defined on Primary
        load balance max seconds 3;
 }
For the Secondary, define the following failover parameters in /etc/dhcpd.conf_secondary:
############################
 # DHCP Failover, Secondary #
 ############################
 
 failover peer "example" {                   # Fail over configuration
        secondary;                           # I am the secondary
        address 192.168.123.2;               # My ip address
        port 647;
        peer address 192.168.123.1;          # Peer's ip address
        peer port 647;
        max-response-delay 60;
        max-unacked-updates 10;
        mclt 3600;
        load balance max seconds 3;
 }

Subnet declaration

Write a subnet declaration using our failover pool in /etc/dhcpd.conf_subnet. This section is identical on Primary and Secondary:
subnet 192.168.123.0 netmask 255.255.255.0  # zone to issue addresses from
 {
       pool {
               failover peer "example";      # Pool for dhcp leases with failover bootp not allowed
               deny dynamic bootp clients;
               range 192.168.123.100 192.168.123.190;
       }
       pool {                                # Accommodate our bootp clients here; no replication and failover
               range 192.168.123.191 192.168.123.199;
       }
       allow unknown-clients;
 
       authoritative;
 
       option routers             192.168.123.254;
       option subnet-mask         255.255.255.0;
       option broadcast-address   192.168.123.255;
       option domain-name         "example.local.";
       option domain-name-servers 192.168.123.1, 192.168.123.2;
       option ntp-servers         192.168.123.1, 192.168.123.2;
       option netbios-node-type   8;
 
       default-lease-time         300;
       max-lease-time             600;
 
       filename                   "/pxelinux.0";
       next-server                192.168.123.1;
 }
Note: the manpage for dhcpd.conf(5) states that dynamic BOOTP leases are not compatible with failover.
Therefore, BOOTP should be disabled in pools using failover.

Dynamic DNS

If you are configuring Dynamic DNS, write the settings in /etc/dhcpd.conf_ddns. This section is identical on Primary and Secondary:
ddns-update-style interim;
 ddns-updates on;
 ddns-domainname "example.local."; 
 ignore client-updates;
 
 # Forward zone for DNS updates
 zone example.local
 {
       primary 192.168.123.1;                # update the primary DNS
       key ddns-update;                      # key to use for the update
 }
 
 # Reverse zone for DNS updates
 zone 123.168.192.in-addr.arpa
 {
       primary 192.168.123.1;                # update the primary DNS
       key ddns-update;                      # key for update
 }
Note: for security reasons, DNS updates need to be "signed" using a public/private key mechanism.
The "key ddns-update" statement specifies that DHCP will use a key named "ddns-update" during update requests.
For more information on this key, please refer to HOWTO Configure Dynamic DNS.

Static leases

For more flexible IP address management, configure all devices to use DHCP and set up static leases for these devices.
In /etc/dhcpd.conf_static, create all static leases that you may need (outside of the DHCP/BOOTP range!). Again, this section is identical on Primary and Secondary:
# Axis Security Camera
 host cam-reception {
       hardware ethernet 00:40:12:c0:ff:ee;
       fixed-address 192.168.123.200;
 }
 
 # Axis Security Camera
 host cam-fireexit {
       hardware ethernet 00:40:fe:ed:fa:ce;
       fixed-address 192.168.123.201;
 }
 
 # Axis Security Camera
 host cam-frontdoor {
       hardware ethernet 00:40:de:ad:be:ef;
       fixed-address 192.168.123.202;
 }

Overall configuration

The configuration of the Primary and Secondary DHCP servers is mostly identical, except for the Failover parameters. By keeping the sub-configurations in sync across servers (perhaps using rsync), maintenance is reduced to a minimum.
The overall configuration file, /etc/dhcpd.conf, is only slightly different on Primary and Secondary.
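As a sketch of how that sync could be done, the shared fragments can be pushed from the Primary to the Secondary with rsync; the _primary/_secondary files intentionally stay local to each server:

# rsync -av /etc/dhcpd.conf_subnet /etc/dhcpd.conf_ddns /etc/dhcpd.conf_static \
       /etc/ddns-update.dnskey root@192.168.123.2:/etc/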

Configuring /etc/dhcpd.conf on the Primary

# DHCP Server - Configuration file for Primary
 #
 # File $Id: dhcpd.conf,v 1.21 2009/07/09 16:26:57 root Exp root $
 
 # Global configuration
 set vendorclass = option vendor-class-identifier;
 
 # Dynamic DNS Updates
 include "/etc/ddns-update.dnskey";
 include "/etc/dhcpd.conf_ddns";
 
 # DHCP Failover, Primary
 include "/etc/dhcpd.conf_primary";
 
 # Subnet declaration
 include "/etc/dhcpd.conf_subnet";
 
 # Static IP addresses
 include "/etc/dhcpd.conf_static";
 
 # EOF

Configuring /etc/dhcpd.conf on the Secondary

# DHCP Server - Configuration file for Secondary
 #
 # File $Id: dhcpd.conf,v 1.9 2009/07/09 16:31:20 root Exp root $
 
 # Global configuration
 set vendorclass = option vendor-class-identifier;
 
 # Dynamic DNS Updates
 include "/etc/ddns-update.dnskey";
 include "/etc/dhcpd.conf_ddns";
 
 # DHCP Failover, Secondary
 include "/etc/dhcpd.conf_secondary";
 
 # Subnet declaration
 include "/etc/dhcpd.conf_subnet";
 
 # Static IP addresses
 include "/etc/dhcpd.conf_static";
 
 # EOF

Miscellaneous

SELinux considerations

By default, SELinux policy does not allow the BIND daemon (named) to write to files labeled with the named_zone_t type, which is used for master zone files. The zone files should be stored under /var/named/chroot/var/named/data or /var/named/chroot/var/named/dynamic.
# restorecon -R -v /var/named/chroot/var/named/data
 # restorecon -R -v /var/named/chroot/var/named/dynamic
This will reset the zone files to the named_cache_t type, hopefully solving the "SELinux is preventing named (named_t) "unlink"" error messages.

Firewall settings

Your firewall should allow inbound DHCP traffic on 67/UDP and the failover channel on 647/TCP, plus 69/UDP if you serve PXE/TFTP boot files as configured in the subnet declaration above. Sample entries for /etc/sysconfig/iptables:
# DHCP server
 -A INPUT -p udp -m udp --dport 67 -j ACCEPT
 -A INPUT -p udp -m udp --dport 69 -j ACCEPT
 -A INPUT -m state --state NEW -m tcp -p tcp --dport 647 -j ACCEPT

Starting the service

On both DHCP Primary and Secondary, run the following commands as root:
# chkconfig dhcpd on
 # service dhcpd start








HOWTO Configure Dynamic DNS

In this example, we will set up a DNS Master and DNS Slave server, on 192.168.123.1 and 192.168.123.2 respectively.
The configuration will also allow for Dynamic DNS updates from our DHCP servers.


Installation

Install the following packages, for example using yum:
bind
 bind-chroot
 bind-utils
This example is based on bind-9.3.4-10.P1.el5_3.1.

Configuration

The configuration and data files for the chroot()-ed BIND DNS server can be found under /var/named/chroot/. Under /etc, you will find a symlink pointing to /var/named/chroot/etc/named.conf.

DNS Keys

For Dynamic DNS to work, the updates need to be "signed" using a transaction key. Since this is a symmetric key, it has to be shared between DNS and DHCP. It must be protected to prevent unauthorized changes being made to your DNS zones. The key has to be available on both DHCP servers. Generate the key as follows:
# cd /tmp
 # dnssec-keygen -a HMAC-MD5 -b 512 -n HOST ddns-update
These commands generate a set of .key and .private files in the current working directory. Move these files to a better name and location:
# mv Kddns-update.*.key /etc/ddns-update.key
 # cat /etc/ddns-update.key 
 ddns-update. IN KEY 512 3 157 K3EaOD3IysiC/D7lIXp+4hrYGDLyIq6la[...]9oE4kZ3O1ZFxKSMHfwG5YvUkYE7gxMHCmCg==
 
 # mv Kddns-update.*.private /etc/ddns-update.private
 # cat /etc/ddns-update.private 
 Private-key-format: v1.2
 Algorithm: 157 (HMAC_MD5)
 Key: K3EaOD3IysiC/D7lIXp+4hrYGDLyIq6la[...]9oE4kZ3O1ZFxKSMHfwG5YvUkYE7gxMHCmCg==
Note that the actual private and public keys are identical for HMAC-MD5. This is normal. The .key and .private files are needed by the nsupdate utility, later on.
We now need to create a configuration file in a different format, for use by the DHCP and DNS servers; we will call this file /etc/ddns-update.dnskey. The syntax is identical to that of the /etc/rndc.key file. We need to set the key name and the key value properly:
# cat /etc/ddns-update.dnskey 
 key "ddns-update" {
       algorithm hmac-md5;
       secret "K3EaOD3IysiC/D7lIXp+4hrYGDLyIq6la[...]9oE4kZ3O1ZFxKSMHfwG5YvUkYE7gxMHCmCg==";
 };
Make sure it has the proper ownership and permissions:
# ls -l /etc/ddns-update.dnskey
 -rw-r----- 1 root named 145 Jul  9 12:25 ddns-update.dnskey
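If your copy of the file does not have that ownership and mode yet, set it explicitly (matching the listing above):

# chown root:named /etc/ddns-update.dnskey
# chmod 640 /etc/ddns-update.dnskey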
On the Primary DHCP / Master DNS server, the key needs to be available both as /etc/ddns-update.dnskey (for DHCP) and /var/named/chroot/etc/ddns-update.dnskey (for DNS). Creating a symlink will not work due to the SElinux policy; you will have to copy the file instead, so each copy has its own SElinux context:
# cp /etc/ddns-update.dnskey /var/named/chroot/etc/
 # ls -lZ /etc/ddns-update.dnskey /var/named/chroot/etc/ddns-update.dnskey 
 -rw-r-----  root root  root:object_r:etc_t              /etc/ddns-update.dnskey
 -rw-r-----  root named root:object_r:named_conf_t       /var/named/chroot/etc/ddns-update.dnskey

DNS Master configuration

On the Master, we will define all zones that we are authoritative for. We will also allow DNS updates to these zones from our DHCP servers.
# ISC BIND Configuration File
 #
 # Purpose:
 #   Configure BIND as caching/forwarding nameserver with authority
 #   for local networks as well as support for Dynamic DNS Updates
 #
 # File $Id: named.conf,v 1.4 2009/07/07 12:59:12 root Exp root $
 
 options {
       directory "/etc";
       pid-file "/var/run/named/named.pid";
       forwarders {
               // Put your ISP's DNS servers here
               66.159.123.200;
               66.159.123.201;
       };
       allow-query { localhost; localnets; };
 };
 
 # Key used by DHCP servers for Dynamic DNS Updates
 include "/etc/ddns-update.dnskey";
 
 zone "example.local" {
       type master;
       file "/var/named/data/example.local.zone";
       allow-transfer { 192.168.123.2; };
       allow-update { key "ddns-update"; };
 };
 
 zone "123.168.192.in-addr.arpa" {
       type master;
       file "/var/named/data/192.168.123.zone";
       allow-transfer { 192.168.123.2; };
       allow-update { key "ddns-update"; };
 };
 
 # EOF
SELinux Note: On the DNS Master, use the "data" sub-directory to store zone files.
Otherwise, you will see errors while trying to create journal files on the Master.

DNS Slave configuration

You can have multiple DNS Slave servers. Each will perform a zone transfer regularly, keeping the data in sync.
Dynamic DNS updates, originating from our DHCP servers are sent to the DNS Master only.
# ISC BIND Configuration File
 #
 # Purpose:
 #   Configure BIND as caching/forwarding slave nameserver
 #
 # File $Id: named.conf,v 1.4 2009/07/08 02:02:19 root Exp $
 
 options {
       directory "/etc";
       pid-file "/var/run/named/named.pid";
       forwarders {
               // Put your ISP's DNS servers here
               66.159.123.200;
               66.159.123.201;
       };
       allow-query { localhost; localnets; };
       allow-notify { 192.168.123.2; };
 };
 
 # Dynamic DNS Updates are only sent to the Primary DNS
 
 zone "example.com" {
       type slave;
       masters { 192.168.123.1; };
       file "/var/named/slaves/example.com.zone";
 };
 
 zone "123.168.192.in-addr.arpa" {
       type slave;
       masters { 192.168.123.1; };
       file "/var/named/slaves/192.168.123.zone";
 };
The "allow-notify" option prevents BIND from generating error messages as it apparently tries to notify itself of updates. Go figure ;-)
SELinux Note: On the DNS Slave, use the "slaves" sub-directory to store data from the DNS Master.
Otherwise, you will get a "permission denied" error on the Slave while trying to transfer the zones from the Master.

DNS Zone files

On the DNS Master, we create a minimal set of zone files (forward and reverse zones). Entries will be managed either by DHCP or nsupdate.
/var/named/data/example.local.zone:
 ; DO NOT EDIT MANUALLY - use the "nsupdate" utility to prevent data loss
 ;
 $ORIGIN example.local.
 $TTL 86400 ; 1 day
 @  IN SOA ns1.example.local. hostmaster.example.local. (
     2009074711 ; serial
     7200       ; refresh (2 hours)
     300        ; retry (5 minutes)
     604800     ; expire (1 week)
     60         ; minimum (1 minute)
     )
   IN NS ns1.example.local.
   IN NS ns2.example.local.
 ns1  IN A 192.168.123.1
 ns2  IN A 192.168.123.2
/var/named/data/192.168.123.zone:
 ; DO NOT EDIT MANUALLY - use the "nsupdate" utility to prevent data loss
 ;
 $ORIGIN 123.168.192.in-addr.arpa.
 $TTL 86400 ; 1 day
 @  IN SOA ns1.example.local. hostmaster.example.local. (
     2009074711 ; serial
     7200       ; refresh (2 hours)
     300        ; retry (5 minutes)
     604800     ; expire (1 week)
     60         ; minimum (1 minute)
     )
   IN NS ns1.example.local.
   IN NS ns2.example.local.
 1  IN PTR ns1.example.local.
 2  IN PTR ns2.example.local.
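Once both name servers are running, a simple way to confirm that the Slave has transferred the zone is to compare the SOA record as seen by each server; both should report the same serial number:

# dig @192.168.123.1 example.local SOA +short
# dig @192.168.123.2 example.local SOA +short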

Miscellaneous

Client configuration

On RHEL/CentOS/Fedora clients, you should edit /etc/sysconfig/network-scripts/ifcfg-eth0 and set the DHCP_HOSTNAME variable to the short hostname of your machine. The client will now send its hostname to the DHCP server during IP address negotiation. The DHCP_HOSTNAME is used for updating Dynamic DNS. Sample:
# Sample Network Device
 DEVICE=eth0
 HWADDR=00:16:de:ad:be:ef
 ONBOOT=yes
 BOOTPROTO=dhcp
 DHCP_HOSTNAME=demo01

Using nsupdate to add or remove DNS entries

Adding a host (A and PTR records)

# nsupdate -k /etc/ddns-update.key
 > update add gateway.example.local 38400 A 192.168.123.254
 > 
 > update add 254.123.168.192.in-addr.arpa. 38400 PTR gateway.example.local.
 >
 > quit
Note: The empty line is necessary, it sends the update to DNS. Since we are adding records to two different zones, we need to send two separate updates.

Deleting a host (A and PTR records)

# nsupdate -k /etc/ddns-update.key 
 > update delete gateway.example.local IN A 192.168.123.254
 > 
 > update delete 254.123.168.192.in-addr.arpa PTR gateway.example.local.
 > 
 > quit

Adding a mail-host (MX records)

The domain "example.local" wishes to use "mail.example.local" as their primary mail host.
We first need to add the standard A and PTR records for the mailhost (TTL 86400 seconds), followed by the MX record for the domain:
# nsupdate -k /etc/ddns-update.key 
 > update add mail.example.local 86400 IN A 192.168.123.25
 > 
 > update add 25.123.168.192.in-addr.arpa. 86400 PTR mail.example.local.
 > 
 > update add example.local 86400 MX 10 mail.example.local.
 > 
 > quit
Note: The mailhost should of course be accessible from the Internet and use a routable IP address instead of an RFC1918 address.
Verify the results using 'dig':
# dig example.local MX
 
 ; <<>> DiG 9.3.4-P1 <<>> example.local MX
 ;; global options:  printcmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15733
 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3
 
 ;; QUESTION SECTION:
 ;example.local.   IN MX
 
 ;; ANSWER SECTION:
 example.local.  86400 IN MX 10 mail.example.local.
 
 ;; AUTHORITY SECTION:
 example.local.  86400 IN NS ns2.example.local.
 example.local.  86400 IN NS ns1.example.local.
 
 ;; ADDITIONAL SECTION:
 mail.example.local. 86400 IN A 192.168.123.25
 ns1.example.local. 86400 IN A 192.168.123.1
 ns2.example.local. 86400 IN A 192.168.123.2
 
 ;; Query time: 1 msec
 ;; SERVER: 127.0.0.1#53(127.0.0.1)
 ;; WHEN: Fri Jul 31 11:34:29 2009
 ;; MSG SIZE  rcvd: 134

Deleting a mail-host (MX records)

If we wish to remove the mail-host, just delete the MX, A and PTR records:
# nsupdate -k /etc/ddns-update.key 
 > update delete example.local MX 10 mail.example.local.
 > 
 > update delete mail.example.local IN A 192.168.123.25
 > 
 > update delete 25.123.168.192.in-addr.arpa PTR mail.example.local.
 > 
 > quit
Note: Mail may continue to be delivered to the old mailhost until the TTL expires!

Debugging

During development, you may want to enable some extra logging in /etc/named.conf:
logging {
       channel update_debug {
               file "/var/named/data/named-update.log";
               severity  debug 3;
               print-category yes;
               print-severity yes;
               print-time     yes;
       };
 
       channel security_info    {
               file "/var/named/data/named-auth.log";
               severity  debug 3;
               print-category yes;
               print-severity yes;
               print-time     yes;
       };
 
       category update { update_debug; };
       category security { security_info; };
 };

Starting the service

On both Master and Slave DNS, start the BIND nameserver:
# chkconfig named on
 # service named start
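As a quick smoke test, query each server directly and make sure it answers with a record from the zone; dig is part of the bind-utils package installed earlier:

# dig @192.168.123.1 ns1.example.local +short
# dig @192.168.123.2 ns1.example.local +short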
HOWTO Manage Dynamic DNS with nsupdate


(The A/PTR and MX record examples are identical to the ones shown above under "Using nsupdate to add or remove DNS entries".)

Service (SRV) records

Adding SRV records for your IPA Server

After installing the IPA Server ("apollo" in this example), you should add some service-records to DNS for IPA discovery. The installer leaves a sample DNS zone file in /tmp. This is how I added the relevant records using nsupdate:
# nsupdate -k /etc/ddns-update.key
 > update add _ldap._tcp.example.local. 86400 IN SRV 0 100 389 apollo
 > 
 > update add _kerberos._tcp.example.local. 86400 IN SRV 0 100 88 apollo
 > 
 > update add _kerberos._udp.example.local. 86400 IN SRV 0 100 88 apollo
 > 
 > update add _kerberos-master._tcp.example.local. 86400 IN SRV 0 100 88 apollo
 > 
 > update add _kerberos-master._udp.example.local. 86400 IN SRV 0 100 88 apollo
 > 
 > update add _kpasswd._tcp.example.local. 86400 IN SRV 0 100 464 apollo
 > 
 > update add _kpasswd._udp.example.local. 86400 IN SRV 0 100 464 apollo
 > 
 > quit
