Friday, May 20, 2011

removing the password on an apache ssl certificate

SkyHi @ Friday, May 20, 2011
Every once in a while I run across an SSL cert with an included password. Although the security is great, automating an environment or an Apache restart that requires interaction is problematic.
Here is an example of the interaction with a password included SSL Cert:

[root@w2 conf.d]# /etc/init.d/httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd: Apache/2.2.8 mod_ssl/2.2.8 (Pass Phrase Dialog)
Some of your private key files are encrypted for security reasons.
In order to read them you have to provide the pass phrases.
Server (RSA)
Enter pass phrase:
OK: Pass Phrase Dialog successful.
And here is how you remove the password (the key filenames here are placeholders for your own):
[root@w2 conf]# openssl rsa -in server.key -out server-nopass.key
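If you want to try the whole procedure safely first, here is a self-contained sketch; the key file, passphrase, and filenames are all made up for the demonstration:

```shell
# 1. Create a passphrase-protected RSA key, like the one Apache was prompting for.
openssl genrsa -aes128 -passout pass:secret123 -out server.key 2048
# 2. Write an unprotected copy. -passin supplies the passphrase non-interactively;
#    leave it off and openssl prompts, just as Apache did at startup.
openssl rsa -in server.key -passin pass:secret123 -out server-nopass.key
# 3. Confirm the new key carries no encryption header, then lock it down:
#    a key without a passphrase should be readable by root only.
grep ENCRYPTED server-nopass.key || echo "no passphrase"
chmod 600 server-nopass.key
```

Point Apache's SSLCertificateKeyFile directive at the stripped copy and the restart no longer blocks on the pass phrase dialog.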

Ubuntu Titlebar is missing

SkyHi @ Friday, May 20, 2011
Try pressing Alt+F2, then enter gtk-window-decorator --replace

Converting videos in Ubuntu using FFMPEG

SkyHi @ Friday, May 20, 2011
If you are a multimedia junkie and feel it's your karma to convert tons of videos and music to popular formats, then FFmpeg is the right tool for you. FFmpeg is a free video converter and so much more. It can be found in the default Ubuntu repository and also comes pre-installed in most other distros. It's an open source project that contains a large number of libraries, the most notable among them being libavcodec (for encoding and decoding audio and video data) and libavformat (a mux/demux library).

That's not all. It can also be used for more advanced functions like slowing down the frame rate, resizing the video and many more. Good news for geeks: FFmpeg is CLI based, so you don't need to struggle with those unbearable mouse clicks. The latest stable release of FFmpeg is 0.6.
Converting a video using the command line.

Warning: if the words "command line" give you a heart attack, then you should probably skip this section and jump straight to the bottom of the page.
FFmpeg is mainly used for converting videos from one format to another. Duh! The most common syntax consists of the input file followed by the output file in the desired format.

ffmpeg -i inputvideo outputvideo
Sounds simple, right? Well, there is a glitch. The quality of the output (bit rate, frame rate) gets degraded when we use this bare form.
To avoid this problem we use a number of options along with the command. Suppose we want to convert an .avi file to an .flv video. Use the following command.
ffmpeg -i input.avi -ab 56 -ar 44100 -b 200 -r 15 -s 320x240 -f flv final.flv 
After execution, processing takes place and you should see output like the screenshot below:
[screenshot: convert video ubuntu 1.png]

No doubt, the command is very long, but it is also very easy to understand. Now let's get to know those cryptic options:
  • -i = input file
  • -ab = audio bitrate
  • -ar = audio sampling frequency
  • -b = video bitrate
  • -r = video frame rate
  • -s = size of the video display
  • -f = container format
  • -vcodec = the video codec to use during the conversion
  • -acodec = the audio codec to use during the conversion
See, we told you it would be easy.

Now let's try out some more popular options.

Convert videos from .avi to divx format.

ffmpeg -i input.avi -s 320x240 -vcodec msmpeg4v2 output.avi
Convert videos from .avi to dv format

ffmpeg -i input.avi -s pal -r pal -aspect 4:3 -ar 48000 -ac 2 final.dv
Convert videos from .mpg to .avi format

ffmpeg -i input.mpg output.avi
Convert videos from wav file to mp3 format

ffmpeg -i input.wav -vn -ar 44100 -ac 2 -ab 192 -f mp3 output.mp3
Convert videos from ogg theora to mpeg format

ffmpeg -i input.ogm -s 720x576 -vcodec mpeg2video -acodec mp3 output.mpg
Changing the Frame Rate

Frame rate can be defined as the number of frames displayed per second. Frame rate is directly proportional to the speed of the video and is mainly responsible for the animation or continuous stream of images. Phew! Now we do not want to sound like your math teacher.
Sometimes it's necessary for developers to slow down the frame rate of a video, so that they can better follow the minute changes in it. This can be done simply by including the -r option.
The syntax is:

ffmpeg -i inputfile -r 5 outputfile
This will create the output video at a rate of 5 frames per second. When played, the video will run noticeably more slowly.
Extracting Audio From The Video

Sometimes you only need the audio. FFmpeg is capable of extracting the audio from videos too.
ffmpeg -i input.avi -vn -ar 44100 -ac 2 -ab 192 -f mp3 audio.mp3
The -vn option holds the key to this command: it disables video recording. Executing the above command will extract the audio from input.avi and save it under the name “audio.mp3”. You can add more appropriate options in between if you want to squeeze the maximum from this nifty tool.
Converting Images To Videos

Now here is a neat little trick.

Suppose you have a collection of images named, say, image1.jpg, image2.jpg, and you want to view them in video format. FFmpeg can do this job as well. You can do this by running this command in the shell.
ffmpeg -f image2 -i image%d.jpg output.mpg
You can also do this in reverse. So there you go: complete video converting in Ubuntu. Play with the options and you are likely to find more cool tricks.
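The reverse trip, exploding a video back into numbered images, uses the same image2 muxer. Here is a hypothetical sketch; it first synthesizes a one-second test clip with FFmpeg's lavfi test source (assuming your build includes it), so no real video is needed:

```shell
# Generate a tiny 1-second, 25 fps test clip (filenames are made up).
ffmpeg -y -f lavfi -i testsrc=duration=1:rate=25:size=320x240 input.mpg
# Explode it back into numbered JPEG frames via the image2 muxer.
ffmpeg -y -i input.mpg -f image2 frame%d.jpg
ls frame*.jpg
```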
So that's it for the geek stuff. If you want a more comprehensive list of options, type the following command in the terminal.

man ffmpeg
WinFF : GUI Client of FFmpeg

Okay. I guess you jumped straight to this one.

If you are not comfortable with the CLI, don't worry: there is an equivalent GUI client for FFmpeg called “WinFF”, and it is much easier to use.
You can install it from the Ubuntu repository by typing
sudo apt-get install winff 
To open it, either type winff in the terminal or go to Applications → Sound & Video → WinFF. You will see a screenshot like the one below:

[screenshot: convert video ubuntu.png]
As you can see, the GUI makes the setup much easier.

Input your video by clicking the “Add” button at the top. Then select the video and, from the drop-down menu, select the output options according to your need. Then click the convert button on the top panel and that's it. You are done with the formatting. To set the audio and video rates you need to use the “Options” button. It will display the extra settings panel at the bottom of the output settings, like this:
[screenshot: convert video ubuntu2.png]
WinFF is also cross-platform, open source, GPL-licensed software.


Wednesday, May 18, 2011

FreeNAS 8.0 SAMBA/CIFS user accounts

SkyHi @ Wednesday, May 18, 2011
I am having some trouble configuring samba user accounts and user authentication. First of all, let me show what I have done already:

I made new dataset for samba share:


I made new user samba with group samba:


I gave user samba all permissions and ownership to dataset:


I configured cifs service with local user authentication:


I configured new samba share:


Enabled CIFS/SMB service.

Mounting share from another linux box:

smbclient -U samba '\\server\samba' samba1

and got:

Domain=[server] OS=[Unix] Server=[Samba 3.5.6]
tree connect failed: NT_STATUS_ACCESS_DENIED

Also tried:
mount //server/samba /mnt -o username=samba,password=samba1

and got:

13880: tree connect failed: ERRDOS - ERRnoaccess (Access denied.)
SMB connection failed

In samba log file I found:

[2011/05/03 13:28:40.016645, 2] smbd/service.c:587(create_connection_server_info)
guest user (from session setup) not permitted to access this share (samba)
[2011/05/03 13:28:40.016699, 1] smbd/service.c:678(make_connection_snum)
create_connection_server_info failed: NT_STATUS_ACCESS_DENIED

What have I missed? All I want to achieve is that only the user samba can mount and access the share samba.

This is samba share section from smb.conf:
path = /mnt/storage/samba
printable = no
veto files = /.snap/
comment = samba share
writeable = yes
browseable = no
inherit permissions = no
valid users = samba  ## Windows prompts for authentication
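For what it's worth, the usual cause of that exact "guest user ... not permitted" log line (an educated guess, not a confirmed diagnosis) is that the user exists on the system but was never added to Samba's own password database, so authentication fails and the session falls back to guest; on a stock Samba install that is fixed with smbpasswd -a samba. With the user in the passdb, a share section along these lines, with guest access explicitly off, should restrict the share to that one user:

```
[samba]
comment = samba share
path = /mnt/storage/samba
printable = no
guest ok = no
browseable = no
writeable = yes
valid users = samba
```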


Tuesday, May 17, 2011

Understanding memory usage on Linux

SkyHi @ Tuesday, May 17, 2011
This entry is for those people who have ever wondered, "Why the hell is a simple KDE text editor taking up 25 megabytes of memory?" Many people are led to believe that many Linux applications, especially KDE or Gnome programs, are "bloated" based solely upon what tools like ps report. While this may or may not be true, depending on the program, it is not generally true -- many programs are much more memory efficient than they seem.

What ps reports
The ps tool can output various pieces of information about a process, such as its process id, current running state, and resource utilization. Two of the possible outputs are VSZ and RSS, which stand for "virtual set size" and "resident set size", which are commonly used by geeks around the world to see how much memory processes are taking up.

For example, here is the output of ps aux for KEdit on my computer:
dbunker   3468  0.0  2.7  25400 14452 ?        S    20:19   0:00 kdeinit: kedit

According to ps, KEdit has a virtual size of about 25 megabytes and a resident size of about 14 megabytes (both numbers above are reported in kilobytes). It seems that most people like to randomly choose to accept one number or the other as representing the real memory usage of a process. I'm not going to explain the difference between VSZ and RSS right now but, needless to say, this is the wrong approach; neither number is an accurate picture of what the memory cost of running KEdit is.
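If you want to see where ps gets those two numbers, on Linux they come straight out of /proc; here is a quick look using the current shell's own entry as a stand-in for KEdit (Linux-specific, /proc assumed to be mounted):

```shell
# VmSize is what ps reports as VSZ and VmRSS is RSS, both in kB.
grep -E '^Vm(Size|RSS)' /proc/self/status
```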

Why ps is "wrong"
Depending on how you look at it, ps is not reporting the real memory usage of processes. What it is really doing is showing how much real memory each process would take up if it were the only process running. Of course, a typical Linux machine has several dozen processes running at any given time, which means that the VSZ and RSS numbers reported by ps are almost definitely "wrong". In order to understand why, it is necessary to learn how Linux handles shared libraries in programs.

Most major programs on Linux use shared libraries to facilitate certain functionality. For example, a KDE text editing program will use several KDE shared libraries (to allow for interaction with other KDE components), several X libraries (to allow it to display images and copy and pasting), and several general system libraries (to allow it to perform basic operations). Many of these shared libraries, especially commonly used ones like libc, are used by many of the programs running on a Linux system. Due to this sharing, Linux is able to use a great trick: it will load a single copy of the shared libraries into memory and use that one copy for every program that references it.

For better or worse, many tools don't care very much about this very common trick; they simply report how much memory a process uses, regardless of whether that memory is shared with other processes as well. Two programs could therefore use a large shared library and yet have its size count towards both of their memory usage totals; the library is being double-counted, which can be very misleading if you don't know what is going on.

Unfortunately, a perfect representation of process memory usage isn't easy to obtain. Not only do you need to understand how the system really works, but you need to decide how you want to deal with some hard questions. Should a shared library that is only needed for one process be counted in that process's memory usage? If a shared library is used by multiple processes, should its memory usage be evenly distributed among the different processes, or just ignored? There isn't a hard and fast rule here; you might have different answers depending on the situation you're facing. It's easy to see why ps doesn't try harder to report "correct" memory usage totals, given the ambiguity.

Seeing a process's memory map
Enough talk; let's see what the situation is with that "huge" KEdit process. To see what KEdit's memory looks like, we'll use the pmap program (with the -d flag):
Address   Kbytes Mode  Offset           Device    Mapping
08048000      40 r-x-- 0000000000000000 0fe:00000 kdeinit
08052000       4 rw--- 0000000000009000 0fe:00000 kdeinit
08053000    1164 rw--- 0000000008053000 000:00000   [ anon ]
40000000      84 r-x-- 0000000000000000 0fe:00000
40015000       8 rw--- 0000000000014000 0fe:00000
40017000       4 rw--- 0000000040017000 000:00000   [ anon ]
40018000       4 r-x-- 0000000000000000 0fe:00000
40019000       4 rw--- 0000000000000000 0fe:00000
40027000     252 r-x-- 0000000000000000 0fe:00000
40066000      20 rw--- 000000000003e000 0fe:00000
4006b000    3108 r-x-- 0000000000000000 0fe:00000
40374000     116 rw--- 0000000000309000 0fe:00000
40391000       8 rw--- 0000000040391000 000:00000   [ anon ]
40393000    2644 r-x-- 0000000000000000 0fe:00000
40628000     164 rw--- 0000000000295000 0fe:00000
40651000       4 rw--- 0000000040651000 000:00000   [ anon ]
40652000     100 r-x-- 0000000000000000 0fe:00000
4066b000       4 rw--- 0000000000019000 0fe:00000
4066c000      68 r-x-- 0000000000000000 0fe:00000
4067d000       4 rw--- 0000000000011000 0fe:00000
4067e000       4 rw--- 000000004067e000 000:00000   [ anon ]
4067f000    2148 r-x-- 0000000000000000 0fe:00000
40898000      64 rw--- 0000000000219000 0fe:00000
408a8000       8 rw--- 00000000408a8000 000:00000   [ anon ]
... (trimmed) ...
mapped: 25404K    writeable/private: 2432K    shared: 0K

I cut out a lot of the output; the rest is similar to what is shown. Even without the complete output, we can see some very interesting things. One important thing to note about the output is that each shared library is listed twice: once for its code segment and once for its data segment. The code segments have a mode of "r-x--", while the data is set to "rw---". The Kbytes, Mode, and Mapping columns are the only ones we will care about, as the rest are unimportant to the discussion.

If you go through the output, you will find that the lines with the largest Kbytes number are usually the code segments of the included shared libraries (the ones that start with "lib" are the shared libraries). What is great about that is that they are the ones that can be shared between processes. If you factor out all of the parts that are shared between processes, you end up with the "writeable/private" total, which is shown at the bottom of the output. This is what can be considered the incremental cost of this process, factoring out the shared libraries. Therefore, the cost to run this instance of KEdit (assuming that all of the shared libraries were already loaded) is around 2 megabytes. That is quite a different story from the 14 or 25 megabytes that ps reported.
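You can pull that same bottom-line summary for any live process without wading through the full map; here the current shell stands in for KEdit (pmap ships with procps on most distros):

```shell
# The trailing line of pmap -d is the mapped / writeable-private / shared summary.
pmap -d $$ | tail -1
```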

What does it all mean?
The moral of this story is that process memory usage on Linux is a complex matter; you can't just run ps and know what is going on. This is especially true when you deal with programs that create a lot of identical child processes, like Apache. ps might report that each Apache process uses 10 megabytes of memory, when the reality might be that the marginal cost of each Apache process is 1 megabyte of memory. This information becomes critical when tuning Apache's MaxClients setting, which determines how many simultaneous requests your server can handle (although see one of my past postings for another way of increasing Apache's performance).

It also shows that it pays to stick with one desktop's software as much as possible. If you run KDE for your desktop, but mostly use Gnome applications, then you are paying a large price for a lot of redundant (but different) shared libraries. By sticking to just KDE or just Gnome apps as much as possible, you reduce your overall memory usage due to the reduced marginal memory cost of running new KDE or Gnome applications, which allows Linux to use more memory for other interesting things (like the file cache, which speeds up file accesses immensely).


How to lock down a Windows 2008 Server

SkyHi @ Tuesday, May 17, 2011
Basics: Updates, Firewall, secured Admin account (renamed, very good password). Then get the Best Practices Analyzers for SQL, IIS, and Server; run them and see what their recommendations are.

Microsoft® SQL Server® 2008 R2 Best Practices Analyzer

Best Practices Analyzer

Security Configuration Wizard


PHP “require” Performance

SkyHi @ Tuesday, May 17, 2011
We recently went through a round of performance improvements for our website, and got a significant performance boost off of a relatively small change.
Previously, our code had a section at the start of every page:
Each file was a single class of the same name as the file. While we're well aware of PHP's ability to autoload class files, we chose to list each file out because of a talk by Rasmus Lerdorf on file loading performance, in which he mentioned that __autoload causes issues with opcode caching and as a result will cause a drop in performance.
Speaking of which, if you haven't heard of opcode caching for PHP, stop now and go read up. A simple sudo apt-get install php-apc will give you an order-of-magnitude speedup on your website. There's no reason for any production server not to be using it.
Anyway, this may have been true when we only had a few includes, but now we have 30 files that we were including on every page load! It was time for some performance testing.
It’s also a fairly well-known fact that require_once is much slower than require…I wasn’t thinking when I used require_once. I also tested the difference between those two calls.
I tested 2 pages. First was our homepage, which only requires 3 of these 30 files. Second was an inner page that requires 5. They were tested with ab, and the numbers listed are the mean response times under high concurrency. Lower is faster.
For reference the autoload code used is:
function __autoload($name) {
    require('include/' . $name . '.php');
}


Homepage (3 required files)
require_once: 1579
require:      1318
__autoload:   578
Inner page (5 required files)
require_once: 1689
require:      1382
__autoload:   658
Wow! Over a 2x speedup…that’s pretty nice. This led me to wonder: what’s the difference in time when we’re only loading the files we need:
only autoload:         618
5 requires + autoload: 530
only 5 requires:       532
Actually having autoload called adds significant overhead to the page, but as would be expected just having it enabled but never invoked doesn’t add any overhead.


The main takeaway: if your primary concern is performance, then any file included on all of your pages should be included with a require. Any file that’s included on fewer pages can be loaded through __autoload, especially if it’s only on a few pages. Also, always use APC and never use require_once unless you absolutely have to.
Also, your situation may be different than the situations you see in performance tests, so always run your own metrics. ab is your friend.


Monday, May 16, 2011

How to use rsync for transferring files under Linux or UNIX

SkyHi @ Monday, May 16, 2011
How do you install and use rsync to synchronize files and directories from one location (or one server) to another? A common question asked by new sys admins.
rsync is a free software computer program for Unix and Linux-like systems which synchronizes files and directories from one location to another while minimizing data transfer, using delta encoding when appropriate. An important feature of rsync not found in most similar programs/protocols is that the mirroring takes place with only one transmission in each direction.

So what is unique about rsync?

It can perform differential uploads and downloads (synchronization) of files across the network, transferring only data that has changed. The rsync remote-update protocol allows rsync to transfer just the differences between two sets of files across the network connection.
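A quick local experiment makes the delta transfer visible; the file name and sizes below are arbitrary, and --no-whole-file forces the delta algorithm, which rsync otherwise skips for purely local copies:

```shell
# Mirror a ~1 MB file twice; the second pass only moves the appended bytes.
mkdir -p src dst
head -c 1048576 /dev/urandom > src/data.bin
rsync -a --no-whole-file --stats src/ dst/ | grep -E 'Literal|Matched'
echo "one extra line" >> src/data.bin
# Most of the file now arrives as "Matched data" (already present in the old
# copy), with only a small amount of "Literal data" actually transferred.
rsync -a --no-whole-file --stats src/ dst/ | grep -E 'Literal|Matched'
```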

How do I install rsync?

Use any one of the following commands to install rsync.

If you are using Debian or Ubuntu Linux, type the following command

# apt-get install rsync
$ sudo apt-get install rsync

If you are using Red Hat Enterprise Linux (RHEL), type the following command

# up2date rsync

If you are using CentOS/Fedora Core Linux, type the following command

# yum install rsync

Always use rsync over ssh

Since rsync does not provide any security while transferring data, it is recommended that you use rsync over ssh. This allows a secure remote connection. Now let us see some examples of rsync.

rsync command common options

  • --delete : delete files on the receiving side that don't exist on the sender
  • -v : Verbose (try -vv for more detailed information)
  • -e "ssh options" : specify the ssh as remote shell
  • -a : archive mode
  • -r : recurse into directories
  • -z : compress file data

Task : Copy file from a local computer to a remote server

Copy the file /www/backup.tar.gz to the home directory of user jerry on a remote server (webserver.com below is a placeholder hostname):
$ rsync -v -e ssh /www/backup.tar.gz jerry@webserver.com:~
sent 19099 bytes  received 36 bytes  1093.43 bytes/sec
total size is 19014  speedup is 0.99
Please note that symbol ~ indicate the users home directory (/home/jerry).

Task : Copy file from a remote server to a local computer

Copy the file /home/jerry/webroot.txt from a remote server (placeholder hostname webserver.com) to the local /tmp directory:
$ rsync -v -e ssh jerry@webserver.com:/home/jerry/webroot.txt /tmp

Task: Synchronize a local directory with a remote directory

$ rsync -r -a -v -e "ssh -l jerry" --delete /local/webroot webserver.com:/webroot

Task: Synchronize a remote directory with a local directory

$ rsync -r -a -v -e "ssh -l jerry" --delete webserver.com:/webroot /local/webroot

Task: Synchronize a local directory with a remote rsync server

$ rsync -r -a -v --delete rsync://webserver.com/www /home/cvs

Task: Mirror a directory between my "old" and "new" web server/ftp

You can mirror a directory between your "old" and "new" web server with the command below (assuming that ssh keys are set up for passwordless authentication; old.server.com is a placeholder for the old host's name)
$ rsync -zavrR --delete --links --rsh="ssh -l vivek" old.server.com:/home/lighttpd /home/lighttpd


Other options - rdiff and rdiff-backup

There also exists a utility called rdiff, which uses the rsync algorithm to generate delta files. A utility called rdiff-backup has been created which is capable of maintaining a backup mirror of a file or directory over the network, on another server. rdiff-backup stores incremental rdiff deltas with the backup, with which it is possible to recreate any backup point. Next time I will write about these utilities :)

rsync for Windows server/XP

Please note if you are using Windows, try any one of these programs:
  1. DeltaCopy
  2. NasBackup

Further readings

=> Read rsync man page
=> Official rsync documentation