To this day, 90% of the programmers I talk to have never used awk. Knowing 10% of awk's already small syntax, which you can pick up in just a few minutes, will dramatically increase your ability to quickly manipulate data in text files. Below I'll teach you the most useful stuff - not the "fundamentals", but the 5 minutes worth of practical stuff that will get you most of what I think is interesting in this little language.
Awk is a fun little programming language. It is designed for processing input strings. A (different) prof once asked my networking class to implement code that would take a spec for an RPC service and generate stubs for the client and the server. This professor made the mistake of telling us we could implement it in any language. I decided to write the generator in Awk, mostly as an excuse to learn more Awk. Surprisingly to me, the code ended up much shorter and much simpler than it would have been in any other language I'd ever used (Python, C++, Java, ...). There is enough to learn about Awk to fill half a book, and I've read that book, but you're unlikely to be writing a full-fledged spec parser in Awk. Instead, you just want to do things like find all of your log lines that come from IP addresses whose components sum up to 666, for kicks and grins. Read on!
For our examples, assume we have a little file (logs.txt) that looks like the one below. If it wraps in your browser, note that it is just two lines of logs, each starting with an IP address.
07.46.199.184 [28/Sep/2010:04:08:20] "GET /robots.txt HTTP/1.1" 200 0 "msnbot"
123.125.71.19 [28/Sep/2010:04:20:11] "GET / HTTP/1.1" 304 - "Baiduspider"
These are just two log records generated by Apache, slightly simplified, showing Bing and Baidu wandering around on my site yesterday.
Awk works like anything else on the command line (e.g., grep). It reads from stdin and writes to stdout, so it's easy to pipe stuff in and out of it. The command-line syntax you care about is just the command awk followed by a string that contains your program.
awk '{print $0}'
Most Awk programs will start with a "{" and end with a "}". Everything in between gets run once on each line of input. Most awk programs will print something. The program above prints the entire line that it just read; print appends a newline for free. $0 is the entire line. So this program is an identity operation - it copies the input to the output without changing it.
Awk parses the line into fields for you automatically, using any whitespace (space, tab) as a delimiter and merging consecutive delimiters. Those fields are available to you as the variables $1, $2, $3, etc.
echo 'this is a test' | awk '{print $3}' # prints 'a'
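Because runs of whitespace are merged, extra spaces don't create empty fields:

echo 'too   many    spaces' | awk '{print $2}' # prints 'many'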
awk '{print $1}' logs.txt
Output:
07.46.199.184
123.125.71.19
Easy so far, and already useful. Sometimes, though, I need to count from the end of the line instead. The special variable NF contains the number of fields in the current line. I can print the last field by printing the field $NF, or I can manipulate that value to identify a field based on its position from the end. I can also print multiple values simultaneously in the same print statement.
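For example, printing NF by itself just shows the field count:

echo 'this is a test' | awk '{print NF}' # prints 4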
echo 'this is a test' | awk '{print $NF}' # prints "test"
awk '{print $1, $(NF-2)}' logs.txt
Output:
07.46.199.184 200
123.125.71.19 304
More progress - you can see how, in moments, you could strip this log file down to just the fields you are interested in. Another cool variable is NR, which holds the number of the row currently being processed. While demonstrating NR, let me also show you how to format a little bit of output using print. Commas between arguments in a print statement put spaces between them, but if I leave out the comma, no space is inserted.
awk '{print NR ") " $1 " -> " $(NF-2)}' logs.txt
Output:
1) 07.46.199.184 -> 200
2) 123.125.71.19 -> 304
Powerful, but nothing hard yet, I hope. By the way, there is also a printf function that works much the way you'd expect if you prefer that form of formatting. For example, the report above could be written this way instead:
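awk '{printf "%d) %s -> %s\n", NR, $1, $(NF-2)}' logs.txt

Unlike print, printf does not append the newline for you, hence the \n. Now, not all files have fields that are separated with whitespace. Let's look at the date field: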
$ awk '{print $2}' logs.txt
Output:
[28/Sep/2010:04:08:20]
[28/Sep/2010:04:20:11]
The date field is separated by "/" and ":" characters. I could do the next step within a single awk program, but I want to teach you simple pieces that you can string together using more familiar Unix piping, because a small syntax is quicker to pick up. What I'm going to do is pipe the output of the above command through another awk program that splits on the colon. To do this, my second program needs two {} components. I won't go into exactly what they mean; I just want to show you how to use them to split on a different delimiter.
$ awk '{print $2}' logs.txt | awk 'BEGIN{FS=":"}{print $1}'
Output:
[28/Sep/2010
[28/Sep/2010
I just specified that I wanted a different FS (field separator) of ":" and that I then wanted to print the first field. No more time, just dates! As an aside, the standard -F flag sets the field separator without needing a BEGIN block:
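$ awk '{print $2}' logs.txt | awk -F: '{print $1}'

Either form works. The simplest way to get rid of that leading [ character is with sed, which you are likely already familiar with: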
$ awk '{print $2}' logs.txt | awk 'BEGIN{FS=":"}{print $1}' | sed 's/\[//'
Output:
28/Sep/2010
28/Sep/2010
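The exact same trick splits on the "/" character too; for instance, tacking one more awk stage onto the pipeline pulls out just the year:

$ awk '{print $2}' logs.txt | awk 'BEGIN{FS=":"}{print $1}' | sed 's/\[//' | awk 'BEGIN{FS="/"}{print $3}'
Output:
2010
2010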
But I think you get the point. Next, let's learn just a tiny bit of logic. If I want to return only the 200-status lines, I could use grep, but I might accidentally match an IP address that contains 200, or a date from the year 2000. I could first grab the status field with awk and then grep, but then I'd lose the rest of the line's context. Awk supports basic if statements. Let's see how I might use one:
$ awk '{if ($(NF-2) == "200") {print $0}}' logs.txt
Output:
07.46.199.184 [28/Sep/2010:04:08:20] "GET /robots.txt HTTP/1.1" 200 0 "msnbot"
There we go, returning only the lines (in this case just one) with a 200 status. The if syntax should be very familiar and require no explanation. Incidentally, awk also lets you write a bare condition (a "pattern") in front of the action, which does the same thing more tersely:
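$ awk '$(NF-2) == "200" {print $0}' logs.txt

In fact, printing the line is the default action, so $ awk '$(NF-2) == "200"' logs.txt works too. Let me finish up by showing you one simple example of awk code that maintains state across multiple lines. Let's say I want to sum up all of the status fields in this file. I can't think of a reason I'd want to do that for statuses in a log file, but it makes a lot of sense in other cases, like summing up the total bytes returned across all of the logs in a day. To do this, I just create a variable, which will automatically persist across lines (awk variables conveniently start out as zero):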
$ awk '{a+=$(NF-2); print "Total so far:", a}' logs.txt
Output:
Total so far: 200
Total so far: 504
Nothing to it. Obviously, in most cases I'm not interested in the cumulative values, only in the final value. I can of course just use tail -n1, but I can also print stuff after processing the final line using an END clause:
$ awk '{a+=$(NF-2)}END{print "Total:", a}' logs.txt
Output:
Total: 504
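NR is still available inside END (at that point it holds the total number of lines read), so you can combine the two; here's a quick sketch that computes the average - pointless for status codes, but handy for, say, bytes per request:

$ awk '{a+=$(NF-2)}END{print "Average:", a/NR}' logs.txt
Output:
Average: 252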
If you want to read more about awk, there are several good books and plenty of online references. You can learn just about everything there is to know about awk in a day, with some time to spare. Getting used to it is a bit more of a challenge, as it really is a slightly different way to code - you are essentially writing only the inner part of a loop that awk runs over every line for you. Come to think of it, this is a lot like how MapReduce feels, which is also initially disorienting.
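If it helps, here is a rough shell analogy for the loop awk supplies on your behalf (a conceptual sketch, not how awk actually works internally):

# roughly what awk does with '{print $0}':
while read -r line; do
  # your { ... } block runs here, with $1, $2, ... already split out
  echo "$line"
done < logs.txt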
I hope some of that was useful. If you found it to be so, leave a comment and let me know - I enjoy the feedback, if nothing else.
Update Sep 30, 2010: There are some great comments elsewhere in addition to here. I wish they would end up in one place, but the best I can do currently is to link to them:
Update Jan 2, 2011: This post caught the interest of Hacker Monthly who republished it in issue #8. You can grab the pdf version of this article courtesy of Lim Cheng Soon, Hacker News' Founder.