Faking a process name


Sometimes, for testing purposes, you may need to pretend you have more processes with a given name (say, ntpd) than you actually do. In my case, I was testing a cfengine policy that should kill all processes named ntpd if there is more than one, and then start a new ntpd from a clean state.

Doing that in Perl is very easy. Not only does the $0 variable hold the name of the process: it also works the other way round. If you assign to $0, it will change the process name in the system's process table.

So, to fake an ntpd process, you can create a perl script called ntpd like this one:

#!/usr/bin/perl

$0 = q{ntpd} ;
sleep 600 ;

and you run it.

This trick allowed me to test whether my policy worked as expected (it did, by the way 🙂)

Can you do it with a one-liner? Of course you can:

perl -e '$0="ntpd" ; sleep 600'
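To double-check that the fake name actually shows up in the process table, you can start a couple of copies in the background and look for them with ps (a quick sketch; the bracket trick in the grep pattern keeps grep from matching itself):

$ perl -e '$0="ntpd" ; sleep 600' &
$ perl -e '$0="ntpd" ; sleep 600' &
$ ps -ef | grep '[n]tpd'

On Linux, assigning to $0 rewrites the argument list, so the command column should read ntpd; whether the short name (comm) used by some tools changes as well depends on your Perl version.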

Enjoy!

Say no to letter!

Nothing against the postal service here. Rather, I was trying to turn a small PDF document, four A4 pages, into a booklet that could fit onto a single A4 sheet, printed on both sides. Nothing frightful: it's a simple command chain. Yet, some commands in the chain switch from the A4 format to letter, and I happen not to like their initiative 🙂

If you need to make such a booklet from file.pdf to booklet.pdf, and want it to stay A4 throughout the process, you need to do this:

pdf2ps file.pdf - | psbook | psnup -2 -pa4 | ps2pdf -sPAPERSIZE=a4 - booklet.pdf

For some reason, psnup and ps2pdf think they should turn A4 into letter. To force A4 in psnup, you use the -p option. ps2pdf uses the same options as ghostscript (gs), hence the -sPAPERSIZE=a4 there.
Also notice the dash in pdf2ps file.pdf -: if you don't use the dash, pdf2ps will write its output to file.ps, the rest of the chain will starve, and you'll be left with a funny, empty booklet.pdf document.
ps2pdf is also a bit picky, and you need to explicitly tell it that it should read from the standard input (the dash) and write to booklet.pdf.
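If you want to be sure the booklet really stayed A4, pdfinfo (from poppler-utils; any tool that reports the page size will do) makes the check a one-liner:

$ pdfinfo booklet.pdf | grep 'Page size'

It should report something close to 595 x 842 pts (A4); if you see 612 x 792 pts instead, that's letter, and something in the chain switched format on you.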

Have fun!

Drawing puppet class relationships with graphviz

In the last few days, I got the impression that my puppet class hierarchy was growing a bit out of control. In particular, it looked like the "glue" classes meant to simplify things were actually causing more harm than good. How did I find out? Well, it was getting too complicated to understand what included what, and walking the classes by hand quickly became a terrible task.

So, I decided to go for something that could do the work for me: go through my modules, get the include statements, and draw the relationships. Luckily, I remembered that graphviz was designed explicitly to ease that last task.

Quickly graphing RRD data

Still fiddling with RRD, and I'm almost at the end of the tunnel 🙂 I finally collected a significant amount of data, and I have scripts to aggregate different sources. What I was missing was a quick way to generate graphs, so that I could visually check if my aggregated data "looks" OK.

As usual, the syntax of rrdtool graph is quite verbose and cryptic, not exactly what you hope for when all you need is a quick, one-shot graph of a few RRD files.

As always, Perl comes to the rescue, this time with RRD::Simple. By default, RRD::Simple is able to generate nice graphs with pre-loaded color schemes (if you suck at choosing colors like I do, this is a really appreciable feature). It also has a set of pre-defined graphs it can generate, but since it accepts native RRD options (besides its own set), it's easy to bend it to your needs, and generating a graph takes just a one-liner:

perl -MRRD::Simple -e 'RRD::Simple->graph("aggr_root_storage.rrd", destination => "/tmp", sources => [ qw{perc_used worst_perc_free} ], width => 1024, height => 768, start => 1288566000, end => 1291158000, periods => [ "daily" ] )'  

Or, reindented:

perl -MRRD::Simple -e \
  'RRD::Simple->graph("aggr_root_storage.rrd", 
    destination => "/tmp", 
    sources => [ qw{perc_used worst_perc_free} ], 
    width => 1024, height => 768, 
    start => 1288566000, end => 1291158000, 
    periods => [ "daily" ] )'

The periods option, in this case, has no purpose other than to generate only one graph (otherwise you would get many graphs, all the same; why? Go and find out yourself, if you really care 😉)
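By the way, if you don't remember which data sources an RRD file contains (you need their names for the sources parameter), rrdtool info lists them; a quick grep like this one, run against the file from the example above, does the job:

$ rrdtool info aggr_root_storage.rrd | grep '^ds\['

Each ds[name] line corresponds to a data source you can pass in the sources list.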

And what about plotting a collection of RRDs? It could be something like:

$ for FILE in aggr*.rrd ; do export FILE ; perl -MRRD::Simple -e 'RRD::Simple->graph($ENV{FILE}, destination => "/tmp", width => 1024, height => 768, start => 1288566000, end => 1291158000, periods => [ "daily" ] )' ; done   

or, clearer:

$ for FILE in aggr*.rrd ; 
do 
  export FILE ; 
  perl -MRRD::Simple -e \
    'RRD::Simple->graph($ENV{FILE}, 
      destination => "/tmp", 
      width => 1024, height => 768, 
      start => 1288566000, end => 1291158000, 
      periods => [ "daily" ] )' ; 
done

Renaming digital photos, time-wise

It happens sometimes… or at least it happened to me, and maybe it happened to you too 🙂 Anyway: you get a number of digital photos from different people, and of course their filenames don't match their chronological order. Is it possible to rename the files so that sorting by name puts them back in order? Well, it is!

First, you need a small command line utility called exif. I assume you have an idea of what EXIF is.

With this command it is really easy to extract EXIF information from a digital photo, for example the date the photo was taken:

$ exif -t 0x9003 LD2K10_005.JPG
EXIF entry 'Date and Time (Original)' (0x9003, 'DateTimeOriginal') exists in IFD 'EXIF':
Tag: 0x9003 ('DateTimeOriginal')
  Format: 2 ('Ascii')
  Components: 20
  Size: 20
  Value: 2010:10:23 08:29:01

We have all the information we need, and maybe more. Now we need to take the Value field, mangle it into a more "filename-friendly" format, and rename the file. It's not that hard:

for FILE in *.JPG
do
  NEW=$(exif -t 0x9003 "$FILE" | awk '/Value/' | sed -e 's/^  Value: //' -e 's/://g' -e 's/ /-/')
  NEW="$NEW-$FILE"
  mv -v "$FILE" "$NEW"
done
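For the sample photo above, the mv -v line should print something like this (the exact quoting depends on your coreutils version):

LD2K10_005.JPG -> 20101023-082901-LD2K10_005.JPG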

The snippet above actually fits a one-liner:

for FILE in *.JPG ; do NEW=$(exif -t 0x9003 "$FILE" | awk '/Value/' | sed -e 's/^  Value: //' -e 's/://g' -e 's/ /-/') ; NEW="$NEW-$FILE" ; mv -v "$FILE" "$NEW" ; done

Good luck!

Extracting information from iptables/fwbuilder logs

I use Firewall Builder for fast prototyping of my iptables configuration. When a firewall rule matches and logging is enabled for that rule, a line like this is added to /var/log/messages:

Sep  1 09:48:43 server kernel: [9490931.734574] RULE 12 -- DENY IN=eth0 OUT= MAC=a1:b2:c3:d4:e5:f6:00:11:22:33:44:55:66:77 SRC=1.2.3.4 DST=5.6.7.8 LEN=96 TOS=0x00 PREC=0x00 TTL=58 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=27754 SEQ=0 

(sensitive information has been forged, of course 🙂)

Depending on the protocol, the same field is not always in the same position: the destination port (DPT), for example, could be in position 23 or 24. So if you want to list, say, the input interface, source address, destination address, protocol and destination port, you need smarter matching. This one-liner worked for me:

perl -alne 'if (m{RULE 12}) { my %field ; foreach $token (@F) { next unless $token =~ /=/ ; my ($k,$v) = split(/=/,$token,2) ; $field{$k} = $v } ; print qq{ @field{ qw{IN SRC DST PROTO DPT} } } }' /var/log/messages | sort | uniq -c | sort -nr   

The perl part means: if the line matches "RULE 12", I initialize the %field hash. Then I go through the tokens, select those that contain a "=", split on the equals sign, and fill the hash. Finally, when %field is ready, I print the interesting fields.

I don't need to worry about splitting the line and saving the "tokens", because perl's autosplit (-a) takes care of it. And I don't need to bother about printing newlines, because -l takes care of that.

And the sort/uniq/sort dance is the old trick to count the occurrences of the same line in the output.
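Just to give an idea, a line like the sample above would show up in the final output looking more or less like this (the count is of course made up, and DPT is empty because ICMP has no destination port):

      5  eth0 1.2.3.4 5.6.7.8 ICMP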

How to count the occurrences of “something” in a file

This is something we need to do from time to time: filter some useful information from a file (e.g., a log file) and count the instances of each "object". While filtering out the relevant information may be tricky and there is no general rule for it, counting the instances and sorting them is pretty simple.

Let's call "filter" the program, chain of pipes, or whatever you used to extract the information you are longing for. Then, the count and sort process is just a chain of three commands:

filter | sort | uniq -c | sort -nr

The first sort is used to group equal instances together; then this is passed to uniq -c, which will "compress" each group to a single line, prefixed by the number of times that instance appeared in the output; then we feed all this to sort again, this time with -n (numeric sort) and -r (reverse sort, so that we have the most frequent instance listed first).
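For example, to see which client addresses hit a web server most often, the filter could be as simple as an awk that extracts the first field of the access log (the path and log format here are just an example, adapt them to your setup):

awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -nr | head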

Nice, isn't it? 😉