Suggestion for next year's budget

While trying to recover some backlog in the SAGE-members mailing list, I came across this sticker. I would have laughed at it anyway, but the fact that I am currently fiddling with Cacti charts and data actually doubled the amount of laughter 🙂

Don't you think that any Sysadmin group in the world should have a few hundred of these stickers in stock?

Renaming digital photos, time-wise

It happens sometimes… or at least, it happened to me, and maybe it happened to you too 🙂 Anyway, it happens that you get a number of digital photos from different people, and of course their filenames don't match their chronological order. Is it possible to rename the files so that sorting them by name also sorts them chronologically? Well, it is!

First, you need a small command line utility called exif. I assume you have an idea of what EXIF is.

With this command it is really easy to extract EXIF information from a digital photo, for example, the date you took the photo:

$ exif -t 0x9003 LD2K10_005.JPG
EXIF entry 'Data e ora (dati originali)' (0x9003, 'DateTimeOriginal') exists in IFD 'EXIF':
Tag: 0x9003 ('DateTimeOriginal')
  Format: 2 ('Ascii')
  Components: 20
  Size: 20
  Value: 2010:10:23 08:29:01

We have all the information we need, and maybe more. Now we need to get the information in the Value field, mangle it to a more "filename-friendly" format, and rename the file. And it's not that hard:

for FILE in *.JPG
do
  # Extract DateTimeOriginal, keep the "Value:" line, strip the label,
  # drop the colons and turn the date/time separator into a dash.
  # Quoting $FILE keeps filenames with spaces from breaking the loop.
  NEW=$(exif -t 0x9003 "$FILE" | awk '/Value/' | sed -e 's/^  Value: //' -e 's/://g' -e 's/ /-/')
  NEW="$NEW-$FILE"
  mv -v "$FILE" "$NEW"
done

The snippet above actually fits a one-liner:

for FILE in *.JPG ; do NEW=$(exif -t 0x9003 "$FILE" | awk '/Value/' | sed -e 's/^  Value: //' -e 's/://g' -e 's/ /-/') ; NEW="$NEW-$FILE" ; mv -v "$FILE" "$NEW" ; done
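If you want to check what that sed chain does before letting it loose on your photos, you can feed it a sample Value line by hand:

```shell
# A sample "Value:" line as printed by exif; the sed chain strips the
# label, removes the colons and turns the date/time separator into a dash.
stamp=$(echo '  Value: 2010:10:23 08:29:01' \
  | sed -e 's/^  Value: //' -e 's/://g' -e 's/ /-/')
echo "$stamp"
```

This prints 20101023-082901, which is exactly the timestamp prefix that ends up in the new filename.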

Good luck!

Using a makefile to automate a puppet installation

The first command, make backups, will back up some files. It is important to run it only once: a second run would overwrite the pristine backups with the already-modified files.

The second command, a plain make, will go through the steps needed for the installation: if rsync is not installed, it will install rsync; if the needed files are not present in the /puppet hierarchy, it will download them using rsync; if puppet is not installed, it will install it; then it will run puppetd with a bootstrap configuration file and… fail!

Yes, fail. Because you need to sign the host certificate first, don't you? Once you've signed it on the master node, you'll run make config. If everything is properly configured, this will set up your host according to the manifests.

The last command, make test, will just run puppetd --test to ensure that everything is properly set up.

And that's all. Installing one host this way takes about 5 minutes, downloads included. And using cssh I could even install puppet on several machines in parallel.

Am I missing something? Oh, yes, the real Makefile 🙂

PUPPETMASTER=i.am.your.master.com
SYNCSERVER=$(PUPPETMASTER)

all: install config

install: /usr/bin/puppetd

config:
        puppetd --config /puppet/common/files/bootstrap/client.conf --server $(PUPPETMASTER) --test

test:
        puppetd --test

clean:
        -rm -rf /puppet
        -apt-get remove puppet facter
        -apt-get autoremove

backups:
        cp /etc/apt/sources.list{,.dist}

/usr/bin/rsync:
        apt-get install rsync

/puppet/usa/files/sources.list: sync-puppet-conf

/puppet/volatile/keys/apt: sync-puppet-conf

update: /puppet/usa/files/sources.list /puppet/volatile/keys/apt
        cp /puppet/usa/files/sources.list /etc/apt/sources.list
        apt-key add /puppet/volatile/keys/apt 
        apt-get update
        apt-key update

sync-puppet-conf: /usr/bin/rsync
        rsync -zav $(SYNCSERVER)::PuppetConf /puppet

/usr/bin/puppetd: update
        apt-get install puppet
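By the way, the reason all of this is idempotent is that most targets are real files: make runs a recipe only when its target file is missing. Here is a minimal, self-contained sketch of the same pattern (a throwaway flag file in a temporary directory, nothing to do with puppet or apt):

```shell
# A file target: the recipe creates the very file it is named after,
# so it runs only once -- just like the /usr/bin/rsync and
# /usr/bin/puppetd targets in the Makefile above.
demo=$(mktemp -d)
printf 'installed.flag:\n\t@echo installing...\n\t@touch installed.flag\n' \
  > "$demo/Makefile"

first=$(cd "$demo" && make installed.flag)    # recipe runs, prints "installing..."
second=$(cd "$demo" && make installed.flag)   # file exists, make does nothing
echo "$first"
echo "$second"
```

The second run only reports that the target is up to date, which is what makes it safe to re-run make on a half-installed host.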

Italian Linux Day 2010

It's going to happen again for the tenth consecutive year. It's the Italian Linux Day, and I am going to present at the event in Cagliari.

I talked with a few colleagues about what the Italian Linux Day is, and it seems it's a kind of event that is peculiar to Italy. That's why I decided to spend half an hour writing about it on this blog and explaining what it is. If you live in a country other than Italy, feel free to copy the idea and spread the word.

LUGs in Italy, and probably in all other countries, too, are more like a galaxy than like a phalanx. They don't coordinate, they don't act all together like an army. They do more or less the same things because they revolve around the same principle: share and spread knowledge about Linux and about Free Software.

What happened in 2001 is that the Italian Linux Society, and Davide Cerri in particular, realized that fact, and proposed the only kind of event that could unite all Italian LUGs together: the Linux Day.

How does it work? That's pretty simple indeed. …

It must be simpler than this…

I sat down scratching my head… that ntp client was syncing perfectly in unicast, and didn't create any association once configured in multicast. "Dah, the same old problem," I told myself, "it's not getting the packets; setting a multicast route will fix it."

So I prepared the usual debugging set: one window running tcpdump 'dst port ntp', one window on the client running watch ntpq -c pe -c as, another one with tail -f /var/log/syslog | grep ntp and a free shell window. To my surprise, as soon as I fired up tcpdump, multicast ntp packets showed up. "What the…?!" I said. …

My take on RRDtool

I confess. Every time I found myself in need of some sort of round-robin database, I took a peek at RRDtool, but it always looked too complicated, so I gave up and crafted my own task-specific tools.

But this time was different: there would be large amounts of data to report on, and a fully hand-crafted solution was not an option. So my RRDtool experience started, and I had to make sense of quite a few things. It all seems much more understandable now, and I am glad I finally tried it (with much more to do, of course…). …

Extracting information from iptables/fwbuilder logs

I use Firewall Builder for fast prototyping of my iptables configuration. When a firewall rule matches and logging for that rule is enabled, one line like this is added to /var/log/messages:

Sep  1 09:48:43 server kernel: [9490931.734574] RULE 12 -- DENY IN=eth0 OUT= MAC=a1:b2:c3:d4:e5:f6:00:11:22:33:44:55:66:77 SRC=1.2.3.4 DST=5.6.7.8 LEN=96 TOS=0x00 PREC=0x00 TTL=58 ID=0 DF PROTO=ICMP TYPE=8 CODE=0 ID=27754 SEQ=0 

(sensitive information has been forged, of course 🙂)

Depending on the protocol, the same field is not always in the same position; e.g., the destination port (DPT) could be in position 23 or 24. So if you want to list, say, the input interface, source address, destination address, protocol and destination port, you need smarter matching. This one-liner worked for me:

perl -alne 'if (m{RULE 12}) { my %field ; foreach $token (@F) { next unless $token =~ /=/ ; my ($k,$v) = split(/=/,$token,2) ; $field{$k} = $v } ; print qq{ @field{ qw{IN SRC DST PROTO DPT} } } }' /var/log/messages | sort | uniq -c | sort -nr   

That perl part means: if the line matches "RULE 12", I initialize the %field hash. Then I go through the tokens, select those that contain a "=", split on the equal sign and fill the hash. Finally, when %field is ready, I print the interesting fields.

I don't need to worry about splitting the line and saving the "tokens", because perl's autosplit (-a) takes care of that. And I don't need to bother about printing newlines, because -l takes care of it.

And the sort/uniq/sort dance is the old trick to count the occurrences of the same line in the output.

How to count the occurrences of “something” in a file

This is something we need to do from time to time: filter some useful information from a file (e.g., a log file) and count the instances of each "object". While filtering out the relevant information may be tricky and there is no general rule for it, counting the instances and sorting them is pretty simple.

Let's call "filter" the program, chain of pipes, or whatever you used to extract the information you are longing for. Then, the count and sort process is just a chain of three commands:

filter | sort | uniq -c | sort -nr

The first sort is used to group equal instances together; then this is passed to uniq -c, which will "compress" each group to a single line, prefixed by the number of times that instance appeared in the output; then we feed all this to sort again, this time with -n (numeric sort) and -r (reverse sort, so that we have the most frequent instance listed first).
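A toy run shows the mechanics, assuming the "filter" already produced one item per line:

```shell
# Six lines, three distinct values: the pipeline groups them, counts
# each group and ranks the groups by frequency, most frequent first.
counts=$(printf 'b\na\nb\nc\nb\na\n' | sort | uniq -c | sort -nr \
  | awk '{print $1, $2}')   # awk squeezes uniq's count padding
echo "$counts"
```

This prints 3 b, 2 a and 1 c, one per line.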

Nice, isn't it? 😉