A humble attempt to work around the leap second

Note: this article is now obsolete; please have a look at A humble attempt to work around the leap second, 2015 edition. Thanks.


Some background
Back in March, I talked about the experiments I was conducting to manage the leap second coming at the end of June 30th, 2012. Despite the fact that the leap second was first introduced in the early 70s, and that no negative leap second has ever occurred to date, a number of applications and systems still rely on some wrong assumptions, namely:

  • every minute always lasts 60 seconds
  • time read from the system clock is monotonic
  • two consecutive reads of a UNIX timestamp, happening at least one second after the other, will result in the second timestamp being bigger than the first one (rephrase of the previous point in the UNIX/POSIX world)
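That last assumption normally holds, and can be checked with a quick shell snippet like the following (a sketch; the messages are mine). On an unpatched kernel, a check running across the leap second insertion could take the second branch:

```shell
# Read the UNIX clock twice, at least one second apart: the naive
# assumption is that the second read is always strictly greater.
t1=$(date +%s)
sleep 1
t2=$(date +%s)
if [ "$t2" -gt "$t1" ]; then
  echo "clock moved forward"
else
  echo "clock stepped back or stood still"
fi
```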

It is sad that, exactly forty years after the first leap second, systems and applications still rely on these assumptions and can crash badly when, during a leap second insertion, they find themselves in a situation they didn't expect.

David Mills, the inventor of NTP, in his document “The NTP Timescale and Leap Seconds” suggests how it should be implemented on all systems that always assume 60-second minutes. If that were correctly implemented in, e.g., the Linux kernel, we'd have no need to work around any issue, as time would still be monotonic during the leap second transition. Unfortunately, that is not the case, and Linux will suddenly step back one second when the clock reaches July 1st, 2012, 00:00:00.

The procedure described below will help you avoid the step, and recover from the excess second the clock will find itself carrying compared to its time sources. However, this procedure is far from ideal in a number of situations, and if you decide to apply it on your systems you do so at your own risk. My advice is: go for this procedure only where the risk of a system crash due to a leap second is higher than the risk of misbehavior due to two systems having an offset of some tenths of a second; and do that only after some testing. … Continue reading

Up-to-date information and tutorials about Perl

I don't like link-collection pages, but I can make an exception for a good reason. And I have one.

A well-known star in the Perl community is encouraging Perlers all around the world to give visibility to a number of interesting tutorials and news sites. Too often, obsolete crap pops out of Google searches about Perl, and it's time to give the search engines a hint that there is something newer and better around.

So, for your and my pleasure, here you are:

Perl Tutorials

Perl news

Faking a process name


Sometimes, for testing purposes, you may need to pretend you have more processes named, e.g., ntpd than you actually do. In my case, I was testing a cfengine policy that should kill all processes named ntpd if they are more than one, and then start a new ntpd from a clean state.

Doing that in Perl is very easy. In fact, not only does the $0 variable hold the name of the process, it also works the other way round: if you assign to $0, it will change the process name in the system's process table.

So, to fake an ntpd process, you can create a perl script called ntpd like this one:

#!/usr/bin/perl

$0 = q{ntpd} ;
sleep 600 ;

and you run it.

This trick allowed me to test whether my policy worked as expected (it did, by the way 🙂)

Can you do it with a one-liner? Of course you can:

perl -e '$0="ntpd" ; sleep 600'

Enjoy!

Professional achievements and code readability: a true story

Awards may not always arrive on time, but sometimes they do come. And when they come from your co-workers, who often understand your job better than your boss does, that's even better. It happened to me today, and I think it is a story worth telling.

At one of my previous employers, we needed to mass-load accounts into a custom mail system, which used a relational database to handle the userbase (why not an LDAP directory? Don't ask!). Customers would send us two CSV files (or Excel files which we would convert into CSV) with the account information, and I wrote a Perl program (and a handful of classes) that would do just that: read the two files, create the accounts in the DB, and create a shell script to fix the filesystem part. … Continue reading

Restoring Xen’s iptables rules

Those of you who use Xen may have noticed that, by default, Xen adds some iptables rules when a VM starts, so as to ensure that certain packets are actually forwarded to the virtual machines. If, for any reason, those rules are wiped away, it would be nice to be able to recover them, wouldn't it?

I found out it's quite easy. The following script will just echo the iptables commands, so you can safely test it on a running dom0. If it does something you actually need, just wipe those echos away!

#!/bin/bash

xm list | perl -alne 'next if not $F[1] > 0 ; print "@F[0,1]"' | while read VM ID 
do
  xm network-list $ID | perl -alne 'next if not $F[0] =~ m{^\d+$} ; print $F[0]' | while read IFID
  do
    VIF="vif$ID.$IFID"
    echo iptables -A FORWARD -m physdev --physdev-in $VIF -s $VM -j ACCEPT
    echo iptables -A FORWARD -m physdev --physdev-in $VIF -p udp --sport bootpc --dport bootps -j ACCEPT
  done
done

I am using Perl here because I know it better than awk, but I am sure awk could accomplish the same task just as well.

Bug in AppConfig 1.56?

For example, suppose you define three scalar variables, named alpha, beta and gamma, and suppose that, when you set them, you first assign a variable named delta that was never defined. That is, you have this configuration file:

delta = D
alpha = A
beta  = B
gamma = G

Depending on the value of CREATE and PEDANTIC, you'll have different results, namely:

CREATE           PEDANTIC    alpha   beta    gamma   delta
not set          not set     A       B       G       undef
not set          set to 0    A       B       G       undef
not set          set to 1    undef   undef   undef   undef
set to ^[a-z_]   not set     A       B       G       D
set to ^[a-z_]   set to 0    A       B       G       D
set to ^[a-z_]   set to 1    A       B       G       D

But if you run your script setting stuff in the same way, e.g.:

script.pl -delta D -alpha A -beta B -gamma G

results are completely different, namely:

CREATE           PEDANTIC    alpha   beta    gamma   delta
not set          not set     undef   undef   undef   undef
not set          set to 0    undef   undef   undef   undef
not set          set to 1    undef   undef   undef   undef
set to ^[a-z_]   not set     undef   undef   undef   undef
set to ^[a-z_]   set to 0    undef   undef   undef   undef
set to ^[a-z_]   set to 1    undef   undef   undef   undef

The only combination that behaves the same in both cases is CREATE not set with PEDANTIC set to 1, where all four variables are undef (those were the values originally marked in yellow). Is this the intended behaviour? I don't know; I hope I'll get an authoritative response from the module authors.

Drawing puppet class relationships with graphviz

In the last few days, I got the impression that my Puppet class hierarchy was growing a bit out of control. In particular, it looked like "glue" classes meant to simplify things were actually causing more harm than benefit. How did I find out? Well, it was getting too complicated to understand what included what, and walking the classes by hand quickly became a terrible task.

So, I decided to go for something that could do the work for me: go through my modules, collect the include statements, and draw the relationships. Luckily, I remembered graphviz was designed explicitly to ease that last task. … Continue reading

New module on CPAN: Log::Stderr

I published my fourth module on CPAN today: Log::Stderr. As the name says, it will write timestamped log messages to STDERR.

Like the other three I published back in 2004, it's not rocket science. But it is something I was using to help me debug small scripts, and I found it very convenient, so I am publishing it in the hope that it will be useful for someone else. Of course, this module won't be of much use if you have more than basic needs, but that's not a problem: there are so many good modules in the Log:: hierarchy! And if you are in doubt about which one to use, well, visit "the Monastery" and ask the monks 😉

Enjoy!

Quickly graphing RRD data

Still fiddling with RRD, and I'm about at the end of the tunnel 🙂 I finally collected a significant amount of data, and I have scripts to aggregate different sources. What I was missing was a quick way to generate graphs, so that I could visually check if my aggregated data "looks" OK.

As usual, the syntax of rrdtool graph is quite verbose and cryptic, not exactly what you hope for when all you need is a quick one-shot graph of some RRD files.

As always, Perl comes to the rescue, this time with RRD::Simple. RRD::Simple is able to generate nice graphs by default, with pre-loaded color schemes (if you suck at choosing colors like me, this is a really appreciable feature). It has a set of pre-defined graphs it can generate as well, but since it accepts native RRD options (besides its own set), it's actually easy to bend it to your needs, and generating a graph just takes a one-liner:

perl -MRRD::Simple -e 'RRD::Simple->graph("aggr_root_storage.rrd", destination => "/tmp", sources => [ qw{perc_used worst_perc_free} ], width => 1024, height => 768, start => 1288566000, end => 1291158000, periods => [ "daily" ] )'  

Or, reindented:

perl -MRRD::Simple -e 
  'RRD::Simple->graph("aggr_root_storage.rrd", 
    destination => "/tmp", 
    sources => [ qw{perc_used worst_perc_free} ], 
    width => 1024, height => 768, 
    start => 1288566000, end => 1291158000, 
    periods => [ "daily" ] )'

The periods option, in this case, has no purpose other than to generate a single graph (otherwise you would get many identical graphs; why? Go and find out yourself, if you really care 😉)

And what about plotting a collection of RRDs? It could be something like:

$ for FILE in aggr*.rrd ; do export FILE ; perl -MRRD::Simple -e 'RRD::Simple->graph($ENV{FILE}, destination => "/tmp", width => 1024, height => 768, start => 1288566000, end => 1291158000, periods => [ "daily" ] )' ; done   

or, clearer:

$ for FILE in aggr*.rrd ; 
do 
  export FILE ; 
  perl -MRRD::Simple -e 
    'RRD::Simple->graph($ENV{FILE}, 
      destination => "/tmp", 
      width => 1024, height => 768, 
      start => 1288566000, end => 1291158000, 
      periods => [ "daily" ] )' ; 
done