Exploring Docker overlay networks

In the past months I have made several attempts to explore Docker overlay networks, but there were a few pieces to set up before I could really experiment and… well, let’s say that I probably approached the problem the wrong way and wasted some time along the way. Not again: this time I set aside some time and worked in an agile enough fashion to do the whole job, from start to finish. Nowadays there is little point in creating overlay networks by hand, except as a learning experience. And a learning experience with Docker and networking was exactly what I was after.

When I started exploring multi-host Docker networks, Docker was quite different from what it is now. In particular, Docker Swarm didn’t exist yet, and a certain amount of manual work was required to create an overlay network, so that containers located on different hosts could communicate.

Before Swarm, in order to set up an overlay network one needed to:

  • have at least two docker hosts to establish an overlay network;
  • have a supported key/value store available for the docker hosts to sync information;
  • configure the docker hosts to use the key/value store;
  • create an overlay network on one of the docker hosts; if everything worked well, the network would “propagate” to the other docker hosts that had registered with the key/value store;
  • create named containers on different hosts, then try to ping one from the other by name: if everything was done correctly, the containers would be able to reach each other through the overlay network.

This looks like a simple, high-level checklist. I’ll now describe the actual steps needed to get this working, leaving the details of my failures to the last section of this post.
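As a concrete illustration of the checklist above, here is roughly what those commands looked like at the time. This is a hedged sketch: host names, images and interface names are placeholders, and the exact daemon flags changed between Docker releases.

# a key/value store reachable by all hosts (Consul, via the
# then-popular progrium/consul image, running on host "kvhost")
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# each Docker daemon registers with the store (pre-Swarm flags;
# "docker daemon" instead of dockerd on older releases)
dockerd --cluster-store=consul://kvhost:8500 --cluster-advertise=eth0:2376

# create the overlay on one host; it propagates to the others
docker network create -d overlay my_overlay

# named containers on different hosts, pinging each other by name
docker run -d --name box1 --net my_overlay busybox sleep 3600   # host A
docker run -it --rm --net my_overlay busybox ping box1          # host B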

Continue reading

A quick guide to encrypting an external drive

I am guilty, I confess: I put off encrypting my hard drives for far too long. As soon as I joined Telenor Digital (or, actually, early in the process but a bit too late…) I was commanded to encrypt my data and I couldn’t delay any longer. To my surprise, the process was remarkably simple on my Debian Jessie! Here is a short checklist for your convenience.
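Spoiler for the impatient: the gist of it boils down to a handful of cryptsetup commands. A minimal sketch, assuming the external drive shows up as /dev/sdX1 (a placeholder: double-check the device name, as luksFormat destroys its contents):

cryptsetup luksFormat /dev/sdX1               # initialize LUKS and set a passphrase
cryptsetup luksOpen /dev/sdX1 crypted         # unlock as /dev/mapper/crypted
mkfs.ext4 /dev/mapper/crypted                 # create a filesystem on the mapped device
mount /dev/mapper/crypted /mnt                # use it like any other filesystem
umount /mnt && cryptsetup luksClose crypted   # lock it again when done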

Continue reading

How I configure a docker host with CFEngine

After some lengthy busy times I’ve been able to resume my work on Docker. Last time I played with some containers to create a Consul cluster using three containers running on the same docker host — something you will never want to do in production.

And the reason why I was playing with a Consul cluster on Docker is that you need a key/value store to play with overlay networks in Docker, and Consul is one of the supported stores. Besides, Consul is another technology I had wanted to play with since the first minute I heard of it.

To run an overlay network you need more than one Docker host, otherwise it’s pretty pointless. That suggested to me that it was time to automate the installation of a Docker host, so that I could put together a test lab quickly and also maintain it. And, as always, CFEngine was my friend. The following policy will not work out of the box for you, since it uses a number of libraries of mine, but I’m sure you’ll get the idea.
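To give an idea of the shape of such a policy, here is a heavily simplified sketch; the bundle and class names are mine for illustration, and it leans on the standard library’s apt_get package method rather than on my private libraries:

bundle agent docker_host {
  packages:
    debian::
      # docker-engine was the package name in docker.com's APT
      # repository at the time
      "docker-engine"
          package_policy => "add",
          package_method => apt_get ;

  processes:
    any::
      # raise a class if no docker daemon process is found
      "docker"
          restart_class => "docker_not_running" ;

  commands:
    docker_not_running::
      "/usr/sbin/service docker start"
          comment => "start the daemon if it is not running" ;
}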

Continue reading

An init system in a Docker container

Here’s another quick post about Docker; sorry again if it comes out a bit raw.

In my previous post I talked about my first experiments with Docker. There were a number of unanswered questions at first, which got answered through updates to the blog post during the following days. All but one. When talking about a containerized process that needs to log through syslog to an external server, the post concluded:

if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…
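One way to attack that problem is sketched below; this is my own illustration under stated assumptions, not necessarily the approach the post ends up with: run a small supervisor as the container’s main process, so that a syslog daemon can live next to the application and provide a local /dev/log.

# Dockerfile sketch: supervisord as PID 1, rsyslog next to the app.
# The supervisord.conf (not shown) would define one [program:...]
# entry for rsyslogd and one for the actual service.
FROM debian:jessie
RUN apt-get update && apt-get install -y supervisor rsyslog
COPY supervisord.conf /etc/supervisor/conf.d/services.conf
CMD ["/usr/bin/supervisord", "-n"]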

Continue reading

A dockerized policy hub

This is a quick post; apologies in advance if it comes out a bit raw.

I’ve been reading about Docker for a while and even attended the Docker day in Oslo. I decided it was about time to try something myself, to get a better understanding of the technology and of whether it could be useful for my use cases.

As always, I despise “hello world”-style examples, so I leaned immediately towards something closer to a real case: how hard would it be to make CFEngine’s policy hub a Docker service? After all, it’s just one process (cf-serverd) with all its data (the files in /var/cfengine/masterfiles), which looks like a perfect fit, at least for a realistic test. I went through the relevant parts of the documentation (see “References” below) and I’d say that it pretty much worked; where it didn’t, I got an understanding of why, and of how that should be fixed.

Oh, by the way, a run of docker search cfengine will tell you that I’m not the only one to have played with this 😉
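For illustration, a minimal Dockerfile for such a policy hub could look like the sketch below. It assumes you have downloaded a cfengine-community .deb and have a masterfiles directory at hand; the package source and version are deliberately left generic:

FROM debian:jessie
# install a locally downloaded CFEngine community package, then fix
# up any missing dependencies
COPY cfengine-community.deb /tmp/
RUN apt-get update && (dpkg -i /tmp/cfengine-community.deb || apt-get -f -y install)
COPY masterfiles/ /var/cfengine/masterfiles/
EXPOSE 5308
# run cf-serverd in the foreground so the container stays alive
CMD ["/var/cfengine/bin/cf-serverd", "--no-fork"]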

Continue reading

systemd unit files for CFEngine

Learning more about systemd has been on my agenda since the release of Debian 8 “Jessie”. With the new year I decided that I had procrastinated enough: I made a plan and started to study accordingly. Today it was time for action: to verify my understanding of the documentation I have read so far, I decided to put together unit files for CFEngine. It was an almost complete success, and the result is now on GitHub for everyone to enjoy. I would appreciate it if you gave them a shot and reported back.

Main goals achieved:

  1. I successfully created three service unit files, one for each of CFEngine’s daemons: cf-serverd, cf-execd and cf-monitord; the units are designed so that if any of the daemons is killed for any reason, systemd will bring it back immediately.
  2. I successfully created a target unit file that ties the three service units together. When the cfengine3 target is started, the three daemons are requested to start; when the cfengine3 target is stopped, the three daemons are stopped. The cfengine3 target completely replaces the init script functionality (a sketch follows this list).
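To give an idea of what they look like, here is a sketch along those lines for one service and for the target; the real units are on GitHub, and the paths assume a standard community install under /var/cfengine:

# cf-serverd.service (cf-execd and cf-monitord are analogous)
[Unit]
Description=CFEngine file server
After=network.target
PartOf=cfengine3.target

[Service]
ExecStart=/var/cfengine/bin/cf-serverd --no-fork
Restart=always
RestartSec=1

[Install]
WantedBy=cfengine3.target

# cfengine3.target
[Unit]
Description=CFEngine 3 daemons
Wants=cf-serverd.service cf-execd.service cf-monitord.service

[Install]
WantedBy=multi-user.target

Restart=always is what brings a killed daemon back immediately, and PartOf is what makes stopping the target stop the three services.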

Goal not achieved: I gave socket activation a shot, so that the activation of cf-serverd would be delayed until a connection was initiated to port 5308/TCP. That didn’t work properly: systemd tried to start cf-serverd, the daemon died immediately, and systemd tried again and again until it gave up. I’ll have to investigate whether cf-serverd needs to support socket activation explicitly or whether I was doing something wrong. The socket unit is not part of the distribution on GitHub, but its content is reported below. In case you spot any problem, please let me know.

Continue reading

How to make CFEngine recognize if systemd is used in Debian

CFEngine 3.6 tries to understand whether a Linux system is using systemd as its init system by looking at the contents of /proc/1/cmdline; that happens in the common bundle inventory_linux. That’s indeed a smart thing to do, but it unfortunately fails on Debian Jessie, where you have:

root@cf-test-v10:~# ls -l /sbin/init
lrwxrwxrwx 1 root root 20 May 26 06:07 /sbin/init -> /lib/systemd/systemd

the pseudo-file in /proc will still report /sbin/init, and as a result the systemd class won’t be set. This affects services promises negatively, and therefore I needed to make our policies outsmart the inventory 😉 These promises, added to a bundle of ours, did the trick:

bundle common debian_info {
  vars:
    init_is_link::
      "init_link_destination"
          string => filestat("/sbin/init","linktarget") ;

  classes:
    init_is_link::
      "systemd"
          expression => regcmp("/lib/systemd/systemd",
                               "$(init_link_destination)"),
          comment => "Check if /sbin/init links to systemd" ;

    debian::
      "init_is_link"
          expression => islink("/sbin/init"),
          comment => "Detect if init is a link" ;
}

Notice that our bundle is actually bigger; I cut out all the promises that were not relevant for this post. Enjoy!

A humble attempt to work around the leap second, 2015 edition

Update: Watch out for public servers not announcing the leap second! In the last few minutes we have observed a number of public servers (even stratum 1) that don’t announce the leap second. If the majority of your upstream servers don’t announce the leap second, your clients won’t trigger it. If that’s your case, you can use ntpd’s leapfile directive and a leap second file to provide your own servers with the correct information. Check the ntpd documentation for more details.
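In its simplest form, that boils down to a single directive in ntp.conf pointing at an up-to-date leap second file (the path below is just an example):

# in /etc/ntp.conf
leapfile /etc/ntp.leapseconds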

Update: Miroslav Lichvar has counted the public servers that are announcing the leap second on a per-country basis. You can find his stats on pastebin.


I have been running simulations for the upcoming leap second for a few weeks now. While some mysteries haven’t been solved yet, I was finally able to put together a configuration for our servers and clients that satisfies the following requirements (where do these requirements come from? That is explained further down in the article):

  1. it works on Debian Linux Squeeze, Wheezy and Jessie
  2. it keeps the Linux kernel out of the game, in order to avoid triggering unknown kernel bugs
  3. it avoids backward steps of the clock
  4. the clock converges to the right time in an acceptable amount of hours
  5. it doesn’t hog public services

What this solution doesn’t provide: this is neither Google’s leap smear nor Amazon’s (you use standard ntpd code with no changes), nor is it a fast clock slew like chrony’s. Servers and clients have evolved predictably during most of the simulations and shouldn’t diverge too much from each other, but there are conditions where you may observe offsets between them on the order of 0.1s. That should still be bearable, though, and will still save you from the headache of kernel bugs or jumps back in time. In order to work properly, this solution makes a few assumptions (a minimal configuration sketch follows the list):

  1. you have at least four internal NTP servers, synchronized with at least four public servers and/or internal specialized time sources
  2. your clients use at least four of your own internal NTP servers and no external NTP server
  3. you use unicast NTP packets (broadcast and multicast will probably work as well or even better, but they haven’t been tested in my simulations)
  4. you are using ntpd (the reference implementation) version 4.2.8p3 (earlier versions have a bug that will make our countermeasures against clock stepping ineffective)
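Before diving in, it may help to see the two directives that implement requirements 2 and 3 above. This is a minimal sketch, not the exact configuration discussed in the rest of the post, and the server names are placeholders:

# keep the kernel discipline out of the game (requirement 2)
disable kernel

# never step the clock, only slew (requirement 3): a step
# threshold of zero disables stepping altogether
tinker step 0

driftfile /var/lib/ntp/ntp.drift

# clients: at least four internal servers, no external ones
server ntp1.internal iburst
server ntp2.internal iburst
server ntp3.internal iburst
server ntp4.internal iburst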

Let’s look at the implementation on both the server and the client side: the two are pretty similar, but with a few important differences.

Continue reading

Safer package installations with APT and CFEngine

Package installation can be tricky when using configuration management tools, as the order in which package operations are performed can have an impact on the final result, sometimes a disastrous one. Months ago I went looking for a way to make apt-get a bit less proactive when resolving dependencies and removing packages, and came up with the following package method, which we now have in our library.

To use it, just add it to your policies and use apt_get_safe instead of apt_get wherever you want a more prudent approach to package installations.
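To make “prudent” concrete: translated into a plain apt-get invocation, the behavior the method aims for is roughly the following (my rendition, not the method’s literal command line):

# abort instead of removing packages to satisfy dependencies, skip
# recommended packages, and keep existing configuration files
apt-get --no-remove --no-install-recommends \
        -o Dpkg::Options::=--force-confold \
        -y install <package>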

I’m putting it here in the hope that it may be useful to everyone. I have used it successfully on Debian 5, 6, 7 and 8, and on CFEngine 3.4.4 (where I borrowed parts of the masterfiles from 3.5 and 3.6, e.g. the debian_knowledge bundle) and 3.6.x. Enjoy!

Continue reading

A small leap for a clock, a giant leap for mankind

It’s going to happen again: IERS’ Bulletin C was published yesterday, announcing that we’ll have a leap second at the end of June this year. It’s time to get ready, but this time we are luckier than in 2012: leapocalypse’s scars were deep and still hurt, and only inexperienced sysadmins or experienced idiots will deny the need to prepare a strategy to mitigate the unavoidable bugs that will be triggered by the leap second insertion. Or even by its announcement, as happened in 2012.

Which bugs? Nobody knows. We have six months left, during which you could review the source code of your OS and your software. Or maybe not, but you can at least test what happens on your systems when the leap second is announced and when it is inserted.

Like in 2012, I’ll soon start testing the behavior of both ntpd and Linux. The test framework will be the same as last time, but with the added wisdom gained fighting leapocalypse. This is what I plan to add to the picture (a couple of handy status checks follow the list):

  • more than one NTP test server: either three or four, to measure the convergence speed more accurately;
  • test under heavy load;
  • test on at least some of the software that we run;
  • test with the kernel discipline disabled to compensate for kernel bugs;
  • test on both stock ntpd and the recently released 4.2.8;
  • test on Debian Jessie, since it will surely be released before the leap second takes place;
  • prepare CFEngine policies to handle the leap second and test those as well.
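As for the status checks mentioned above, these two standard commands show what a host believes about the upcoming leap (my usual checks, not something specific to this post):

ntpq -c "rv 0 leap"    # leap=01 means ntpd has an insertion announced
ntptime                # kernel view: INS appears among the status flags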

Like last time, I’ll share my results. If you’ll be doing similar tests in your environment, I’ll be more than glad to see your findings. This is going to affect us all: the more information we share, the better our chances to overcome the bugs.

Image from http://writing.wikinut.com/img/3bguk3_7i7al8s9b/If-We-Could-Turn-Back-Time