Everyone gets so many unsolicited commercial emails these days that, at best, you just become blind to them. Sometimes they are clearly, expressly commercial. Other times they try to slip past your attention and your spam checker by disguising themselves as legitimate emails. I have a little story about that.
A couple of weeks ago I got yet another spammy mail. It was evidently sent through a mass mailing and, as such, also included an unsubscribe link. However, the guy was trying to legitimize his spam by saying that he had approached me specifically because a colleague had referred me to him; in addition, I felt that some keywords had been added to his message only to make it sound “prettier” or more legitimate.
I usually don’t spend time on spammers, but when I do, I try to do it well. On this occasion I had a little time to spend on it, and I did.
In the past months I had made several attempts to explore Docker overlay networks, but there were a few pieces to set up before I could really experiment and… well, let’s say that I probably approached the problem the wrong way and wasted some time along the way. Not again: this time I set aside some time and worked in an agile enough fashion to do the whole job, from start to finish. Nowadays there is little point in creating overlay networks by hand, except that it’s still a good learning experience, and a learning experience with Docker and networking was exactly what I was after.
When I started exploring multi-host Docker networks, Docker was quite different from what it is now. In particular, Docker Swarm didn’t exist yet, and a certain amount of manual work was required to create an overlay network, so that containers located on different hosts could communicate.
Before Swarm, in order to set up an overlay network one needed to:
- have at least two docker hosts to establish an overlay network;
- have a supported key/value store available for the docker hosts to sync information;
- configure the docker hosts to use the key/value store;
- create an overlay network on one of the docker hosts; if everything worked well, the network would “propagate” to the other docker hosts that had registered with the key/value store;
- create named containers on different hosts, then try to ping one from the other using the names: if everything was done correctly, the containers would be able to ping each other through the overlay network (see the sketch right after this list).
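In terms of actual commands, a minimal sketch of those steps would look something like this. The Consul address, the interface name and the images are placeholders for illustration; the daemon flags are the ones documented for pre-Swarm Docker:

```
# On every Docker host: point the daemon at the shared key/value store.
# consul-host:8500 and eth0 are placeholders; adjust them to your lab.
dockerd --cluster-store=consul://consul-host:8500 \
        --cluster-advertise=eth0:2376

# On one host: create the overlay network; if the key/value store is
# reachable, the network becomes visible on the other hosts as well.
docker network create --driver overlay my-overlay

# On host A: start a named container attached to the overlay network.
docker run -d --name web --net my-overlay nginx

# On host B: resolve and ping the container on host A by name.
docker run --rm --net my-overlay alpine ping -c 3 web
```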
This looks like a simple high-level checklist. I’ll now describe the actual steps needed to get this working, leaving the details of my failures to the last section of this post.
On March 10th I was in Bologna for Incontro DevOps Italia 2017, the Italian DevOps meeting organized by the great people at BioDec. The three tracks featured several talks in both Italian and English, and first-class international speakers. And, being a conference in Bologna, it also featured first-class local food that no other conference around the world will ever be able to match.
I am guilty of not having considered encrypting my hard drives for far too long, I confess. As soon as I joined Telenor Digital (or, actually, early in the process but a bit too late…) I was commanded to encrypt my data and I couldn’t delay any longer. To my surprise, the process was quite simple on my Debian jessie! Here is a short checklist for your convenience.
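To give a rough idea of the kind of commands involved (this is a generic LUKS sketch, not necessarily the exact checklist: /dev/sdb2 is a made-up device name, and luksFormat destroys whatever is on it):

```
cryptsetup luksFormat /dev/sdb2        # initialize the LUKS container
cryptsetup luksOpen /dev/sdb2 crypted  # map it to /dev/mapper/crypted
mkfs.ext4 /dev/mapper/crypted          # create a filesystem inside it
mount /dev/mapper/crypted /mnt         # mount it like any other volume
```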
As some of my readers already know, I changed jobs in November: I left Opera Software to join Telenor Digital. We have decided not to run any leap second simulation here, so I am not going to publish anything on the subject this year. You can still refer to the post The leap second aftermath for some suggestions I wrote after the latest leap second we had in June/July 2015.
Correction: it’s actually v3! This is what happens when you don’t publish updates for your software for too long…
I took some time this weekend to release an update for cf-deploy. You now have the option to override the configuration hardcoded in the script by means of environment variables. Check the README for the details.
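Purely as a hypothetical illustration of the idea (these variable names are made up; the real ones are in the README):

```
# Made-up variable names -- see the README for the actual ones.
CF_DEPLOY_SERVER=hub.example.com \
CF_DEPLOY_REPO=$HOME/cfengine-policies \
cf-deploy
```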
If you don’t know what cf-deploy is, that’s fair 😉 In two words, it’s a Makefile and a Perl front-end to it that make it easier to pack together a set of files for a configuration management tool and send them to a distribution server. Designed with git and CFEngine in mind, it’s general enough that you can easily adapt it to any version control system and any configuration management tool by simply modifying the Makefile. If it sounds interesting, you are welcome to read Git repository and deployment procedures for CFEngine policies on this same blog. Enjoy!
Back from the holiday season, I have finally found the time to publish a small library on GitHub. It’s called cfengine-tap and can help you write TAP-compatible tests for your CFEngine policies.
TAP is the Test Anything Protocol: a simple text format that test scripts can use to report their results and that test harnesses can consume. Originally born in the Perl world, it is now supported in many other languages.
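For example, a tiny TAP stream reporting three tests, one of them failed, looks like this (the test descriptions are made up):

```
1..3
ok 1 - policy file is syntactically valid
ok 2 - promised package is installed
not ok 3 - service is running
```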
Using this library it’s easier to write test suites for your CFEngine policies. Since it’s publicly available on GitHub and published under a GPL license, you are free to use it and welcome to contribute and make it better (please do).
A new leap second will be introduced at the end of 2016. We have six months to get ready, but this time it may be easier than before, as several timekeeping packages have implemented some “leap smear” algorithm, an approach that seems to be very popular nowadays. For example, ntpd, the reference implementation of NTP, seems to have implemented leap smearing from version 4.2.8p3 onward.
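If I read the ntpd documentation correctly, enabling the smear on such a server boils down to a couple of lines in ntp.conf, something like the following (the leap file path varies across distributions):

```
# Smear the leap second over 86400 seconds (24 hours).
leapsmearinterval 86400
leapfile /usr/share/zoneinfo/leap-seconds.list
```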
We’ll see how it goes. Until then… test!
After some lengthy busy times I’ve been able to restart my work on Docker. Last time I played with some containers to create a Consul cluster using three containers running on the same docker host: something you would never want to do in production.
And the reason why I was playing with a Consul cluster on docker is that you need a key/value store to play with overlay networks in Docker, and Consul is one of the supported stores. Besides, Consul is another technology I had wanted to play with since the first minute I heard about it.
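For a quick single-node lab (nothing like the three-node cluster of the previous post, and certainly not production material), something along these lines is enough to get a key/value store up, assuming the official consul image:

```
# Single-node Consul in dev mode, listening on all interfaces.
docker run -d --name consul -p 8500:8500 \
    consul agent -dev -client=0.0.0.0
```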
To run an overlay network you need more than one Docker host, otherwise it’s pretty pointless. That suggested to me that it was time to automate the installation of a Docker host, so that I could put together a test lab quickly and also maintain it. And, as always, CFEngine was my friend. The following policy will not work out of the box for you, since it uses a number of libraries of mine, but I’m sure you’ll get the idea.
Here’s another quick post about docker; sorry again if it comes out a bit raw.
In my previous post I talked about my first experiments with docker. There were a number of unanswered questions at first, which got answered through updates to the blog post during the following days. All but one. Talking about a containerized process that needs to log through syslog to an external server, the post concluded:
if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…
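For the simpler case, where it’s enough to ship the container’s stdout/stderr to a remote syslog server, Docker’s syslog logging driver does the job; logs.example.com is a placeholder here:

```
# Send the container's output to a remote syslog server. This does
# not help a process that insists on writing to /dev/log "on board".
docker run -d --name app \
    --log-driver=syslog \
    --log-opt syslog-address=udp://logs.example.com:514 \
    nginx
```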