As some of my readers already know, I changed jobs in November: I left Opera Software to join Telenor Digital. We have decided not to run any leap second simulation here, so I am not going to publish anything on the subject this year. You can still refer to the post The leap second aftermath for some suggestions I wrote after the latest leap second we had in June/July 2015.
Update: this article refers to the third version of cf-deploy. For the latest release, check the github repository.
Correction: it’s actually v3! This is what happens when you don’t publish updates to your software for too long…
I took some time this weekend to release an update for cf-deploy. You now have the option to override the configuration hardcoded in the script by means of environment variables. Check the README for the details.
If you don’t know what cf-deploy is, that’s fair 😉 In a nutshell, it’s a Makefile and a Perl front-end to it that make it easier to pack together a set of files for a configuration management tool and send them to a distribution server (a rough sketch of the idea follows below). Designed with git and CFEngine in mind, it’s general enough that you can easily adapt it to any version control system and any configuration management tool by simply modifying the Makefile. If it sounds interesting, you are welcome to read Git repository and deployment procedures for CFEngine policies on this same blog. Enjoy!
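To make the “pack and send” idea concrete, here is a minimal shell sketch of what such a deployment boils down to; the paths, directory names and host name are placeholders of mine, not the actual targets shipped with cf-deploy’s Makefile:

```
# Illustrative sketch only: paths and host name are assumptions,
# not what cf-deploy actually runs.
mkdir -p /tmp/staging
# 1. export a clean copy of the policy files from version control
git archive --format=tar HEAD masterfiles | tar -x -C /tmp/staging
# 2. send the staged tree to the distribution server
rsync -az --delete /tmp/staging/masterfiles/ \
      policyhub.example.com:/var/cfengine/masterfiles/
```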
A new leap second will be introduced at the end of 2016. We have six months to get ready, but this time it may be easier than before, as several pieces of timekeeping software have implemented a “leap smear” algorithm, which seems to be a very popular approach nowadays. For example, ntpd, the reference implementation of NTP, appears to have supported leap smearing since version 4.2.8p3.
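For reference, and assuming I read the ntp documentation correctly, enabling the smear on an ntpd 4.2.8p3 (or later) server is a one-line change in ntp.conf; the 86400-second window below is just an example value:

```
# ntp.conf on the server side (ntpd >= 4.2.8p3)
# spread the leap second over 24 hours instead of stepping the clock
leapsmearinterval 86400
```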
We’ll see how it goes. Until then… test!
After a long busy period I’ve been able to resume my work on Docker. Last time I played with containers to create a Consul cluster, using three containers running on the same Docker host: something you would never want to do in production.
The reason I was playing with a Consul cluster on Docker is that you need a key/value store to experiment with overlay networks in Docker, and Consul is one of the supported stores. Besides, Consul is another technology I had wanted to play with since the first moment I learned about it.
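Just to give a taste of where this is going, and assuming I remember the options of that Docker generation (1.9 and later) correctly, pointing the engines at a Consul store and creating an overlay network looks roughly like this; the Consul address and the interface name are placeholders:

```
# On each Docker host, start the daemon with the cluster options
# (the binary was "docker daemon" in 1.9-1.11, "dockerd" from 1.12);
# consul.example.com and eth0 are placeholders.
dockerd --cluster-store=consul://consul.example.com:8500 \
        --cluster-advertise=eth0:2376

# Then, from any of the hosts, create a multi-host overlay network:
docker network create -d overlay testnet
```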
To run an overlay network you need more than one Docker host, otherwise it’s pretty pointless. That suggested to me that it was time to automate the installation of a Docker host, so that I could put together a test lab quickly and also maintain it. And, as always, CFEngine was my friend. The following policy will not work out of the box for you, since it uses a number of my own libraries, but I’m sure you’ll get the idea.
Here’s another quick post about Docker; sorry again if it comes out a bit raw.
In my previous post I talked about my first experiments with Docker. There were a number of unanswered questions at first, which got answered through updates to the blog post during the following days. All but one, that is. Talking about a containerized process that needs to log through syslog to an external server, the post concluded:
if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…
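One possible workaround, and I should stress that this is my own guess rather than what the post goes on to describe, is to share the host’s syslog socket with the container, so that whatever writes to /dev/log inside the container actually talks to the host’s syslog daemon:

```
# Bind-mount the host's syslog socket into the container
# (assumes a syslog daemon is listening on /dev/log on the host;
# "myimage" is a placeholder).
docker run -d -v /dev/log:/dev/log myimage
```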
This is a quick post; apologies in advance if it comes out a bit raw.
I’ve been reading about Docker for a while and even attended the Docker day in Oslo. I decided it was about time to try something myself, to get a better understanding of the technology and of whether it could be useful for my use cases.
As always, I despise “hello world”-style examples, so I leaned immediately towards something closer to a real case: how hard would it be to make CFEngine’s policy hub a Docker service? After all, it’s just one process (cf-serverd) with all its data (the files in /var/cfengine/masterfiles), which looks like a perfect fit, at least for a realistic test. I went through the relevant parts of the documentation (see “References” below) and I’d say that it pretty much worked; where it didn’t, I got an understanding of why and of how it should be fixed.
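To give an idea of the shape of the experiment, a first attempt could look something like the sketch below; the image name and the paths on the host are placeholders of mine, not the exact commands from the post. cf-serverd listens on TCP port 5308, and its --no-fork option keeps it in the foreground as Docker expects:

```
# Hypothetical example: run cf-serverd in a container, publishing its
# port and mounting the masterfiles from the host.
docker run -d --name cfengine-hub \
  -p 5308:5308 \
  -v /srv/cfengine/masterfiles:/var/cfengine/masterfiles \
  example/cfengine-hub \
  /var/cfengine/bin/cf-serverd --no-fork
```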
Oh, by the way, a run of docker search cfengine will tell you that I’m not the only one to have played with this 😉
On March 7th I was at the DevOps Norway meetup, where Jan Ivar Beddari and I both presented extended versions of the ignite talks we gave at Config Management Camp in February. The talks were streamed and recorded through Hangouts, and the recording is available on YouTube.
The meetup gave me the opportunity to explain in a bit more detail my view on why we need a completely new design for our configuration management tools. I had already tried in a blog post, which caused a certain amount of controversy, and it was good to get a second chance.
I’d recommend you watch both Jan Ivar’s talk and mine, but if you’re interested only in my part you can jump straight to it here:
And don’t forget to check out the slides, both Jan Ivar’s and mine.
Here’s the video of my ignite talk at Config Management Camp 2016: “the three legs of modern configuration management (…or maybe it’s four)”. The slides of the talk are also available on SpeakerDeck.
It didn’t take many hours for Luke Kanies to pick up my provocative blog post and express his disappointment:
I’m not going to complain about his words: if I were him I would have thought the same things, and maybe written the same things too. At the same time, it’s kind of funny that a lot of the inspiration for that post came from Luke himself. I’ll explain.
CFEngine, Puppet, Chef… and Ansible, Salt… and many others. We have loved them and we have hated them. It’s time we take a picture, because they may be gone in a few years’ time. This is the word, or rather the cry, that came from more than a few Configuration Management practitioners at this year’s Config Management Camp.