Here’s another quick post about Docker; apologies again if it comes out a bit raw.
In my previous post I talked about my first experiments with Docker. There were a number of unanswered questions at first, most of which were answered through updates to the post over the following days. All but one. When talking about a containerized process that needs to log through syslog to an external server, the post concluded:
if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…
This is a quick post; apologies in advance if it comes out a bit raw.
I’ve been reading about Docker for a while and even attended the Docker day in Oslo. I decided it was about time to try something myself, to get a better understanding of the technology and of whether it could be useful for my use cases.
As always, I despise “hello world”-style examples, so I leaned immediately towards something closer to a real case: how hard would it be to make CFEngine’s policy hub a Docker service? After all, it’s just one process (cf-serverd) with all its data (the files in /var/cfengine/masterfiles), which looks like a perfect fit, at least for a realistic test. I went through the relevant parts of the documentation (see “References” below) and I’d say that it pretty much worked; where it didn’t, I got an understanding of why, and of how it could be fixed.
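If you’re wondering what that looks like in practice, the image boils down to something like the following. Mind that this is just a sketch and not my actual Dockerfile: the base image, the package installation and the paths are assumptions modeled on a stock community install.

```dockerfile
# Sketch of a CFEngine policy hub image (illustrative, not the real thing).
# Assumes a cfengine-community package is installable via apt,
# e.g. from CFEngine's own package repository.
FROM debian:jessie

RUN apt-get update && apt-get install -y cfengine-community

# The policy served by cf-serverd lives in masterfiles
COPY masterfiles/ /var/cfengine/masterfiles/

# cf-serverd listens on 5308/TCP by default
EXPOSE 5308

# Run the file server in the foreground, as Docker expects
CMD ["/var/cfengine/bin/cf-serverd", "--no-fork"]
```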
Oh, by the way, a run of `docker search cfengine` will tell you that I’m not the only one to have played with this 😉
On March 7th I was at the DevOps Norway meetup, where both Jan Ivar Beddari and I presented extended versions of the ignite talks we held at Config Management Camp in February. The talks were streamed and recorded through Hangouts, and the recording is available on YouTube.
The meeting gave me the opportunity to explain in a bit more detail my viewpoint on why we need a completely new design for our configuration management tools. I had already tried in a blog post that caused some amount of controversy, and it was good to get a second chance.
I’d recommend you watch both Jan Ivar’s talk and mine, but if you’re interested only in my part you can check it out here:
And don’t forget to check out the slides, both Jan Ivar’s and mine.
Here’s the video of my ignite talk at Config Management Camp 2016: “the three legs of modern configuration management (…or maybe it’s four)”. The slides of the talk are also available on SpeakerDeck.
It didn’t take many hours for Luke Kanies to pick up my provocative blog post and express his disappointment:
I’m not going to complain about his words: if I were him, I would have thought the same things, and maybe written the same things too. At the same time, it’s kind of funny that a lot of the inspiration for that post came from Luke himself. I’ll explain.
CFEngine, Puppet, Chef… and Ansible, Salt… and many others. We have loved them and we have hated them. It’s time we take a picture, because they may be gone in a few years’ time. This is the word — or the cry — that came from many Configuration Management practitioners at this year’s Config Management Camp.
Learning more about systemd has been on my agenda since the release of Debian 8 “Jessie”. With the new year I decided that I had procrastinated enough: I made a plan and started to study accordingly. Today it was time for action: to verify my understanding of the documentation I’ve read so far, I decided to put together unit files for CFEngine. It was an almost complete success, and the result is now on GitHub for everyone to enjoy. I would appreciate it if you gave them a shot and reported back.
Main goals achieved:
- I successfully created three service unit files, one for each of CFEngine’s daemons: cf-serverd, cf-execd and cf-monitord; the units are designed so that if any of the daemons is killed for any reason, systemd will bring it back immediately (see the sketch after this list).
- I successfully created a target unit file that puts together the three service units. When the cfengine3 target is started, the three daemons are requested to start; when the cfengine3 target is stopped, the three daemons are stopped. The cfengine3 target completely replaces the init script functionality.
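To give you an idea, here is the general shape of one of the service units; this is a simplified sketch, the actual files are in the GitHub repository.

```ini
# cf-serverd.service: simplified sketch, the real unit is on GitHub.
[Unit]
Description=CFEngine file server
After=network.target
# Stopping cfengine3.target stops this daemon too
PartOf=cfengine3.target

[Service]
ExecStart=/var/cfengine/bin/cf-serverd --no-fork
# Bring the daemon back immediately if it dies for any reason
Restart=always

[Install]
# "systemctl enable" hooks the service to the target
WantedBy=cfengine3.target
```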
Goal not achieved: I gave socket activation a shot, so that the activation of cf-serverd would be delayed until a connection was initiated to port 5308/TCP. That didn’t work properly: systemd tried to start cf-serverd, the daemon died immediately, and systemd tried again and again until it gave up. I’ll have to investigate whether cf-serverd needs to support socket activation explicitly, or whether I was doing something wrong. The socket unit is not part of the distribution on GitHub, but its contents are reported below. In case you spot any problem, please let me know.
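A minimal socket unit along the lines of what I tried looks like this; take the unit name and the details as illustrative rather than as the exact file:

```ini
# Illustrative socket unit for cf-serverd socket activation;
# the unit name and details are assumptions, not the exact file I used.
# A matching cf-serverd.service must exist for systemd to activate.
[Unit]
Description=CFEngine file server socket

[Socket]
# cf-serverd's default port
ListenStream=5308

[Install]
WantedBy=sockets.target
```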
In my experience, when people talk about a system’s configuration they mostly think of software to be installed and configuration files to be deployed. True, those are part of a system’s configuration, but there’s more to it — if Configuration Management were only that, you could rightfully call it “provisioning” instead. For example, another part of a system’s configuration is that certain critical services must be running and/or certain other services must not be running. And in fact, any configuration management tool has provisions to manage system services and ensure they are in the desired state (while the tools may differ a lot on the “when”, “how” and “how often” the state is checked).
CFEngine is no exception. You can take advantage of ready-to-use frameworks like NCF or EFL, or roll your own checks. What I’m presenting today is a simple bundle that I wrote called `watch_service`, which you can use to ensure that certain system services are up or down.
My approach is similar to NCF’s bundle called `service_action`, in that it tries to provide a generic, system-agnostic bundle to manage services, but with a few differences (a sketch of the overall idea follows the list):
- where `service_action` relies on information in NCF itself to make the bundle simpler to use, my `watch_service` relies only on CFEngine’s `standard_services` knowledge, as available in the standard library;
- `service_action` returns information to the agent in the form of namespace-scoped classes (e.g.: the service was in the desired state, or the service was not in the desired state and the problem has been fixed successfully), while `watch_service` only reports about the events by means of another bundle called `report`, whose code will also be provided in the last part of this post;
- `service_action` supports many different actions, while `watch_service` only supports “up” (ensure the service is running) or “down” (ensure the service is not running).
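To make the comparison concrete before we dive into the code, here is a minimal sketch of the idea. It’s illustrative only, not the actual implementation: the use of the standard library’s `results()` classes body and the `report()` call shown here are assumptions.

```cfengine3
# Illustrative sketch of a watch_service-like bundle, not the actual code.
# Services promises are implemented through the standard library's
# standard_services; the results() classes body is assumed available.
bundle agent watch_service(service, state)
{
  classes:
      "up_wanted" expression => strcmp("up", "$(state)");

  services:
    up_wanted::
      "$(service)"
        service_policy => "start",
        classes => results("bundle", "service");

    !up_wanted::
      "$(service)"
        service_policy => "stop",
        classes => results("bundle", "service");

  methods:
    service_repaired::
      # report() is the companion bundle; its signature here is a guess
      "report" usebundle => report("$(service) was not $(state) and has been fixed");
}
```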
A word of caution
This post is addressed to all those people who think they know what DevOps means, but don’t. If you don’t recognize yourself in this post, then maybe it’s not for you. If you do recognize yourself, then beware: you’re going to be insulted. Read at your own risk, and don’t bother asking for apologies.