A dockerized policy hub

This is a quick post; apologies in advance if it comes out a bit raw.

I’ve been reading about Docker for a while and even attended the Docker day in Oslo. I decided it was about time to try something myself, to get a better understanding of the technology and of whether it could be useful for my use cases.

As always, I despise “hello world”-style examples, so I leaned immediately towards something closer to a real case: how hard would it be to make CFEngine’s policy hub a Docker service? After all, it’s just one process (cf-serverd) with all its data (the files in /var/cfengine/masterfiles), which makes it look like a perfect fit, at least for a realistic test. I went through the relevant parts of the documentation (see “References” below) and I’d say that it pretty much worked; where it didn’t, I got an understanding of why and of how it should be fixed.

Oh, by the way, a run of docker search cfengine will tell you that I’m not the only one to have played with this 😉

Building the image

To build the image I created the following directory structure:

cf-serverd
|-- Dockerfile
`-- etc
    `-- apt
        |-- sources.list.d
        |   `-- cfengine-community.list
        `-- trusted.gpg.d
            `-- cfengine.gpg

The cfengine-community.list and cfengine.gpg files were created according to the instructions on the page Configuring CFEngine Linux Package Repositories. The Dockerfile is as follows:

FROM debian:jessie
MAINTAINER Marco Marongiu
LABEL os.distributor="Debian" os.codename="jessie" os.release="8.3" cfengine.release="3.7.2"
RUN apt-get update && apt-get install -y apt-transport-https
ADD etc/apt/trusted.gpg.d/cfengine.gpg /tmp/cfengine.gpg
RUN apt-key add /tmp/cfengine.gpg
ADD etc/apt/sources.list.d/cfengine-community.list /etc/apt/sources.list.d/cfengine-community.list
RUN apt-get update && apt-get install -y cfengine-community=3.7.2-1
ENTRYPOINT [ "/var/cfengine/bin/cf-serverd" ]
CMD [ "-F" ]
EXPOSE 5308

With this Dockerfile, running docker build -t bronto/cf-serverd:3.7.2 . in the build directory will create an image tagged bronto/cf-serverd:3.7.2, based on the official Debian Jessie image:

  • we update the APT sources and install apt-transport-https;
  • we copy the APT key into the container and run apt-key add on it (I tried copying it into /etc/apt/trusted.gpg.d directly, but it didn’t work for some reason);
  • we add the APT source for CFEngine;
  • we update the APT sources once again and install CFEngine version 3.7.2 (if you don’t specify the version, it will install the highest version available, 3.8.1 to date);
  • we run cf-serverd and expose TCP port 5308, the port used by cf-serverd.

Thus, running docker run -d -P bronto/cf-serverd:3.7.2 will create a container that publishes cf-serverd on a random port on the host.
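
If you want to find out which host port Docker picked, a quick way (just a sketch, nothing specific to this image) is to capture the container ID and ask docker port for the mapping of port 5308:

# start the container and keep its ID around
CONTAINER_ID=$(docker run -d -P bronto/cf-serverd:3.7.2)
# show the host port that maps to cf-serverd's port 5308
docker port "$CONTAINER_ID" 5308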

Update: having defined an entrypoint in the image, you may have trouble running a shell in the container to take a look around. There is a solution, and it’s simple indeed:

docker run -ti --entrypoint /bin/bash bronto/cf-serverd:3.7.2

What you should add

[Update: in the first edition of this post this section was a bit different; some of the things I had doubts about are now solved and the section has been mostly rewritten]

The cf-serverd image (and the containers generated from it) is functional but doesn’t do anything useful, for two reasons:

  1. it doesn’t have a file /var/cfengine/inputs/promises.cf, so cf-serverd doesn’t have a proper configuration
  2. it doesn’t have anything but the default masterfiles in /var/cfengine/masterfiles.

Both of those (a configuration for cf-serverd and a full set of masterfiles) should be added to images built upon this one, so that the new image is fully configured for your specific needs and environment. In addition, you should ensure that /var/cfengine/ppkeys is persistent, e.g. by mounting it into the container from the docker host, so that neither the hub’s nor the clients’ public keys are lost when you replace the container with a new one. For example, you could keep the keys on the docker host at /var/local/cf-serverd/ppkeys and mount that directory into the container via docker run’s -v option:

docker run -p 5308:5308 -d -v /var/local/cf-serverd/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2
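
If the host directory is empty to begin with, you need the hub’s key pair to end up in it. One way (a sketch, using the same illustrative host path as above) is to run the image’s own cf-key once, overriding the entrypoint, so that the keys are generated straight into the persistent directory:

# create the persistent directory on the docker host
mkdir -p /var/local/cf-serverd/ppkeys
# generate the hub's key pair into it, using the image's own cf-key
docker run --rm -v /var/local/cf-serverd/ppkeys:/var/cfengine/ppkeys --entrypoint /var/cfengine/bin/cf-key bronto/cf-serverd:3.7.2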

A possible deployment workflow

What’s normally done in a standard CFEngine set-up is that policy updates are deployed on the policy hub in /var/cfengine/masterfiles and the hub distributes the updates to the clients. In a dockerized scenario, one would probably build a new image based on the cf-serverd image, adding an updated set of masterfiles and a configuration for cf-serverd, and then instantiate a container from the new image. You would do that every time the policy is updated.
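
Just to give an idea, a derived image could be built from a Dockerfile along these lines (the masterfiles and inputs directories here are hypothetical, coming from your own policy repository checked out into the build directory):

FROM bronto/cf-serverd:3.7.2
# your full set of masterfiles, to be served to the clients
COPY masterfiles /var/cfengine/masterfiles
# the inputs cf-serverd reads its own configuration from (promises.cf and friends)
COPY inputs /var/cfengine/inputs

You could then tag each build after the policy release it contains (e.g. docker build -t bronto/cf-serverd-policy:20160329 ., where the repository name is, again, just an example), so that rolling back to a previous policy is a matter of starting a container from an older image.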

Things to be sorted out

[Update: This section, too, went through a major rewrite after I learnt more about the solutions.]

Is the source IP of a client rewritten?

Problem statement: I don’t know enough about Docker’s networking to tell whether the source IP of the clients is rewritten. If it is, cf-serverd’s ACLs would be completely ineffective and a proper firewall would have to be established on the host.

Solution: the address is rewritten if the connection is made through localhost (the source address appears as the host’s address on the docker bridge); if the connection is made through a “standard” interface, the address is not rewritten, which is good from cf-serverd’s perspective, as it can then apply the standard ACLs.
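
In other words, if you rely on cf-serverd’s ACLs you want clients to reach the container through a real interface rather than through localhost; binding the published port to a specific host address makes that explicit (the address below is just an example):

docker run -d -p 192.0.2.10:5308:5308 -v /var/local/cf-serverd/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2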

Can logs be sent to syslog?

Problem statement: How can one send the logs of a containerized process to a syslog server?

Solution: a docker container can, by itself, send logs to a syslog server, as well as to other logging platforms and destinations, through logging drivers. In the case of our cf-serverd container, the following command line sends the logs to a standard syslog service on the docker host:

docker run -P -d --log-driver=syslog --log-opt tag="cf-serverd" -v /var/cfengine/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2

This produces log lines like this one:

Mar 29 15:10:57 tyrrell docker/cf-serverd[728]: notice: Server is starting...

Notice how the tag is reflected in the process name: without the tag, you’d get the short ID of the container instead, something like this:

Mar 29 11:15:27 tyrrell docker/a4f332cfb5e9[728]: notice: Server is starting...

You can also send logs directly to journald on systemd-based systems. You gain some flexibility, in that you can filter logs via metadata. However, to the best of my knowledge, it’s not possible to get a log message like the first one above, which clearly states that it’s cf-serverd running in Docker: with journald, all you get when running journalctl --follow _SYSTEMD_UNIT=docker.service is something like:

Mar 29 15:22:02 tyrrell docker[728]: notice: Server is starting...

However, if you started the service in a named container (e.g. with the --name=cf-serverd option), you can still enjoy some cool stuff; for example, you can see the logs from just that container by running:

journalctl --follow CONTAINER_NAME=cf-serverd

Actually, if you have run several containers with the same name at different times, journalctl will return the logs from all of them, which I find a nice feature. But if all you want is the logs from, say, the currently running container named cf-serverd, you can still filter by CONTAINER_ID. Not bad.
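
For reference, a run command combining the journald logging driver with a named container could look like this (the ppkeys mount is the same as in the syslog example above):

docker run -P -d --log-driver=journald --name=cf-serverd -v /var/cfengine/ppkeys:/var/cfengine/ppkeys bronto/cf-serverd:3.7.2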

However, if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…

References

 
