Here’s another quick post about docker; sorry again if it comes out a bit raw.
In my previous post I talked about my first experiments with docker. There were a number of unanswered questions at first, which got answers through updates to the blog post during the following days. All but one. When talking about a containerized process that needs to log through syslog to an external server, the post concluded:
if the dockerized process itself needs to communicate with a syslog service “on board”, this may not be enough…
Let’s restate the problem to make it clear:
- the logs from a docker container are available either through the `docker logs` command or through logging drivers;
- the log of a docker container consists of the output of the containerized process to the standard filehandles; if the process has no output, no logs are exported; if the process logs to files, those files must be collected either by copying them out of the container or through an external volume/filesystem, otherwise they will be lost;
- if we want a containerized process to send its logs through syslog and the process expects to have a syslog daemon on board of the same host (in this case, the same container), you are in trouble.
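The three cases above can be sketched as command lines. This is only an illustration: the image and container names (`myapp`, `myimage`) are made up, and the commands are built as strings so the sketch stays runnable even on a machine without docker installed:

```shell
#!/bin/sh
# Hypothetical image/container names, just for illustration.

# 1) Default json-file driver: whatever PID 1 writes to stdout/stderr
#    is what "docker logs" shows back.
logs_cmd='docker logs myapp'

# 2) A logging driver forwards that same stdout/stderr elsewhere,
#    for example to the local syslog:
driver_cmd='docker run -d --name myapp --log-driver=syslog --log-opt tag=myapp myimage'

# 3) Files written inside the container are lost with it, unless the
#    directory lives on a volume:
volume_cmd='docker run -d --name myapp -v /srv/logs:/var/log myimage'

echo "$logs_cmd"
echo "$driver_cmd"
echo "$volume_cmd"
```

None of this helps the third bullet, though: a process that talks to `/dev/log` still finds no syslog daemon listening inside its own container.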
Well, OK, maybe “in trouble” is a bit too much. Let’s say instead that running more than one process in the same container requires a bit more work than the “one container, one process” case. Putting more than one process in a container is actually so common that it’s even covered in the official documentation. That’s probably why supervisor seems to be so common among docker users.
Since the solution is explained in the documentation I could well have used it, but I was more intrigued by understanding what a real init system looks like in a container. So rather than just study and slavishly apply the supervisor approach, I decided to research how to run systemd inside a docker container. It turned out it may not be easy or super-safe, but it’s definitely possible. This is what I did:
- starting from the ubuntu Baseimage running systemd that I found on GitHub, I built a new image of Debian jessie running systemd;
- with that new image, I built a proof-of-concept image based on the cf-serverd image described in my previous post, this time running cf-serverd and syslog-ng in the container.
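For the curious, here is a minimal sketch of what such a Dockerfile can look like. This is an illustration in the spirit of the articles linked in the references, not the actual dockerfiles (those are on GitHub); the package list and the unit cleanup in particular are assumptions:

```dockerfile
# Sketch only: packages and removed units are assumptions, not the exact
# content of the images described in this post.
FROM debian:jessie
ENV container docker
RUN apt-get update && \
    apt-get install -y --no-install-recommends systemd syslog-ng && \
    apt-get clean
# Drop units that make no sense inside a container (console getty, udev, ...)
RUN rm -f /lib/systemd/system/multi-user.target.wants/getty.target \
          /lib/systemd/system/sysinit.target.wants/systemd-udevd.service
# systemd expects the host's cgroup hierarchy, mounted read-only at run time
VOLUME [ "/sys/fs/cgroup" ]
CMD [ "/lib/systemd/systemd" ]
```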
And that worked! According to the logs I actually have cf-serverd and syslog-ng running in the container. For example:

root@tyrrell:/home/bronto# docker run -ti --rm -P --cap-add=SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro --log-driver=syslog --log-opt tag="poc-systemd" bronto/poc-systemd
systemd 215 running in system mode. (+PAM +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ -SECCOMP -APPARMOR)
Detected virtualization 'other'.
Detected architecture 'x86-64'.

Welcome to Debian GNU/Linux 8 (jessie)!

Set hostname to <ffb5129613a3>.
[ OK ] Reached target Paths.
[ OK ] Created slice Root Slice.
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Listening on Journal Socket.
[ OK ] Created slice System Slice.
[ OK ] Listening on Syslog Socket.
[ OK ] Reached target Sockets.
[ OK ] Reached target Slices.
[ OK ] Reached target Swap.
[ OK ] Reached target Local File Systems.
         Starting Create Volatile Files and Directories...
         Starting Journal Service...
[ OK ] Started Journal Service.
[ OK ] Started Create Volatile Files and Directories.
[ OK ] Reached target System Initialization.
[ OK ] Reached target Timers.
[ OK ] Reached target Basic System.
         Starting CFEngine 3 deamons...
         Starting Regular background program processing daemon...
[ OK ] Started Regular background program processing daemon.
         Starting /etc/rc.local Compatibility...
         Starting System Logger Daemon...
         Starting Cleanup of Temporary Directories...
[ OK ] Started /etc/rc.local Compatibility.
[ OK ] Started Cleanup of Temporary Directories.
[ OK ] Started System Logger Daemon.
[ OK ] Started CFEngine 3 deamons.
[ OK ] Reached target Multi-User System.
“I want to try that, too! Can you share your dockerfiles?”
Sure thing, just head over to GitHub and enjoy!
Things to be sorted out
The systemd container doesn’t stop properly. For some reason, when you run `docker stop`, docker sends a signal, which the container defiantly ignores; docker then waits 10 seconds and kills it. I don’t know why yet, but hey! I’ve only been using this contraption for a few days!
I’m not sure how things are different in Ubuntu but you may want to check out how fedora docker images manage running systemd. There’s an article explaining some of the problems here https://rhatdan.wordpress.com/2014/04/30/running-systemd-within-a-docker-container/ and an example dockerfile for systemd here https://github.com/fedora-cloud/Fedora-Dockerfiles/blob/ade5aa562ff72419739513dc318c2a8947b2275f/systemd/systemd/Dockerfile
Hi Justin, thanks for commenting!
Daniel Walsh’s article is actually linked in my post, though it’s the instance from Red Hat’s developers blog. It was an important reference, the first one I found that described in detail how it was possible to have systemd running in a container and what the challenges are:
http://developers.redhat.com/blog/2014/05/05/running-systemd-within-docker-container/
I should have put it in the references and I’ll do that shortly when I’m done with this comment. No, hold on, it is in the references! It’s always been! 🙂

You may find my `dockerfy` utility useful for reaping zombies, handling signals properly, starting services, pre-running initialization commands before the primary command starts, editing configuration files via templates, overlaying content and managing secrets. https://github.com/markriggins/dockerfy
For example:
dockerfy --secrets-files /secrets/secrets.env \
    --template /app/nginx.conf.tmpl:/etc/nginx/nginx.conf \
    --wait 'tcp://{{ .Env.MYSQLSERVER }}:{{ .Env.MYSQLPORT }}' --timeout 60s \
    --run '/app/bin/migrate_lock' --server='{{ .Env.MYSQLSERVER }}:{{ .Env.MYSQLPORT }}' --password='{{ .Secret.MYSQLPASSWORD }}' -- \
    --start /app/bin/cache-cleaner-daemon -- \
    --stderr /var/log/nginx/error.log \
    --reap \
    --user nobody -- \
    nginx "-g daemon off;"
Would do the following:
- load secrets from a file,
- create an nginx.conf file from a template,
- wait up to 60 seconds for a mysql database to start accepting connections,
- run a database migration against the mysql database, using a secret password,
- start a service named 'cache-cleaner-daemon',
- start a goroutine to reap zombies,
- run nginx as user "nobody",
- tail /var/log/nginx/error.log to stderr.
If the database migration fails, then the container will exit without starting nginx. While nginx is running, if the cache-cleaner-daemon dies, the entire container will shut down so the cloud platform can start up another instance.
Hope this helps! You can build from source or use a pre-built binary from my latest GitHub release.
It seems to be almost working for me, but I have a problem: some permission seems to be required for your method. When I start the container, it returns this message:

Failed to mount tmpfs at /run: Permission denied

I don’t know how to grant that permission. Can you give me your opinion?
Hi and thanks for commenting. I need to look into that, it’s been a while since I last tried it. I’ll try to set some time aside and let you know. If you find out yourself, please post your solution here!
Ciao
— bronto
Hi again!
I haven’t had time to look into this and probably won’t for a few more weeks. I am sorry about that. If you happen to find the root cause of the problem you mention, please let us know. Thanks.
— bronto
Try --privileged=true
This might be heavyweight but "--cap-add=SYS_ADMIN" does fix this item.
The container not stopping turned out to be systemd reexec on SIGTERM. The way to go would be "docker kill -s RTMIN+3 " (note there are other signals for the halt and poweroff targets in "man systemd")
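A quick sketch of the clean-shutdown approach discussed in this thread. The container name `poc-systemd` is assumed for illustration, and the command is built as a string so it can be inspected on a machine without docker:

```shell
#!/bin/sh
# SIGRTMIN+3 makes systemd enter halt.target instead of ignoring
# the signal (or re-executing, as it does on SIGTERM).
stop_cmd='docker kill -s RTMIN+3 poc-systemd'
echo "$stop_cmd"

# Docker 1.9+ also allows baking the signal into the image, so that a
# plain "docker stop" does the right thing: add the Dockerfile line
#   STOPSIGNAL SIGRTMIN+3
```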