Down the rabbit hole: installing software

Preface

This article is about using configuration management to install software on your own computers (e.g. your laptops, or the computers used by your family and relatives), and about how easy it is to overlook the complexity of this task, whether you are a newbie or an expert.

If you already know about configuration management and how it makes sense to use it at a small scale (like, again, your own computers or your family’s), you can skip ahead to the section “New job, new setup”.

If you already know about configuration management but you are asking yourself why it should make sense to use it at a small scale, I suggest that you start one section earlier, at “Personal configuration management”.

If you are new to configuration management, or you wonder what could possibly be difficult about installing software on a set of systems, I suggest that you read the whole article.

In any case, happy reading!

Once upon a time

What if I told you that my first “serious job” was to compile and install free software for a bunch of systems and architectures? Crazy? Well, believe it or not, that’s the truth. My first “serious job” was at the CRS4 Research Centre back in Sardinia, where I was initially hired as a consultant. My job was to help them renew the stock of free software they used in their work, compiling a number of packages for several architectures. At the time they had IBM AIX v3 and 4, SGI IRIX v5 and 6, Solaris v2.3 and 2.5, and HP-UX v9 and 10, and keeping up with new versions of the software in such a diverse environment was just impossible for my friends in the Computers & Network team to do on top of the usual daily tasks.

Every day, I would take a train from my town to Cagliari, get into my office and compile software for all those platforms, the whole day. This lasted for some six months between 1997 and 1998, and marked the start of my career as a System Administrator.

Back then, compiling software from sources was the most common way to install it on your systems. Sure, there were special cases where you got ready-to-use packages, but more often than not compiling was the best option. The situation is quite different today: unless you stumble upon something exotic that you desperately need and that is provided in source code only, there are easy ways to install software on your computers. You use yum to install rpm packages on Red Hat Linux systems and their derivatives, or apt and deb packages to install software on Debian, Ubuntu and other Debian derivatives. Right?

Well, not really. At the end of the 90s we had a single, complex but pretty standardised way to install software on systems (the configure / make / make test / make install chant, with a few variations here and there), but there were many cases where this approach wasn’t ideal. Today we have many different package/software managers, each of which makes it way easier to install a piece of software compared to the late 90s, but there are too many of them now: if you can’t rely on just one of those systems to manage all of your software installations, things get pretty messy quickly.

Fast forward from 1997 to 2020: today I am a Site Reliability Engineer at RiksTV in Oslo, Norway. I have experience. Having started my career where I started it, I should have been aware of the traps of software installation and of how this problem is not as easy to solve as it looks at first glance. Alas, I wasn’t aware at all, as the following story demonstrates.

Configuration management

If you are reading this section, there is a chance that you don’t know what configuration management is. I will try to write a definition, hoping that it is easy enough for a newcomer to understand: configuration management is a process that defines the correct configuration of a system and continuously checks that the system matches that definition, correcting any deviation from the desired state. Such configuration includes not only the settings of the operating system and the applications running on it, but also (for example) which users must or must not exist on the system, which groups they should or shouldn’t be members of, which processes or services should be allowed or prohibited to run on the system, and more.
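
To make the definition concrete, here is a minimal sketch of such a desired-state declaration in CFEngine, the tool this article revolves around; the file and the permissions are just an example of mine, and the mog() body comes from CFEngine’s standard library:

bundle agent example_desired_state
{
  files:
      # Declare the desired state; the agent converges to it on every
      # run, no matter who or what changed the file in the meantime.
      "/etc/ssh/sshd_config"
        perms => mog("0600", "root", "root");
}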

Configuration management was the precursor of what we call Infrastructure as Code today. When cloud infrastructures were not as commonplace as they are now, configuration management systems would ensure the right configuration on all of a company’s servers across the globe. Today, with cloud infrastructures more common and a lot of applications designed as microservices running in containers (possibly in Kubernetes), configuration management has been replaced by other tools in many places. But it still has its use wherever hardware servers are used at scale, or anywhere you have hardware that you can’t replace with the same ease as an AWS instance or a Docker container, and thus have to manage the lifecycle of a long-lived system.

Personal configuration management

I have always maintained that configuration management makes sense even at smaller scales. Some say that configuration management makes sense only if the number of systems you are managing is over a certain threshold, as the complexity of maintaining a configuration management solution starts to pay off only above that threshold. I agree, as long as that threshold is bigger than or equal to one. In fact, if you have, say, two laptops to manage, there is a certain convenience in making an improvement of some kind on one and having it reflected on the second. But even if you have only one computer (say, a laptop), things can still go south: it can fall from your table, you can spill your beer or coffee on it, or it can be stolen. If that happens, it’s good if it takes just a few hours, and not days, from when you buy a new laptop to when you are ready to resume your work. If the right configuration for your laptop is defined in a configuration management system, that is absolutely possible.

When I worked for Opera Software, I started doing my personal configuration management, too. I was using the same tools for both work and personal configurations: things that I designed at work I used at home, and vice versa, things I designed for my personal configuration management were then applied to the CFEngine policies at work. That made it very easy to keep everything up to date.

When I joined Telenor Digital at the end of 2016, things changed. Personal configuration management was still useful and important to me, but I wasn’t using CFEngine on a daily basis any more, and my set-up started to grow a bit too complex to maintain. I was able to keep it going well enough to support Debian 9 on my computers, but I was still running CFEngine 3.7 (the current LTS version is 3.15…) and I never got around to updating the policies for Debian 10 — in fact, I never updated my laptops to Debian 10. Until now.

New job, new setup

On March 2nd, 2020 I started as an SRE at RiksTV, and of course I got a new computer. I have recently written about the difficulties of installing Debian 10 on a Lenovo ThinkPad P1 Gen2. In addition, new tools and different software were required to do my job. I thought it was a good opportunity to restart my personal configuration management from scratch. I knew it was a big endeavour, and I needed to choose carefully what to start with: something that was important for the configuration of the system and that I could do in a reasonable amount of time.

I decided to go for software installation. How deep was the rabbit hole I was starting to dig, I would discover in the following weeks…

New foundations

My first attempt at Personal Configuration Management (PCM) used Dropbox as the distribution system for my policies. A git repository would live in a Dropbox folder and be checked out on the clients. From there, it would be merged with Normation’s NCF policy framework and deployed where CFEngine could use it.

This was perfectly OK when I put the system together some time in 2015, but it isn’t any more. At the time I didn’t have access to a cloud service, as I do today with AWS, and I could install the Dropbox client on as many computers as I liked. Today, having a CFEngine server in the cloud for myself is not a crazy idea any more, while Dropbox has set a limit of three clients on free subscriptions.

I was a bit torn about whether or not to keep using NCF, and in the end I decided not to. But I did decide to retain my hENC system for hierarchical classification and configuration, although with a data structure greatly simplified compared to the original setup.

And, finally, I would use an LTS version of CFEngine, namely the latest one: 3.15. To deploy my policy hub on AWS, I reused part of the work I had done with Terraform some time ago.

With all these decisions made, it was time to start working on the software installation.

The easiest: install bundled software

Of course, the easiest software to install is what’s already included in your Linux distribution: in this case there are no sources to define, no keys to download to verify the software… you just say somewhere which packages you want to install, and the configuration management will cooperate with the system to make your will come true. However, in the first incarnation of my PCM the lists of software were defined in hENC, which worked in general but was not flexible enough: since the information was stored in several independent data structures, it was impractical to iterate over it to install software efficiently, that is, without duplicating code. To improve on flexibility, I decided to move that information out of hENC and into JSON files. CFEngine can read JSON files directly into data structures called data containers, which are easier to iterate over.

Or maybe not. And it took days to understand how to overcome the limitations. But in the end I found a way. I have a JSON file that looks like this:

{
    "dev" : {
        "packages" : [
            "build-essential",
            "devscripts",
            "perl-doc",
            "meld",
            "git",
            "linux-headers-amd64",
            "aws-shell",
            "awscli",
            "pipenv"
        ],
        "trigger" : "install_dev_packages"
    },

    "editors" : {
        "packages" : [
            "emacs",
            "vim",
            "emacs-goodies-el",
            "elpa-markdown-mode",
            "yaml-mode"
        ],
        "trigger" : "install_editors_packages"
    },

…and so on. CFEngine will iterate over the software groups defined in the file (“dev”, “editors”…) and install the given list of packages if the corresponding trigger class is defined. And where would these trigger classes be defined, if at all? In hENC, where I can define them globally, per individual node, or anything in between. So far so good.
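
For illustration, here is a minimal sketch of that iteration under my own naming (the file name software.json and the bundle name are assumptions, not necessarily what my policies actually use), assuming the standard packages module for the platform:

bundle agent install_software_groups
{
  vars:
      # Read the JSON file into a data container and collect the group IDs.
      "groups"      data  => readjson("$(this.promise_dirname)/software.json");
      "ids"         slist => getindices("groups");
      # One plain list per group: the classic workaround for iterating
      # over lists nested inside a data container.
      "pkgs_$(ids)" slist => getvalues("groups[$(ids)][packages]");

  packages:
      # Install each group's packages only if its trigger class is defined.
      "$(pkgs_$(ids))"
        policy => "present",
        if     => "$(groups[$(ids)][trigger])";
}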

Bundled, non-default software

The solution above still left some ground to cover, though. If you read the article about the installation on the Lenovo, you may remember that, in order to make the laptop work, I had to install both a kernel and the Nvidia drivers from the Debian backports. Backports are special in two ways: first, you have to explicitly add the apt sources for them; second, you have to explicitly specify that, when installing package X, you want it to come from backports and not from other sources (because backports have a very low priority by default). This results in two deviations from the standard process that must be taken into account in the PCM policies: you have to add new sources, and you have to use different commands to install the software.
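
As a sketch of the second deviation (a hypothetical bundle, not my actual policy): where a normal installation is a plain packages promise, a backports installation must name the target release explicitly with apt’s -t option, for example:

bundle agent install_from_backports(pkgs)
{
  commands:
      # apt-get needs -t to pick packages from the low-priority
      # backports release. Not convergent as-is: a real policy would
      # only run this when the package is missing or outdated.
      "/usr/bin/apt-get -y -t $(inventory_lsb.codename)-backports install $(pkgs)";
}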

But that’s not the only case where one must add new sources. In fact, by default, the software sources created during the installation of Debian only provide the “main” branch of packages. Some software that I normally use comes from the “contrib” or, alas, the “non-free” branches. Those sources must be added explicitly. So, there we go: a standardised way to add software sources was needed. New JSON file:

{
    "backports" : {
        "sources" : [ "deb https://deb.debian.org/debian ${inventory_lsb.codename}-backports main" ],
        "trigger" : "enable_debian_backports"
    },

    "contrib" : {
        "sources" : [
            "deb http://deb.debian.org/debian ${inventory_lsb.codename} contrib",
            "deb http://deb.debian.org/debian ${inventory_lsb.codename}-updates main",
            "deb http://security.debian.org/debian-security ${inventory_lsb.codename}/updates contrib"
        ],
        "trigger" : "enable_debian_contrib"
    },

    "nonfree" : {
        "sources" : [
            "deb http://deb.debian.org/debian ${inventory_lsb.codename} non-free",
            "deb http://deb.debian.org/debian ${inventory_lsb.codename}-updates non-free",
            "deb http://security.debian.org/debian-security ${inventory_lsb.codename}/updates non-free"
        ],
        "trigger" : "enable_debian_nonfree"
    }
}

Same technique here: CFEngine will iterate over the source IDs (e.g. “backports”) and, for each of them, add a source file with the given sources if the corresponding trigger class is defined. And, of course, to install from backports I have written a specific CFEngine agent bundle.
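
A minimal sketch of that technique could look as follows (the file name sources.json and the bundle name are my assumptions; insert_lines, empty and if_repaired come from CFEngine’s standard library):

bundle agent manage_apt_sources
{
  vars:
      "repos"      data  => readjson("$(this.promise_dirname)/sources.json");
      "ids"        slist => getindices("repos");
      # Materialise each source list so it can be passed to insert_lines.
      "src_$(ids)" slist => getvalues("repos[$(ids)][sources]");

  files:
      # One .list file per enabled source ID, holding the given deb lines.
      "/etc/apt/sources.list.d/$(ids).list"
        create        => "true",
        edit_defaults => empty,
        edit_line     => insert_lines("@(src_$(ids))"),
        if            => "$(repos[$(ids)][trigger])",
        classes       => if_repaired("apt_sources_changed");

  commands:
      # Refresh the package indexes only when some source file changed.
      "/usr/bin/apt-get update"
        if => "apt_sources_changed";
}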

Software from additional repositories

The two sections above cover, more or less, any software that is distributed by Debian somehow and for which the public keys are already installed on the system. But what about fully external software (say, Vivaldi)? Well, more surprises were coming!

Full-circle software

Some software does the full circle and comes with sources, a downloadable key, and one or more packages to be installed from those sources. Examples are Vivaldi and VirtualBox. For those, a JSON file with the following information is enough:

{
    "vivaldi" : {
        "sources" : [ "deb http://repo.vivaldi.com/stable/deb/ stable main" ],
        "key": "https://repo.vivaldi.com/archive/linux_signing_key.pub",
        "packages" : [ "vivaldi-stable" ],
        "trigger" : "install_vivaldi"
    },

    "virtualbox" : {
        "sources" : ["deb [arch=amd64] https://download.virtualbox.org/virtualbox/debian ${inventory_lsb.codename} contrib"],
        "key" : "https://www.virtualbox.org/download/oracle_vbox_2016.asc",
        "package" : ["virtualbox-6.1"],
        "trigger" : "install_vbox"
    },

…and so on. And yes, I have written agent bundles to download the keys and add them to the system, something that you don’t need to do when you install bundled software.
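
A sketch of what such a bundle boils down to (hypothetical naming, not my actual code; in_shell is a standard library body):

bundle agent add_apt_key(url)
{
  commands:
      # Download the key and register it with apt. Not convergent as-is:
      # a real policy would first check that the key isn't already there.
      "/usr/bin/curl -fsSL $(url) | /usr/bin/apt-key add -"
        contain => in_shell;
}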

Stealth full-circle software

Some software is full circle, but you only get to discover it after you have installed the software according to the producer’s instructions. This software is usually installed this way:

  • you download a deb package
  • you install the package, and the package installs the software, a source and a key

However, after the package is installed it’s relatively easy to figure out where the key comes from, and very easy to see what the source contains. With that information, you can add the software to the same JSON file as in the previous case and you are done with the automation. For example, after installing Microsoft’s Visual Studio Code it becomes clear that, in order to have this software fully managed, you need to add this object to the JSON file:

    "vscode" : {
        "sources" : [ "deb [arch=amd64] http://packages.microsoft.com/repos/vscode stable main" ],
        "key" :  "https://packages.microsoft.com/keys/microsoft.asc",
        "packages" : [ "code" ],
        "trigger" : "install_vscode"
    }

Not too bad? Wait, there is worse stuff.

Bastard stealth half-circle software

Then you have software like Skype. It’s just like the stealth full-circle software above, with a notable difference: the key is not downloadable from anywhere (or, at least, you cannot infer its location from the installation); it is bundled with the Debian package and installed with the software. If you want to automate the installation of this kind of software, you need to extract the key, save it in a file along with the policies, and then distribute it to the clients, which will install it from the file instead of downloading it. You cannot reuse the same JSON file and agent bundle for this bastard kind of software: you have to implement yet another method, one that reads the information from a separate JSON file and installs the key when appropriate:

{
    "skype-stable" : {
        "sources" : [ "deb [arch=amd64] https://repo.skype.com/deb stable main" ],
        "key" : "lib/local/apt-keys/skypeforlinux.gpg",
        "packages" : [ "skypeforlinux" ],
        "trigger" : "install_skype"
    }
}
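
A sketch of that extra method (bundle name and exact paths are my assumptions): instead of downloading, the pre-extracted key is copied from the policy hub into apt’s trusted keyring directory, with remote_dcp and mog from the standard library:

bundle agent install_local_apt_key(keyfile)
{
  files:
      # Copy the key, distributed along with the policies, from the
      # policy hub, and make sure apt can read it.
      "/etc/apt/trusted.gpg.d/$(keyfile)"
        copy_from => remote_dcp("$(sys.masterdir)/lib/local/apt-keys/$(keyfile)", "$(sys.policy_hub)"),
        perms     => mog("0644", "root", "root");
}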

Ready-to-use packages

Sometimes software is distributed only as a Debian package, outside of any APT repository: the only way to install it is to download the package and install it manually. An example of this is Rambox. Nothing transcendental here, but it’s yet another damn different case to take into account, and one that requires separate CFEngine agent bundles.

Oh, by the way, when it comes specifically to Rambox, it has an installation bug on Debian where the file /opt/Rambox/chrome-sandbox is created with the wrong permissions. Nothing that CFEngine can’t fix, and the fix is easy, but it’s also a special case of a special case, which requires yet another specific bundle. If your head has just hit the desk, I understand: mine did, as well.
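
The fix itself is a one-promise bundle. The mode below is my assumption, based on the fact that Electron’s chrome-sandbox helper normally needs to be setuid root:

bundle agent fix_rambox_sandbox
{
  files:
      # The sandbox helper must be setuid root, which the Rambox
      # package fails to ensure on Debian.
      "/opt/Rambox/chrome-sandbox"
        perms => mog("4755", "root", "root"),
        if    => fileexists("/opt/Rambox/chrome-sandbox");
}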

Binary-only software

Yes, there is yet another case: software that is distributed in the form of a self-contained binary. An example of this is aws-vault: you are supposed to download a file, place it in a directory in your PATH (e.g. /usr/local/bin), ensure it has the correct permissions, and off you go. And, sure enough, I wrote a CFEngine agent bundle to deal with it.

It’s an easy case, but it’s not the only one. Enter Terraform, for example: the binary is zipped, so you have to uncompress it before it can be installed in the same way as a ready-to-use binary. In other words, I had to instrument the policies to handle the subcases where the binaries are compressed with zip, or gzip, or whatever: one subcase for each compression method. And yes, all this information is in yet another JSON file:

{
    "terraform" : {
        "url" : "https://releases.hashicorp.com/terraform/${henc.terraform_version}/terraform_${henc.terraform_version}_linux_amd64.zip",
        "version" : "${henc.terraform_version}",
        "install_dir" : "/usr/local/bin",
        "binary_name" : "terraform",
        "compressed" : "zip",
        "trigger" : "install_terraform"
    },

    "aws-vault" : {
        "url" : "https://github.com/99designs/aws-vault/releases/download/v${henc.awsvault_version}/aws-vault-linux-amd64",
        "version" : "${henc.awsvault_version}",
        "install_dir" : "/usr/local/bin",
        "binary_name" : "aws-vault",
        "compressed" : "no",
        "trigger" : "install_awsvault"
    }
}
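
For the uncompressed case, such a bundle could be as simple as the sketch below (hypothetical naming, not my actual policy); the zip subcase would add an unzip step between download and installation:

bundle agent install_plain_binary(name, url, dir)
{
  commands:
      # Download the binary only if it is not there yet. A real policy
      # would also compare versions before deciding to download.
      "/usr/bin/curl -fsSL -o $(dir)/$(name) $(url)"
        if => not(fileexists("$(dir)/$(name)"));

  files:
      # Make sure the installed binary is executable.
      "$(dir)/$(name)"
        perms => mog("0755", "root", "root"),
        if    => fileexists("$(dir)/$(name)");
}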

…and a little bit of everything

And, finally, you have cases where the bastard-o-meter goes off the scale. Docker is a clear case of this. If you want to install a specific version of Docker on your system, together with Docker Compose, and you follow the instructions, you’ll end up with essentially three pieces:

  • you’ll have an additional source with an additional key, from which you are supposed to install containerd.io, flat;
  • from the same source, you’ll install docker-ce-cli and docker-ce at the specific version you want to use (and a specific version means: yet another specific case to install this software and yet another agent bundle);
  • then, you’ll install docker-compose from a plain binary.

So you are mixing two of the things that you have just seen, and then adding some more. Wonderful…
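
For the version-pinned part, at least, the new packages promise type supports a version attribute, so a sketch could look like this ($(henc.docker_version) is a hypothetical hENC variable, in the same spirit as $(henc.terraform_version) above; docker-compose would go through the plain-binary bundle sketched earlier):

bundle agent install_docker_pinned
{
  packages:
      # containerd.io is installed "flat", i.e. at whatever version
      # the repository provides.
      "containerd.io"
        policy => "present";

      # The engine and the CLI are pinned to the desired version.
      "docker-ce"
        policy  => "present",
        version => "$(henc.docker_version)";

      "docker-ce-cli"
        policy  => "present",
        version => "$(henc.docker_version)";
}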

Conclusion

This concludes the tour of the different installation approaches that I had to implement in my policies, but it’s far from the end of the story. We didn’t deal with other approaches, like software distributed via snap, or as a Docker container, or via pip. Those would all need special consideration and specific policies to be installed. Not bad for a problem that was supposed to be trivial, is it?

I am still writing my CFEngine policies and I am far from done yet, but with the software installation part finally in place I am pretty confident that I can get to more fundamental tasks and complete them way quicker than this one.

The policies currently live in a private repository on GitHub. I want to write some privacy- and security-related policies before deciding whether I can make it public or not. Please bear with me.

Until next time, enjoy!
