apt-key is deprecated, part 2

In my first article about the deprecation of apt-key I illustrated a few ways of adding APT repository keys to your system without using the apt-key command. A good follow-up discussion to that article started on Twitter (thanks to Petru Ratiu). The topics we discussed were: the use of the signed-by clause and whether it really helps increase security; the use of package pinning to prevent third-party packages from taking over official ones; and the pollution of system directories.

In this post we dig a bit deeper into these topics and into how they help, or don’t help, to make your system more secure. A TL;DR for the impatient is included at the end of each section.
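
To give a concrete taste of the two mechanisms under discussion, here is a minimal sketch; the repository name, URL and keyring path are made up for illustration:

  # /etc/apt/sources.list.d/example.list -- signed-by ties the repository
  # to one specific keyring instead of the global trust store:
  deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://apt.example.com/debian bullseye main

  # /etc/apt/preferences.d/example -- pinning keeps the third-party
  # repository from overriding official packages:
  Package: *
  Pin: origin apt.example.com
  Pin-Priority: 100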

Continue reading

apt-key is deprecated, now what?

It’s only a few weeks since I upgraded one of my systems from Debian 10 to Debian 11. In fact, I apply a “Debian distribution quarantine”: when a new major version of the distribution is out, I usually wait until a “.1” or “.2” minor version before installing it, as I don’t have enough time to debug problems that may have escaped Debian’s QA process at the very first release.

One of the first things that caught my attention when I ran the apt-key command in Debian 11 (e.g. a simple apt-key list) was a warning:

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8))

“Deprecated” usually means that a certain functionality will eventually be removed from the system. In this case, Ubuntu users will be hit as early as 2022 with the release of 22.10 in October, since the next LTS (22.04), due in April, will be the last release to ship the command. Debian users will have more time, as the command won’t be available in the next major release of Debian (presumably Debian 12, which may be a couple of years away). This is written in clear letters in the man page:

apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.

So, what are you supposed to do now to manage the keys of third-party APT repositories?
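
The full answer is in the rest of the post, but the commonly recommended pattern looks roughly like this (the URL and file names are placeholders, not a real repository):

  # Fetch the repository key and store it in its own keyring file,
  # outside apt's global trust store:
  wget -qO- https://apt.example.com/key.asc \
    | gpg --dearmor \
    | sudo tee /usr/share/keyrings/example-archive-keyring.gpg > /dev/null

  # Reference the keyring explicitly, so the key only validates this repository:
  echo 'deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://apt.example.com/debian bullseye main' \
    | sudo tee /etc/apt/sources.list.d/example.list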

Continue reading

Automating installation/updates of the AWS CLI on Linux

Are you annoyed that there are no native Linux packages for the AWS CLI (deb, rpm…)? And, thus, no repositories? I am, a bit.

But it’s also true that the installation is not difficult at all, right? Well, yes; still, if you want to install it in a location different from the default (e.g. in your user’s home directory) and on more than one machine, you have some work to do. It’s not terrible, is it?

Then, one day, you find that one of the AWS CLI commands you need to use was added in a newer version than the one you are running, so you have to update the AWS CLI on all machines, and possibly rediscover the parameters you used during the initial installation. Are you happy with that?

I am not, and I decided to do something to automate the process: a Makefile, the simplest form of automation you can have on UNIX systems. Here you go: aws-cli-manager on GitHub.
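
In case you’re curious, the manual steps the Makefile wraps look more or less like this (a sketch based on AWS’s official installer; the install prefix and bin directory are example values, not aws-cli-manager’s actual defaults):

  # Download and unpack the official installer:
  curl -sSL https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip -o /tmp/awscliv2.zip
  unzip -qo /tmp/awscliv2.zip -d /tmp

  # First installation into a custom prefix:
  /tmp/aws/install -i "$HOME/.local/aws-cli" -b "$HOME/.local/bin"

  # Later updates must reuse the same parameters, plus --update:
  /tmp/aws/install -i "$HOME/.local/aws-cli" -b "$HOME/.local/bin" --update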

If you find it useful, I am happy. And if you want to add support for more Linux distributions or more operating systems (macOS should be fairly easy, I expect), just go ahead and throw me a pull request. Enjoy!

An update to cf-keycrypt

I have published a small update to cf-keycrypt, so that it’s now easier to compile the tool on Debian systems and it’s compatible with CFEngine 3.15. You can find it here.

For those who don’t know the tool, I’ll try to explain what it is in a few words. The communication between CFEngine agents on clients and the CFEngine server process on a policy hub is encrypted. The key pairs used to encrypt/decrypt the communication are created on each node, usually at installation time or manually with a specific command. cf-keycrypt is a tool that takes advantage of those keys to encrypt and decrypt files, so that they are readable only on the nodes that are supposed to use them. The fact that the keys are created on the nodes themselves eliminates the need to distribute the keys securely.
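
To make that concrete, here is an illustrative session; this is a sketch only, and the option names differ between cf-keycrypt versions and cf-secret, so check the tool’s built-in help before trying it:

  # On the hub: encrypt a file with the public key of the host "web01"
  # (illustrative invocation; the flags here are assumptions):
  cf-keycrypt -e /var/cfengine/ppkeys/root-web01.pub -o secrets.dat secrets.txt

  # On web01: decrypt with the host's own private key:
  cf-keycrypt -d /var/cfengine/ppkeys/localhost.priv -o secrets.txt secrets.dat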

cf-keycrypt was created years ago by Jon Henrik Bjørnstad, one of the founders of CFEngine (the company). The code has finally landed in the CFEngine core sources as cf-secret, but it’s not part of the current stable releases. I had a hard time trying to compile it, but I made it with good help from the CFEngine help mailing list. I decided to give that help back to the community by publishing my updates and opening a pull request to the original code. Until it’s merged, if it ever is, you can find my fork on my GitHub.

How to boot into a non-standard runlevel/target to rescue a Linux system

Recently, while testing a configuration of Linux on a Lenovo laptop, I messed up. I had rebooted the laptop and there were some leftovers around from an attempted installation of the proprietary Nvidia driver. The system booted fine and was functional, but those leftovers were enough to make the screen go blank. The fix is easy, if you can get into the system some other way: log in and remove anything related to the Nvidia driver. Unfortunately, the only way to log in was from the console, so I was “de facto” locked out.

My first attempt to get out of the mud was to force a reboot of the system in rescue mode. The system booted well, but after I typed the root password the boot process went a bit too far, loaded the infamous leftovers of the driver, and there we went again, with a blank screen.
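
The solution is in the rest of the post; the general technique on a systemd-based system, sketched below, is to pick a different target directly from the boot loader:

  # At the GRUB menu, press 'e' to edit the boot entry, append one of these
  # to the line starting with "linux", then boot with Ctrl-x:
  systemd.unit=multi-user.target   # full multi-user system, no graphical session
  systemd.unit=emergency.target    # minimal root shell, before most services start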

Continue reading

Down the rabbit hole: installing software

Preface

This article is about using configuration management to install software on your own computers (e.g. your laptops, or the computers used by your family and relatives), and about how easy it is to overlook the complexity of this task, whether you are a newbie or an expert.

If you already know about configuration management and how it makes sense to use it at a small scale like, again, your own computers or your family’s, you can skip to the section “New job, new setup”.

If you already know about configuration management and you are asking yourself why it should make sense to use it at a small scale, I suggest that you start one section earlier, at “Personal configuration management”.

If you are new to configuration management, or you wonder what could be difficult in installing software on a set of systems, I suggest that you read the whole article.

In any case, happy reading!

Continue reading

Installation of Debian GNU/Linux 10 “Buster” on a Lenovo ThinkPad P1 Gen2

Having recently started to work for Riks TV, I got a new laptop to install with my favourite Linux distribution: Debian. The laptop is a Lenovo ThinkPad P1 Gen2. It’s a very nice laptop, quite powerful and fast, with a large screen, and way lighter than the Lenovos I had owned before through my previous employers (Opera Software and Telenor Digital).

That’s all great; on the other hand, my previous history with Lenovo laptops has never been problem-free, and I was sure this one would be no exception. Alas, I was right. So I decided to write a few notes about the installation, for myself and for anyone else who wants to install Debian on this laptop. These won’t be detailed, step-by-step installation instructions, but more of a high-level checklist.

Continue reading

Exploring Docker overlay networks

In the past months I have made several attempts to explore Docker overlay networks, but there were a few pieces to set up before I could really experiment and… well, let’s say that I probably approached the problem the wrong way and wasted some time along the way. Not again: this time I set aside some time and worked through the whole job methodically, from start to finish. Nowadays there is little point in creating overlay networks by hand, except that it’s still a good learning experience, and a learning experience with Docker and networking was exactly what I was after.

When I started exploring multi-host Docker networks, Docker was quite different from what it is now. In particular, Docker Swarm didn’t exist yet, and a certain amount of manual work was required to create an overlay network so that containers located on different hosts could communicate.

Before Swarm, in order to set up an overlay network one needed to:

  • have at least two docker hosts to establish an overlay network;
  • have a supported key/value store available for the docker hosts to sync information;
  • configure the docker hosts to use the key/value store;
  • create an overlay network on one of the docker hosts; if everything worked well, the network would “propagate” to the other docker hosts that had registered with the key/value store;
  • create named containers on different hosts, then have them ping each other by name: if everything was done correctly, the pings would travel through the overlay network.

This looks like a simple high-level checklist. I’ll now describe the actual steps needed to get this working, leaving the details of my failures to the last section of this post.
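
To give a flavour of what’s ahead, the pre-Swarm procedure boiled down to commands like these (a sketch; the Consul address, interface name and subnet are placeholders for your own setup):

  # On each docker host: point the daemon at the shared key/value store
  # (legacy pre-Swarm flags):
  dockerd --cluster-store=consul://consul.example.com:8500 \
          --cluster-advertise=eth0:2376

  # On one host: create the overlay network; it propagates through the store:
  docker network create -d overlay --subnet 10.10.0.0/24 demo-net

  # On two different hosts: start named containers attached to the overlay:
  docker run -d --name c1 --net demo-net alpine sleep 3600   # host A
  docker run -d --name c2 --net demo-net alpine sleep 3600   # host B

  # From host A: verify that c1 can reach c2 by name:
  docker exec c1 ping -c 3 c2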

Continue reading

A quick guide to encrypting an external drive

I am guilty of not having considered encrypting my hard drives for far too long, I confess. As soon as I joined Telenor Digital (or, actually, early in the process but a bit too late…) I was commanded to encrypt my data and I couldn’t delay any more. To my surprise, the process turned out to be remarkably simple on my Debian jessie! Here is a short checklist for your convenience.
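
The heart of such a checklist is the standard LUKS workflow, sketched here; /dev/sdX1 is a placeholder, and luksFormat wipes the partition, so triple-check the device name:

  # Encrypt the partition, open it under a mapper name, create a filesystem:
  sudo cryptsetup luksFormat /dev/sdX1
  sudo cryptsetup luksOpen /dev/sdX1 extdrive
  sudo mkfs.ext4 /dev/mapper/extdrive

  # When done, close the mapping:
  sudo cryptsetup luksClose extdrive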

Continue reading

How I configure a docker host with CFEngine

After some lengthy busy times I’ve been able to resume my work on Docker. Last time I had played with some containers to create a Consul cluster, using three containers running on the same docker host: something you would never want to do in production.

And the reason why I was playing with a Consul cluster on docker is that you need a key/value store to play with overlay networks in Docker, and Consul is one of the supported stores. Besides, Consul is another technology I had wanted to play with from the first minute I heard of it.

To run an overlay network you need more than one Docker host, otherwise it’s pretty pointless. That suggested to me that it was time to automate the installation of a Docker host, so that I could put together a test lab quickly and also maintain it. And, as always, CFEngine was my friend. The following policy will not work out of the box for you, since it uses a number of my own libraries, but I’m sure you’ll get the idea.

Continue reading