Invalid signatures from the Dropbox repository

When updating the packages on my Debian system, I get this error every now and then, and it’s really annoying:

W: GPG error: http://linux.dropbox.com/debian bookworm Release: The following signatures were invalid: BADSIG FC918B335044912E Dropbox Automatic Signing Key <linux@dropbox.com>
E: The repository 'http://linux.dropbox.com/debian bookworm Release' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

An internet search about the problem will throw a lot of different solutions at you. Some of them are quite heavy-handed. Some of them just don’t work. Most of them don’t really explain what you are doing when you run the commands you run. I checked a bunch of them and found my own solution.

My understanding of the problem is that package lists from the Dropbox repository get corrupted for some reason, and this happens way more often with that repo than with any other I am using. When that happens, there may be something left over in /var/lib/apt/lists/partial, and it’s better to download the package information for that repository from scratch.

As root (e.g. after running sudo -s or sudo -i), run

find /var/lib/apt/lists -type f -name \*dropbox\* -print | xargs rm

This command will find all files (-type f) whose name contains the string “dropbox” (-name \*dropbox\*) under the directory /var/lib/apt/lists and its subdirectories, and then pass them as arguments to the rm command (xargs rm), hence deleting them.
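
If you prefer, you can let find do the deletion itself; the command below should be equivalent to the one above, with the small advantage that rm is not invoked at all when there is nothing to delete:

# delete the Dropbox package lists directly from find
find /var/lib/apt/lists -type f -name '*dropbox*' -delete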

When that is done, run apt update again and, hopefully, the error will be gone (well, unless the package information gets corrupted once again, that is…).

HTH!

Welcome bookworm! And how to continue running apt-get update

Debian 12 “bookworm” was officially released two days ago, yay!

And just like me, this morning your attempt to update the apt package cache may have been met by an odd notification similar to this one:

E: Repository 'http://deb.debian.org/debian testing InRelease' changed its 'Codename' value from 'bookworm' to 'trixie'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

Why it’s happening

I have this source configured in apt:

deb-src http://deb.debian.org/debian/ testing main contrib non-free

The source refers to the distribution “testing”. The codename for “testing” is the same as that of the next Debian release: before the release of Debian 12 it was “bookworm”, and now that bookworm is released it has switched to “trixie”. In my particular case this is more or less harmless, as a deb-src line is not going to trigger the installation of any package. But if I were using “testing” or “stable” in a regular deb line in my apt sources, that would make a difference: I might unintentionally install packages from Debian 12 on my Debian 11 system and make a mess of it.

The error and the notification are there to warn you that there was a codename change, and that you should consider whether this is expected and you actually want to continue, or whether you would rather lock your sources to the current codename instead (that would be “bullseye” in Debian 11’s case).

What to do

Lock your package sources to the correct codename. E.g. if you are running Debian 11 and you have “stable” in your apt sources for the official Debian repositories, replace “stable” with “bullseye”. Note that for third-party repos this may be different; check with the vendor for instructions.
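
For example, assuming the classic one-line format in /etc/apt/sources.list (the components listed here are just an example), the change would look like this:

# before
deb http://deb.debian.org/debian/ stable main contrib non-free
# after: locked to the Debian 11 codename
deb http://deb.debian.org/debian/ bullseye main contrib non-free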

If, like in my case, the change is harmless, you need to let apt know that you approve it. That’s what we’ll see below in detail.

Accepting the codename change

The notification points to apt-secure. If you are like me, the next command you ran was man apt-secure. That helped me find out more about why this was happening, but not with the solution, alas:

INFORMATION CHANGES
       A Release file contains beside the checksums for the files in
       the repository also general information about the repository
       like the origin, codename or version number of the release.

       This information is shown in various places so a repository
       owner should always ensure correctness. Further more user
       configuration like apt_preferences(5) can depend and make use
       of this information. Since version 1.5 the user must therefore
       explicitly confirm changes to signal that the user is
       sufficiently prepared e.g. for the new major release of the
       distribution shipped in the repository (as e.g. indicated by
       the codename).

This is nice. Except that it doesn’t mention how one is supposed to explicitly confirm changes.

Some more digging and the man page of apt-get provided the solution:

       --allow-releaseinfo-change
           Allow the update command to continue downloading data from
           a repository which changed its information of the release
           contained in the repository indicating e.g a new major
           release. APT will fail at the update command for such
           repositories until the change is confirmed to ensure the
           user is prepared for the change. See also apt-secure(8) for
           details on the concept and configuration.

           Specialist options (--allow-releaseinfo-change-field) exist
           to allow changes only for certain fields like origin,
           label, codename, suite, version and defaultpin. See also
           apt_preferences(5). Configuration Item:
           Acquire::AllowReleaseInfoChange.
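
In practice that translates to one of these invocations (the codename-only variant is the specialist option mentioned in the excerpt above):

apt-get update --allow-releaseinfo-change
# or, to accept only a change of the Codename field:
apt-get update --allow-releaseinfo-change-codename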

Running apt-get update --allow-releaseinfo-change returned the notification part again (the message prefixed with “N:”) but not the error (“E:”). Subsequent runs of apt/apt-get ran as usual. Problem solved 🙂

apt-key is deprecated, part 2

In my first article about the deprecation of apt-key I illustrated a few ways of adding APT repository keys to your system without using the apt-key command. A good follow-up discussion to that article started on Twitter (thanks to Petru Ratiu). The topics we discussed were: the use of the signed-by clause and whether it really helps to increase security; the use of package pinning to keep third-party packages from taking over official packages; and the pollution of system directories.

In this post we dig a bit deeper into these topics and how they help, or don’t help, making your system more secure. A TL;DR for the impatient is included at the end of each section.
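
To give a concrete flavour of the pinning part before we start: a minimal stanza in a file under /etc/apt/preferences.d, like the one below, keeps packages from a given third-party origin at a low priority, so that they cannot silently take over official packages (the hostname and priority are purely illustrative):

Package: *
Pin: origin "pkgs.example.com"
Pin-Priority: 100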

Continue reading

apt-key is deprecated, now what?

It’s only a few weeks since I upgraded one of my systems from Debian 10 to Debian 11. In fact, I usually apply a “Debian distribution quarantine”: when a new major version of the distribution is out, I wait until a “.1” or “.2” minor release before installing it, as I don’t have enough time to debug problems that may have escaped Debian’s QA process at the very first release.

One of the first things that caught my attention when I ran the apt-key command in Debian 11 (e.g. a simple apt-key list) was a warning:

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8))

“Deprecated” usually means that a certain functionality will eventually be removed from the system. In this case, Ubuntu users will be hit as early as 2022: the command will ship for the last time in the next LTS (22.04, to be released in April) and will be gone with the release of 22.10 in October. Debian users will have more time, as the command won’t be available in the next major release of Debian (supposedly Debian 12, which may be a couple of years away). This is written in clear letters in the man page:

apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.

So, what are you supposed to do now in order to manage the keys of third party APT repositories?
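
One of the options (explored in more detail in the full article) is to give each repository its own keyring file and reference it explicitly with a signed-by option; a minimal sketch, as root, with a made-up repository and key URL:

# store the repository key, dearmored, in its own keyring file
wget -qO- https://pkgs.example.com/archive-key.asc | gpg --dearmor > /usr/share/keyrings/example-archive-keyring.gpg

# reference that keyring explicitly in the source entry
echo "deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://pkgs.example.com/debian bullseye main" > /etc/apt/sources.list.d/example.list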

Continue reading

Automating installation/updates of the AWS CLI on Linux

Are you annoyed that there are no native Linux packages for the AWS CLI (deb, rpm…)? And, thus, no repositories? I am, a bit.

But it’s also true that the installation is not difficult at all, right? Well, yes, although if you want to install it somewhere other than the default locations (e.g. in your user’s home directory), and on more than one machine, you still have to do some work; but it’s not terrible, is it?

Then, one day, you find that one of the AWS CLI commands you need to use was added in a newer version than the one you are running, so you have to update the AWS CLI on all machines, and possibly rediscover the parameters you used during the initial installation. Are you happy with that?
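
For context, a manual installation of the AWS CLI v2 into a user’s home directory boils down to something like this (the directories are just an example, and you should check AWS’s documentation for the current installer URL; the Makefile mentioned below automates roughly this kind of invocation):

curl -o /tmp/awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip -q -d /tmp /tmp/awscliv2.zip
/tmp/aws/install --install-dir "$HOME/.local/aws-cli" --bin-dir "$HOME/.local/bin" --update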

I am not, and I decided to do something to automate the process: a Makefile, the simplest form of automation you can have on UNIX systems. Here you go: aws-cli-manager on github.

If you find it useful, I am happy. And if you want to support more Linux distributions or more operating systems (MacOS should be fairly easy, I expect), just go ahead and throw me a pull request. Enjoy!

An update to cf-keycrypt

I have published a small update to cf-keycrypt, so that it’s now easier to compile the tool on Debian systems and it’s compatible with CFEngine 3.15. You can find it here.

For those who don’t know the tool, I’ll try to explain what it is in a few words. The communication between CFEngine agents on clients and the CFEngine server process on a policy hub is encrypted. The key pairs used to encrypt/decrypt the communication are created on each node, usually at installation time or manually with a specific command. cf-keycrypt is a tool that takes advantage of those keys to encrypt and decrypt files, so that they are readable only on the nodes that are supposed to use them. The fact that the keys are created on the nodes themselves eliminates the need to distribute the keys securely.

cf-keycrypt was created years ago by Jon Henrik Bjørnstad, one of the founders of CFEngine (the company). The code has finally landed in the CFEngine core sources as cf-secret, but it’s not part of the current stable releases. I had a hard time trying to compile it, but I managed with good help from the CFEngine help mailing list. I decided to give that help back to the community by publishing my updates and opening a pull request against the original code. Until it’s merged, if it ever is, you can find my fork on my GitHub.

How to boot into a non-standard runlevel/target to rescue a Linux system

Recently, while testing a configuration of Linux on a Lenovo laptop, I messed up. I had rebooted the laptop and there were some leftovers around from an attempted installation of the proprietary Nvidia driver. The system booted fine and was functional, but those leftovers were enough to make the screen go blank. The fix is easy, if you can enter the system in some other way: log in and remove anything related to the Nvidia driver. But unfortunately the only way to log in was from the console, so I was “de facto” locked out.

My first attempt to get out of the mud was to force a reboot of the system into rescue mode. The system booted well, but after I typed the root password the boot process went a bit too far, loaded the infamous leftovers of the driver, and there we go again with a blank screen.
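
The way out, described in the rest of this article, is booting into a non-standard target; on a systemd-based distribution the general mechanism is to edit the kernel command line from the GRUB menu (press “e” on the boot entry) and append a systemd.unit parameter naming the target you want, for example:

# the most minimal environment systemd can give you:
systemd.unit=emergency.target
# or a full multi-user system without the graphical session:
systemd.unit=multi-user.target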

Continue reading

Down the rabbit hole: installing software

Preface

This article is about using configuration management to install software on your own computers (e.g. your laptops, or the computers used by your family and relatives) and how the complexity of this task is easy to overlook, no matter if you are a newbie or an expert.

If you already know about configuration management and how it makes sense to use it at a small scale like, again, your own computers or your family’s, you can just skip to the section “New job, new setup”.

If you already know about configuration management and you are asking yourself why it should make sense to use it at a small scale, I suggest that you start a section earlier, at “Personal configuration management”.

If you are new to configuration management, or you wonder what could be difficult in installing software on a set of systems, I suggest that you read the whole article.

In any case, happy reading!

Continue reading

Installation of Debian GNU/Linux 10 “Buster” on a Lenovo ThinkPad P1 Gen2

Having recently started to work for Riks TV, I got a new laptop to install with my favourite Linux distribution: Debian. The laptop is a Lenovo ThinkPad P1 Gen2. It’s a very nice laptop, quite powerful and fast, with a large screen and way lighter than the Lenovos I have owned before through my previous employers (Opera Software and Telenor Digital).

That’s all great, but on the other hand my previous history with Lenovo laptops has never been problem-free, and I was sure this one would be no exception. Alas, I was right. So I decided to write a few notes about the installation, for myself and for anyone who wants to install Debian on this laptop. These won’t be detailed, walk-through installation instructions, but more of a high-level checklist.

Continue reading

Exploring Docker overlay networks

In the past months I have made several attempts to explore Docker overlay networks, but there were a few pieces to set up before I could really experiment and… well, let’s say that I probably approached the problem the wrong way and wasted some time along the way. Not this time: I set aside some time and worked through the whole job, from start to finish. Nowadays there is little point in creating overlay networks by hand, except that it’s still a good learning experience, and a learning experience with Docker and networking was exactly what I was after.

When I started exploring multi-host Docker networks, Docker was quite different from what it is now. In particular, Docker Swarm didn’t exist yet, and a certain amount of manual work was required to create an overlay network so that containers located on different hosts could communicate.

Before Swarm, in order to set up an overlay network one needed to:

  • have at least two Docker hosts to establish the overlay network between;
  • have a supported key/value store available for the Docker hosts to sync information through;
  • configure the Docker hosts to use that key/value store;
  • create an overlay network on one of the Docker hosts; if everything worked well, the network would “propagate” to the other Docker hosts that had registered with the key/value store;
  • create named containers on different hosts, then try to ping one from the other using those names: if everything was done correctly, the containers would be reachable through the overlay network.

This looks like a simple, high-level checklist. I’ll now describe the actual steps needed to get this working, leaving the details of my failures to the last section of this post.
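
For a flavour of what that meant in practice, here is a condensed sketch of those steps as they looked on pre-Swarm Docker (hostnames, addresses and network names are made up, Consul is just one of the supported key/value stores, and the --cluster-store daemon options have since been removed from modern Docker releases):

# on one node: run a key/value store, e.g. Consul
docker run -d --name consul -p 8500:8500 consul agent -server -bootstrap -client 0.0.0.0

# on every Docker host: point the daemon at the key/value store
dockerd --cluster-store=consul://kvstore.example.com:8500 --cluster-advertise=eth0:2376

# on one of the hosts: create the overlay network; it propagates through the key/value store
docker network create -d overlay --subnet=10.0.9.0/24 demo-overlay

# on host1: start a named container attached to the overlay network
docker run -d --name web --net demo-overlay nginx

# on host2: reach that container by name across the overlay
docker run --rm --net demo-overlay busybox ping -c 3 web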

Continue reading