In my first article about the deprecation of apt-key I illustrated a few ways of adding APT repository keys to your system without using the apt-key command. A good follow-up discussion to that article started on Twitter (thanks to Petru Ratiu). The topics we discussed were: the use of the signed-by clause and whether it really helps increase security; the use of package pinning to prevent third-party packages from taking over official packages; and the pollution of system directories.
In this post we dig a bit deeper into these topics and how they help, or don’t help, to make your system more secure. A TL;DR for the impatient is included at the end of each section.
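To give the pinning discussion something concrete to chew on, an apt preferences fragment along these lines (the origin name is illustrative, not a real repository) keeps a third-party repository from silently overriding official packages:

```
# /etc/apt/preferences.d/third-party-example
Package: *
Pin: origin "repo.example.com"
Pin-Priority: 100
```

With a priority of 100, versions from that origin lose against the default archive priority of 500, so they are only installed when you ask for them explicitly.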
It’s been only a few weeks since I upgraded one of my systems from Debian 10 to Debian 11. In fact, I usually apply a “Debian distribution quarantine”: when a new major version of the distribution is out, I wait until a “.1” or “.2” minor version before installing it, as I don’t have enough time to debug problems that may have escaped Debian’s QA process at the very first release.
One of the first things that caught my attention when I ran the apt-key command in Debian 11 (e.g. a simple apt-key list) was a warning:
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8))
“Deprecated” usually means that a certain functionality will eventually be removed from the system. In this case, Ubuntu users will be hit as early as 2022 with the release of 22.10 in October, since the next LTS (22.04, due in April) will be the last release to ship the command. Debian users will have more time, as the command won’t be available in the next major release of Debian (supposedly Debian 12, which may be a couple of years away). This is written in clear letters in the man page:
apt-key(8) will last be available in Debian 11 and Ubuntu 22.04.
So, what are you supposed to do now in order to manage the keys of third party APT repositories?
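The currently recommended approach, sketched below with an illustrative URL and file names (not a real repository), is to store each key in its own keyring file and reference it explicitly with signed-by:

```shell
# Download the repository key and store it in a dedicated keyring,
# dearmored to binary format (URL and names are illustrative)
curl -fsSL https://repo.example.com/apt/key.asc \
  | gpg --dearmor -o /usr/share/keyrings/example-archive-keyring.gpg

# Reference that keyring explicitly in the source entry, so the key
# can only sign this one repository instead of all of them
echo 'deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://repo.example.com/apt stable main' \
  > /etc/apt/sources.list.d/example.list
```

This way the key never ends up in the global trusted set, which is part of what the signed-by discussion below is about.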
Another post in the “note to myself” style.
For anyone who does bash scripting, the read command is a well-known tool. A usual task we use read for is to process the output of another command in a while loop, line by line, picking up a few fields and doing something with them. A stupid example:
sysctl -a 2> /dev/null | grep = | while read PARM DUMMY VALUE ; do
    echo "Value for $PARM is $VALUE"
done
That is: we read the output of sysctl, line by line, selecting only the lines that contain a = sign, then we read the name of the setting and its value into PARM and VALUE respectively, and do something with those values. So far so good.
Based on what we have just seen, it’s easy to expect that this:
echo foobar 42 | read PARM VALUE
echo "Value for $PARM is $VALUE"
would print “Value for foobar is 42“. But it doesn’t:
So, where did those values go? Did read work at all? In hindsight I can tell you: yes, it worked, but each command in a pipeline runs in its own subshell, so the variables vanished as soon as the subshell running read terminated. To both parse the values and use them you have to run read and the commands using the variables in the same subshell. This works:
echo foobar 42 | ( read PARM VALUE ; echo "Value for $PARM is $VALUE" )
echo foobar 42 | (
    read PARM VALUE
    echo "Value for $PARM is $VALUE"
)
This will print “Value for foobar is 42”, as expected.
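If you are in bash specifically (this is not POSIX sh), a here-string gives you another way out: it feeds read without creating a pipeline, so the variables survive in the current shell:

```shell
# A here-string redirects the string into read's stdin directly,
# so no subshell is involved and the variables stick around
read PARM VALUE <<< "foobar 42"
echo "Value for $PARM is $VALUE"   # prints: Value for foobar is 42
```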
Commands like the AWS CLI may return a list of values all in one line, where each item in the list is separated from the nearby items by spaces. Using a plain read command doesn’t really work: read will read all the values in one go into the variable. You need to change the delimiter that read uses to split the input. No need to pipe the output through Perl or other tools, read has got you covered with the -d option.
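A minimal demonstration of the idea, before the real-world example below (note the trailing space in the input, so that the last item is also followed by the delimiter):

```shell
# read -d ' ' consumes input up to the next space instead of the
# next newline, yielding one item per loop iteration
echo -n 'alpha beta gamma ' | while read -d ' ' ITEM ; do
  echo "item: $ITEM"
done
```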
In this example I get the list of the ARNs of all target groups in an AWS account, and then iterate over those ARNs to list all the instances in each target group. The output will also be saved into a file through the tee command:
aws elbv2 describe-target-groups \
  --query 'TargetGroups[].TargetGroupArn' \
  --output text | \
while read -d ' ' ARN ; do \
  echo -n "$ARN: " ; \
  aws elbv2 describe-target-health \
    --target-group-arn "$ARN" \
    --query 'TargetHealthDescriptions[].Target.Id' \
    --output text ; sleep 1 ; \
done | \
tee targets.txt
The output of this one-liner will be in the format:
ARN: instance_ID [instance_ID...]
Things to notice:
- the AWS CLI’s describe-target-groups command will list all target groups’ ARNs thanks to the --query option, fitting as many as possible on each line, according to the output buffer capacity; the output is piped into the while loop;
- the while loop uses read -d ' ' to split each line at spaces and save each item in the $ARN variable, one per cycle;
- the echo command prints the value of $ARN followed by a colon and a space, but will not output a newline sequence due to the -n option;
- the AWS CLI’s describe-target-health command will list all target IDs thanks to the --query option and print them out in a single line; it will also provide a newline sequence, so that the next loop iteration will start on a new line;
- the sleep 1 command slows down the loop, so that we don’t hammer the API to the point that they will rate limit us;
- finally, the tee command will duplicate the output of the while loop to both the standard output and a file.
TL;DR: I put together a Perl script that does name/address resolution from the perspective of the OS instead of relying solely on the DNS like the dig or host commands. If this makes sense and sounds useful, just go check out my resolve-pl repository on github. If it doesn’t fully make sense, then read on.
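The gist of the “from the perspective of the OS” part, for the shell-inclined: a lookup through the OS resolver goes through NSS and consults every configured source (/etc/hosts, DNS, and so on), which is also what getent exposes:

```shell
# getent asks the OS resolver via NSS, so entries in /etc/hosts are
# found too, while dig/host would query DNS servers only
getent hosts localhost
```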
Are you annoyed that there are no native Linux packages for the AWS CLI (deb, rpm…)? And, thus, no repositories? I am, a bit.
But it’s also true that the installation is not difficult at all, right? Well, yes, although if you want to install it in a location different from the default (e.g. your user’s home directory) and on more than one machine, you still have to do some work. But it’s not terrible, is it?
Then, one day, you find that one of the AWS CLI commands you need to use was added in a newer version than the one you are running, so you have to update the AWS CLI on all machines, and possibly rediscover the parameters you used during the initial installation. Are you happy with that?
I am not, and I decided to do something to automate the process: a Makefile, the simplest form of automation you can have on UNIX systems. Here you go: aws-cli-manager on github.
If you find it useful, I am happy. And if you want to support more Linux distributions or more operating systems (MacOS should be fairly easy, I expect), just go ahead and throw me a pull request. Enjoy!
If you are a Linux user and you find yourself in need to connect to a remote Windows server desktop, I hereby recommend that you give Remmina a try.
At my new job I have to log into Windows servers at times (eh, I know, it’s a cruel world…) through the RDP protocol. We have some tooling available that, given the name of a VM, will look up from various sources all the information necessary to connect to that machine and build a Remote Desktop Connection file (“.rdp”). The problem: Vinagre, the standard GNOME RDP client, doesn’t know what to do with that file, so I had to find another client.
Just a small bash snippet for those cases where, for example, a command returns AWS instance IDs but not the matching DNS names or IP addresses. The function id2dns, which you can add to your .bashrc file, will do the translation for you. In order to use the function you need to:
- ensure you have the aws CLI installed and functional;
- ensure you have the jq command available;
- ensure you have valid AWS credentials set, so that your aws CLI will work.
Update 2020-08-14: jq not needed any more
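The function itself isn’t reproduced here, but a minimal sketch of the idea could look like the following; the query expression and the output fields are my assumptions for illustration, not the original code, and nothing is called until you invoke the function:

```shell
# Hypothetical sketch of an id2dns-style function: translate EC2
# instance IDs into DNS names/addresses via the aws CLI.
# Query and fields are illustrative assumptions, not the post's code.
id2dns () {
  local ID
  for ID in "$@" ; do
    aws ec2 describe-instances --instance-ids "$ID" \
      --query 'Reservations[].Instances[].[InstanceId,PrivateDnsName,PrivateIpAddress]' \
      --output text
  done
}
```

Usage would then be e.g. `id2dns i-0123456789abcdef0`, printing one line per instance.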
I have published a small update to cf-keycrypt, so that it’s now easier to compile the tool on Debian systems and it’s compatible with CFEngine 3.15. You can find it here.
For those who don’t know the tool, I’ll try to explain what it is in a few words. The communication between CFEngine agents on clients and the CFEngine server process on a policy hub is encrypted. The key pairs used to encrypt/decrypt the communication are created on each node, usually at installation time or manually with a specific command. cf-keycrypt is a tool that takes advantage of those keys to encrypt and decrypt files, so that they are readable only on the nodes that are supposed to use them. The fact that the keys are created on the nodes themselves eliminates the need to distribute the keys securely.
cf-keycrypt was created years ago by Jon Henrik Bjørnstad, one of the founders of CFEngine (the company). The code has finally landed in the CFEngine core sources as cf-secret, but it’s not part of the current stable releases. I had a hard time trying to compile it, but I made it with good help from the CFEngine help mailing list. I decided to give the help back to the community, publishing my updates and opening a pull request to the original code. Until it’s merged, if it ever is, you can find my fork on my github.
Recently, while testing a configuration of Linux on a Lenovo laptop, I messed up. I had rebooted the laptop and there were some leftovers around from an attempted installation of the proprietary Nvidia driver. The system booted fine and was functional, but those leftovers were enough to make the screen go blank. The fix is easy, if you can enter the system in some other way: log in and remove anything related to the Nvidia driver. But unfortunately the only way to log in was from the console, so I was “de facto” locked out.
The first attempt to get out of the mud was to force a reboot of the system into rescue mode. The system booted well, but after I typed the root password the boot process went a bit too far, loaded the infamous leftovers of the driver and here we go again, with a blank screen.