In our team at RiksTV, the company I joined in March 2020, we use Python. I had never used Python before and I’m working as hard as I can to fill the gap.
During the Christmas break I assigned myself a small coding challenge, both to test what I have learned so far and to keep that new knowledge from washing away. I decided to share that code, and will continue sharing as I keep learning and whenever I make something that could be useful to more people than just myself. Head to github if you are interested.
A few weeks ago I found myself in need of a place where I could share public encryption keys with others for a side project of mine. As the adjective public implies, there is nothing secret about public keys: they can be shared in the open safely, so that was not a concern. The problem was finding a convenient way to do it. More precisely, I needed a place where I could share certain public keys with everyone, and where anyone could put their public keys to share them with me, and with me only.
In the end, I turned to AWS S3, as it is a natural place to look when it comes to file storage and sharing. But it took a lot of trial and error before I was actually able to find an appropriate configuration for the bucket. I also put some automation with terraform into the mix, both because I prefer to automate things that I may have to do several times, and because it turned out that I’ll have to bring this inbox of mine up and down as needed. The outcome is a terraform module that I have just published on github.
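To give an idea of the kind of configuration involved (the actual terraform module is on github), here is a minimal sketch of the bucket policy idea, expressed with the AWS CLI; the bucket name and the inbox/outbox prefixes are made up for illustration:

```bash
# Hypothetical bucket and prefixes: anyone can read the keys I publish
# under outbox/, anyone can drop a key under inbox/, but only the bucket
# owner can read what lands in inbox/.
aws s3api put-bucket-policy --bucket my-key-exchange --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadOutbox",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-key-exchange/outbox/*"
    },
    {
      "Sid": "PublicWriteInbox",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-key-exchange/inbox/*"
    }
  ]
}'
```

Note that on buckets with S3 Block Public Access enabled (the default on newer accounts) you would also have to relax those settings before a policy like this can take effect.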
TL;DR: I put together a Perl script that does name/address resolution from the perspective of the OS, instead of relying solely on the DNS like the dig or host commands do. If this makes sense and sounds useful, just go check out my resolve-pl repository on github. If it doesn’t fully make sense, then read on.
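If you just want the gist of the difference: tools like dig talk to the DNS directly, while applications on the system resolve names through NSS, honouring /etc/hosts and whatever else the hosts line in /etc/nsswitch.conf lists. You can see the two perspectives side by side (the hostname is a placeholder):

```bash
# Asks a DNS server directly; /etc/hosts and other NSS sources are ignored.
dig +short myhost.example.com

# Asks the OS resolver instead: /etc/hosts, DNS, LDAP, mDNS... whatever
# the "hosts" line in /etc/nsswitch.conf says, in that order.
getent hosts myhost.example.com
```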
Are you annoyed that there are no native Linux packages for the AWS CLI (deb, rpm…)? And, thus, no repositories? I am, a bit.
But it’s also true that the installation is not difficult at all, right? Well, yes, although if you want to install it somewhere other than the default location (e.g. your user’s home directory) and on more than one machine, you still have to do some work. But it’s not terrible, is it?
Then, one day, you find that one of the AWS CLI commands you need to use was added in a newer version than the one you are running, so you have to update the AWS CLI on all machines, and possibly rediscover the parameters you used during the initial installation. Are you happy with that?
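For reference, a manual, user-local install or update of AWS CLI v2 boils down to something like this (the target directories are just examples):

```bash
# Download and unpack the official AWS CLI v2 bundle...
curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" \
     -o /tmp/awscliv2.zip
unzip -q -o /tmp/awscliv2.zip -d /tmp

# ...then install it under $HOME; --update upgrades an existing
# installation in place.
/tmp/aws/install --install-dir "$HOME/.local/aws-cli" \
                 --bin-dir "$HOME/.local/bin" --update
```

Remembering those parameters on every machine, every time a new version comes out, is exactly the kind of toil worth scripting away.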
If you find it useful, I am happy. And if you want to support more Linux distributions or more operating systems (MacOS should be fairly easy, I expect), just go ahead and throw me a pull request. Enjoy!
If you are a Linux user and you find yourself needing to connect to a remote Windows server desktop, I hereby recommend that you give Remmina a try.
At my new job I have to log into Windows servers at times (eh, I know, it’s a cruel world…) through the RDP protocol. We have some tooling available that, given the name of a VM, will look up from various sources all the information necessary to connect to that machine and build a Remote Desktop Connection file (“.rdp”). The problem: Vinagre, the standard GNOME RDP client, doesn’t know what to do with that file, so I had to find another client.
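The nice thing is that Remmina can open those files directly, so once the tooling has produced one, connecting is a one-liner (the file path is of course an example):

```bash
# Open a generated Remote Desktop Connection file with Remmina.
remmina -c ~/connections/myserver.rdp
```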
Just a small bash snippet for those cases where, for example, a command returns AWS instance IDs but not the matching DNS names or IP addresses. The function id2dns, which you can add to your .bashrc file, will do the translation for you (a sketch follows the prerequisites below). In order to use the function you will:
ensure you have the aws CLI installed and functional;
ensure you have jq command available;
ensure you have valid AWS credentials set, so that your aws CLI will work.
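With those prerequisites in place, a minimal sketch of the idea looks like this; the real function may differ in details like output formatting and error handling:

```bash
# Translate EC2 instance IDs into the matching private DNS names and
# IP addresses. Usage: id2dns i-0123456789abcdef0 [i-...]
id2dns() {
  aws ec2 describe-instances --instance-ids "$@" \
    | jq -r '.Reservations[].Instances[]
             | "\(.InstanceId) \(.PrivateDnsName) \(.PrivateIpAddress)"'
}
```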
This is mostly a note to self. When I need an EC2 instance to run a quick test, it may be overly annoying to provision one through the web console, and it may feel a bit overkill to do that using large frameworks like terraform. Using the AWS command line is just fine, if you know what command to run with which parameters, and it pays off quickly if you use the same settings (AMI, subnet, security groups…) often, or if during the same test session you need to scrap and rebuild test instances a few times. Here is an example of how to do so with the AWS command line client.
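As a taste of what that looks like, here is a sketch; every ID and name below is a placeholder for your own AMI, subnet, security group and key pair:

```bash
# Launch a throwaway test instance; all IDs below are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --key-name my-test-key \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=quick-test}]' \
  --query 'Instances[0].InstanceId' --output text

# ...and scrap it when the test is done.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```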
Say you have access to two separate AWS accounts, and say you have EC2 instances running in a certain region and availability zone, e.g. eu-west-1a, in both accounts. Today I learned, to my great surprise, that despite the same name they may actually be two totally different locations. Intrigued? Read on!
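Spoiler, for the impatient: AWS maps availability zone names to physical locations independently in each account, so that everyone’s resources don’t pile up in whichever zone happens to be listed first. The stable identifier is the zone ID, which you can compare across accounts:

```bash
# Zone *names* (eu-west-1a...) are account-specific aliases; zone *IDs*
# (euw1-az1...) refer to the same physical location in every account.
aws ec2 describe-availability-zones --region eu-west-1 \
  --query 'AvailabilityZones[].[ZoneName,ZoneId]' --output table
```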
I’ll tell you a personal story, hoping that it will encourage as many of you as possible to dust off some old computer you have in storage to help find a cure against COVID-19 and, hopefully, many other diseases. If that sounds interesting, please read on.
I have published a small update to cf-keycrypt, so that it’s now easier to compile the tool on Debian systems and it’s compatible with CFEngine 3.15. You can find it here.
For those who don’t know the tool, I’ll try to explain what it is in a few words. The communication between CFEngine agents on clients and the CFEngine server process on a policy hub is encrypted. The key pairs used to encrypt/decrypt the communication are created on each node, usually at installation time or manually with a specific command. cf-keycrypt is a tool that takes advantage of those keys to encrypt and decrypt files, so that they are readable only on the nodes that are supposed to use them. The fact that the keys are created on the nodes themselves eliminates the need to distribute the keys securely.
cf-keycrypt was created years ago by Jon Henrik Bjørnstad, one of the founders of CFEngine (the company). The code has finally landed in the CFEngine core sources as cf-secret, but it’s not part of the current stable releases. I had a hard time trying to compile it, but I made it with good help from the CFEngine help mailing list. I decided to give the help back to the community, publishing my updates and opening a pull request to the original code. Until it’s merged, if it ever is, you can find my fork on my github.