Five DNS client tools, and how to use them

“Everything is a Freaking DNS problem”, as Kris Buytaert often puts it. Debugging any distributed system can be a daunting task in general, and DNS is no exception. But even debugging an internal DNS service, which won’t be nearly as distributed as the global domain name system, may turn out to be an unpleasant experience: think Kubernetes and CoreDNS, for example.

Debugging DNS-related problems in containers running in Kubernetes can indeed be a challenge, in that containers running in a cluster may be based on completely different Linux images, each one sporting a different DNS client, if any. In those cases, it’s good to have an idea of how to use whatever client you happen to find on those containers, or how to install one yourself. Fear not, I have prepared an outline, just for you!

nslookup, the oldie but goldie

nslookup is perhaps the first generation of DNS query tools, and it comes from the BIND DNS server project. It can be used in both interactive and non-interactive mode. In non-interactive mode you make a query directly on the command line, you get an answer, and the command exits:

$ nslookup www.google.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
Name:	www.google.com
Address: 142.250.74.100
Name:	www.google.com
Address: 2a00:1450:400f:80c::2004

By default, nslookup uses the name servers that are configured on your system. You can use a different one by specifying it on the command line as the second argument:

$ nslookup www.google.com 8.8.8.8
Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
Name:	www.google.com
Address: 216.58.211.4
Name:	www.google.com
Address: 2a00:1450:400f:801::2004

If you run nslookup without arguments, you enter interactive mode, in which you can run several queries in a row and also tweak how the queries are performed:

$ nslookup
> www.google.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
Name:	www.google.com
Address: 142.250.74.132
Name:	www.google.com
Address: 2a00:1450:400f:803::2004
> www.facebook.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
www.facebook.com	canonical name = star-mini.c10r.facebook.com.
Name:	star-mini.c10r.facebook.com
Address: 31.13.72.36
Name:	star-mini.c10r.facebook.com
Address: 2a03:2880:f10a:83:face:b00c:0:25de
> set querytype=mx
> gmail.com
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
gmail.com	mail exchanger = 10 alt1.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 20 alt2.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 40 alt4.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 30 alt3.gmail-smtp-in.l.google.com.
gmail.com	mail exchanger = 5 gmail-smtp-in.l.google.com.

Authoritative answers can be found from:
> 

In the example above, we query the DNS server for the addresses of www.google.com and www.facebook.com. Then we switch the query type to MX (mail exchanger) and check which servers handle email for the gmail.com domain.
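Another useful interactive command is server, which switches the name server used for the subsequent queries (and exit leaves interactive mode). For example, switching to Google’s public resolver (the addresses in the answers will of course vary):

$ nslookup
> server 8.8.8.8
Default server: 8.8.8.8
Address: 8.8.8.8#53
> www.google.com
Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
Name:	www.google.com
Address: 216.58.211.4
Name:	www.google.com
Address: 2a00:1450:400f:801::2004
> exit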

This should be enough to get you going; see the nslookup man page for more information.

host, nslookup’s younger brother

host is the second generation of DNS query tools from the BIND project. Its basic usage is:

$ host www.google.com
www.google.com has address 216.58.207.228
www.google.com has IPv6 address 2a00:1450:400f:80c::2004

As with nslookup, you can specify the DNS server that should resolve your query as the second argument of the command:

$ host www.google.com 8.8.8.8
Using domain server:
Name: 8.8.8.8
Address: 8.8.8.8#53
Aliases: 

www.google.com has address 142.250.74.100
www.google.com has IPv6 address 2a00:1450:400f:80b::2004

And you can query different record types as well, e.g. MX:

$ host -t mx gmail.com
gmail.com mail is handled by 10 alt1.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 20 alt2.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 40 alt4.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 30 alt3.gmail-smtp-in.l.google.com.
gmail.com mail is handled by 5 gmail-smtp-in.l.google.com.

host has no interactive mode, but that doesn’t mean you can’t tweak your queries: a number of command-line options are there to help you.
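For instance, -t selects the record type just like we did with MX above, and -v produces more verbose, dig-style output. A quick example with the NS records of gmail.com (the exact set of servers may of course change over time):

$ host -t ns gmail.com
gmail.com name server ns1.google.com.
gmail.com name server ns2.google.com.
gmail.com name server ns3.google.com.
gmail.com name server ns4.google.com.

See the host man page for the full list of options.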

dig, the Swiss Army knife

dig is the third generation of DNS query tools from the BIND project. It’s very powerful, in that it reports a lot of data about your queries and you can fine-tune it in all possible ways. At the same time, its default output format is very verbose, which makes it quite confusing at first.

Let’s query www.google.com once again, using dig:

$ dig www.google.com

; <<>> DiG 9.16.44-Debian <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9932
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
;; QUESTION SECTION:
;www.google.com.			IN	A

;; ANSWER SECTION:
www.google.com.		285	IN	A	142.250.74.100

;; Query time: 4 msec
;; SERVER: 1.1.1.1#53(1.1.1.1)
;; WHEN: Sun Sep 24 22:56:36 CEST 2023
;; MSG SIZE  rcvd: 59

Quite chatty, as you can see. You can easily make it less chatty though:

$ dig +short www.google.com
142.250.74.68

Here you see that only the IPv4 address was reported, but we know from previous examples that www.google.com also has IPv6 addresses, so why aren’t they displayed?

By default, dig resolves names to addresses by querying A records, and addresses to names by querying PTR records. DNS names are associated with their IPv6 addresses in AAAA records, and that’s what you need to query in order to resolve those. The two command lines below are equivalent:

$ dig +short -t AAAA www.google.com
2a00:1450:400f:804::2004
$ dig +short www.google.com AAAA
2a00:1450:400f:804::2004
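And since we mentioned PTR records: the -x option is a handy shortcut for reverse lookups, so you don’t have to build the in-addr.arpa name yourself. For example, resolving the address of Google’s public DNS server back to a name:

$ dig +short -x 8.8.8.8
dns.google.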

If you want to use a DNS server other than the default, specify it on the command line, prefixed by @:

$ dig +short @8.8.8.8 www.google.com AAAA
2a00:1450:400f:803::2004

This is just a brief introduction, but I can’t just leave you to the man page for dig: it’s so large and complete that it may feel as daunting as the DNS problems you are trying to debug. In that case, have a look at Julia Evans’ comics about dig and how to read dig output.

Finally, remember that you can use a .digrc file to set your default options, instead of specifying them on the command line every time like I just did (although you may not have that luxury when debugging a problem inside a container). Check the man page for details.
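For example, a ~/.digrc containing just the line below makes dig print only the answer section by default, without having to type the options every time:

+noall +answer

With that in place, a plain dig www.google.com prints just something like:

www.google.com.		285	IN	A	142.250.74.100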

getent, back to basics

getent is probably the oldest tool to offer name resolution capabilities. I don’t have any proof to support my claim, but it’s actually the absence of any historical information from both the man page and the source code that makes me believe that it has been around forever.

Whatever its birth date, getent is also a different kind of beast compared to the three tools we have seen so far. In fact, while nslookup, host, and dig are specialised in DNS only, getent is a more general tool that can be used to query several system databases, e.g. the password file:

$ getent passwd root
root:x:0:0:root:/root:/bin/bash

getent is also different in the way it does name resolution. In fact, getent leverages the C library directly and resolves names according to the configuration in /etc/nsswitch.conf. Explaining the Name Service Switch functionality is definitely out of scope here; suffice it to say that, depending on how the functionality is configured, getent will return not only names resolved via DNS, but also names resolved through the hosts file or the .local names on your home network. You need to keep that in mind in case you are querying a name that is registered in both DNS and the hosts file, for example.
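To give you an idea, the hosts line of /etc/nsswitch.conf on a typical Debian-flavoured desktop looks more or less like this (the exact list of sources varies from system to system):

hosts:          files mdns4_minimal [NOTFOUND=return] dns

Here files means the hosts file, mdns4_minimal takes care of the .local names, and dns is, well, DNS; name resolution goes through those sources in that order.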

But enough talking! So, how does one resolve a name with getent?

$ getent hosts www.google.com
2a00:1450:400f:80c::2004 www.google.com

OK, that’s only one address, and an IPv6 one at that. Any way around that? Of course!

$ getent ahosts www.google.com
2a00:1450:400f:80a::2004 STREAM www.google.com
2a00:1450:400f:80a::2004 DGRAM  
2a00:1450:400f:80a::2004 RAW    
216.58.207.228  STREAM 
216.58.207.228  DGRAM  
216.58.207.228  RAW

A bit verbose, but you get both IPv4 and IPv6. What if you only want one of the two?

$ getent ahostsv4 www.google.com
216.58.207.228  STREAM www.google.com
216.58.207.228  DGRAM  
216.58.207.228  RAW    
$ getent ahostsv6 www.google.com
2a00:1450:400f:80a::2004 STREAM www.google.com
2a00:1450:400f:80a::2004 DGRAM  
2a00:1450:400f:80a::2004 RAW

getent also allows for resolving more than one name with a single call:

$ getent hosts www.google.com www.facebook.com
2a00:1450:400f:80a::2004 www.google.com
2a03:2880:f10a:83:face:b00c:0:25de star-mini.c10r.facebook.com www.facebook.com

What if you want to query other DNS record types besides doing name resolution, or use a different name server than the one configured in the system? You can’t. getent is part of the C library’s toolset and uses library functions (e.g. gethostbyname, gethostbyaddr, or getaddrinfo) to query information, and those functions neither cover other record types nor let you pick a name server.

getent is small and lightweight, so it may be present even in lightweight base images, unless their creators really went hard on optimization. It’s worth knowing the basics, just in case it’s the only DNS query tool you have at hand. See the man page for more information.

resolve, the hidden perl

resolve is a Perl script that I wrote when I didn’t know about getent. The functionality is the same, in that it uses the C library under the hood to do name resolution, but I believe it provides a more consistent and complete output than getent. An example:

$ resolve www.google.com www.facebook.com
www.google.com ipv6 2a00:1450:400f:80c::2004
www.google.com ipv4 172.217.21.164
www.facebook.com alias star-mini.c10r.facebook.com
www.facebook.com ipv6 2a03:2880:f10a:83:face:b00c:0:25de
www.facebook.com ipv4 31.13.72.36

Just like getent hosts, resolve can look up more than one name at a time. Unlike getent, it clearly marks IPv4 and IPv6 addresses, and it clearly reports aliases/CNAMEs, too.
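If you are curious about what “leveraging the C library” actually looks like, here is a minimal sketch of the core idea in Perl (just an illustration, not the actual code of resolve, and without the alias handling), using getaddrinfo and getnameinfo from the core Socket module:

#!/usr/bin/perl
use strict;
use warnings;
use Socket qw(getaddrinfo getnameinfo SOCK_STREAM AF_INET6 NI_NUMERICHOST);

# Resolve every name on the command line through the OS resolver
# (getaddrinfo), the same mechanism used by getent hosts.
for my $name (@ARGV) {
    my ($err, @results) = getaddrinfo($name, undef, { socktype => SOCK_STREAM });
    if ($err) {
        warn "$name: $err\n";
        next;
    }
    for my $res (@results) {
        # Turn the packed socket address back into a printable IP.
        my ($nierr, $ip) = getnameinfo($res->{addr}, NI_NUMERICHOST);
        next if $nierr;
        my $family = $res->{family} == AF_INET6 ? 'ipv6' : 'ipv4';
        print "$name $family $ip\n";
    }
}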

You can find more details about resolve and why I wrote it in the article Name/address resolution from the perspective of the OS in this same blog. You’ll find the code, installation instructions, and a description of the differences between resolve and getent in the GitHub repository.

If you come across a container that is so stripped down that it has none of the other tools, but it does have Perl, you can give resolve a try. On the other hand, I don’t expect you to come across such a case very often, so you may well have to fall back to the last resort…

The last resort

If the container you are debugging has no DNS tools and no Perl, your last resort is to install one of these tools yourself, provided you know how to use that container distribution’s package management tools. If you don’t, then you need an article like this one, but for package managers. Shall we write one together? I volunteer for apt!
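To give you a head start on the package management part, these are the packages that, to the best of my knowledge, provide the BIND client tools (dig, host, nslookup) on the most common families of base images; package names may of course vary between releases:

# Debian/Ubuntu based images
apt-get update && apt-get install -y dnsutils

# Alpine based images
apk add --no-cache bind-tools

# Red Hat/Fedora based images (use yum on older releases)
dnf install -y bind-utils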

Name/address resolution from the perspective of the OS

TL;DR: I put together a Perl script that does name/address resolution from the perspective of the OS, instead of relying solely on DNS like the dig or host commands do. If this makes sense and sounds useful, just go check out my resolve-pl repository on github. If it doesn’t fully make sense, then read on.

Continue reading

cf-deploy v4 released

Five years after the release of cf-deploy v3, I have just released cf-deploy v4. This version of cf-deploy fixes a number of shortcomings that had piled up over time and that I wasn’t able to see until recently. It is now more flexible and easier to configure than it ever was. In particular, the documentation is way more comprehensive, covering installation, configuration and usage. The documentation also covers some of the internals, which will allow hardcore users to fine-tune the tool to better suit their needs.

You will find cf-deploy on github, as always. Enjoy!

Continue reading

The things I wish I knew before I started using Golang with JSON

This is not an article about how you can work with JSON in Go: you can easily learn that from the articles and web pages in the bibliography. Rather, this post is about the concepts that you must understand clearly before you set yourself to the task. Don’t sweat, it’s just two concepts, and I’ve tried to explain them here.

In the last few weeks I have been working together with a colleague on some automation with Golang and the Atlassian Crowd API. With several separate user databases (and, as things stand, no hope of unifying them in a smart way) it would be very handy to take advantage of the APIs offered by, say, G Suite to fetch all the email addresses related to a user and use that information to automatically deactivate that user from all systems.

Coming from a Perl 5 background, I was hoping that decoding and encoding JSON in Go would be as simple as it is in Perl. It turns out that it isn’t, which is obvious if you think about it: since Perl 5 is weakly typed, decoding any typed data into an “agnostic” data structure is bound to be simple. Encoding a weakly typed data structure into a typed format may be a bit trickier, but as long as you don’t have too much fancy data (i.e., in this context: strings made only of digits, or non-obvious boolean representations) this will also work well. But with strongly typed Go, and struct field names that have side effects depending on their initial letter’s case, that’s a different story.

As often happens in cases like this, you will not find all the information you need in a single place. This is my attempt to collect it all and hand it to you, so that you won’t have to waste as much time as I did. You will still have to read through some stuff, though.

Continue reading

Perl to go

I have been using Perl for more than 20 years now; I have seen Perl 4 bow out and Perl 5 come in and develop into that fantastic language that has helped me countless times in my professional life. During those years I also considered learning another language, but for a long time I was unable to commit to one.

And then came Go, and the hype around Go, just like years ago there was a lot of hype around Java. But while whatever Java software I came across was a big, heavy and slow memory eater, most of the tools I came across that were written in Go were actually good stuff: OK, still a bit bloated in size, but they actually worked. The opportunity came, and I finally gave Go a shot.

Continue reading

Rudimentary compliance report for CFEngine

In CFEngine Community you don’t have a web GUI with compliance reports. You can get them via EvolveThinking’s Delta Reporting, but if you can’t, for whatever reason, you need to find another way.

A poor man’s compliance report at the bundle level can be extracted via the verbose output. This is how I’ve used it to ensure that a clean-up change in the policies didn’t alter the overall behavior:

cf-agent -Kv 2>&1 | perl -lne 'm{verbose: (/.+): Aggregate compliance .+ = (\d+\.\d%)} && print "$1 ($2)"'

These are the first ten lines of output on my workstation:

bronto@brabham:~$ sudo cf-agent -Kv 2>&1 | perl -lne 'm{verbose: (/.+): Aggregate compliance .+ = (\d+\.\d%)} && print "$1 ($2)"' | head -n 10
/default/banner (100.0%)
/default/inventory_control (100.0%)
/default/inventory_autorun/methods/'proc'/default/cfe_autorun_inventory_proc (100.0%)
/default/inventory_autorun/methods/'fstab'/default/cfe_autorun_inventory_fstab (100.0%)
/default/inventory_autorun/methods/'mtab'/default/cfe_autorun_inventory_mtab (100.0%)
/default/inventory_autorun/methods/'dmidecode'/default/cfe_autorun_inventory_dmidecode (100.0%)
/default/inventory_autorun (100.0%)
/default/inventory_linux (100.0%)
/default/inventory_lsb (100.0%)
/default/services_autorun (100.0%)

Not much, but it’s better than nothing and a starting point anyway. There is much more information in the verbose log that you can extract with something slightly more elaborate than this one-liner. Happy data mining, enjoy!

How we shaved the poodle

In this post I’ll describe how we used CFEngine to apply fixes to Apache and nginx to defuse the infamous POODLE bug. The post is a bit rushed, in the hope that it may still be useful to someone. The policies use bundles and bodies from either the standard library or from our own. The libraries are not shown here, but the names speak for themselves… hopefully 🙂

As you’ll probably know, the “trick” on the server side is to not allow secure (erm…) connections to use anything older than TLSv1. In order to do that, we decided to:

  • deploy a conf.d snippet to set the appropriate protocol versions as a default (a rough sketch of such a snippet follows this list);
  • disable the same directive in existing configuration files, to prevent weaker directives from taking priority;
  • restart the server if/when the configuration gets fixed.
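To make the first point a bit more concrete, here is a rough sketch of the kind of conf.d snippet we are talking about (an illustration of the idea, not the actual files managed by our policies):

# Apache, e.g. dropped in as /etc/apache2/conf.d/ssl-protocols.conf
SSLProtocol all -SSLv2 -SSLv3

# nginx, e.g. dropped in as /etc/nginx/conf.d/ssl-protocols.conf
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;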

Continue reading

New home for my code, new release

During the past years I’ve published a few Perl modules of mine to CPAN. Nothing big, nothing special, just some small, simple modules that I published in the hope that they would be useful to more people than just me. That code lived, or rather slept, on my hard disk and was not shared anywhere other than on CPAN.

At the end of May, a bug was opened against the Net::LDAP::Express module and I decided it was time to bring that code into year 2014. As of a few days ago, you can find the code of all my modules on github. With the code shared on github I was able to share a fix, have it tested by the person who submitted the bug, and confirm the bug was solved. As of one hour ago, the bugfix release 0.12 of Net::LDAP::Express is available on CPAN (on metaCPAN only for now; it will hit all the mirrors in the next few hours).

You are welcome to clone the code from github, fork, branch, open pull requests… Just share the code, make it better, help people, and don’t forget to have fun in the process!

The classification problem: challenges and solutions

Update March 1st, 2015: the latest version of the code for hENC is now on github

It’s been about a month since I came back from FOSDEM and cfgmgmtcamp, a month in which I gradually recovered from the backlog both in the office and at home. It’s been a wonderful experience, especially at cfgmgmtcamp, and I really want to thank all those who helped make it special; more details at the end of this article.

But a promise is a debt (no pun intended with promise theory here), and I promised to write a long blog post with some (or all) of the details from my talks. It’s time to keep that promise. So, without any further ado…

Continue reading