The conventional Domain Name System services to run

What services one runs

Conventionally, in a modern scenario, one needs to run three, possibly four, of the djbwares toolset's Domain Name System services:

* dnscache, providing resolving proxy DNS service;
* tinydns, providing content DNS service;
* axfrdns, providing DNS-over-TCP and zone-transfer service alongside tinydns; and
* walldns, providing synthetic content DNS service.

This differs from Daniel J. Bernstein's original djbdns, which was aimed at the world of 1999 and 2000. Several things that are norms now were not norms then. Indeed, running a private root content DNS server, now a mainstream thing, was pioneered by djbdns; the BIND people hemmed and hawed about it for two decades, despite its having been successfully practiced in the djbdns world for all of that time, before finally seeing the light. And there are now many reasons for running walldns, where that was a very specialized service at the turn of the 21st century.

Exactly how one runs dnscache, tinydns, axfrdns, and walldns as services is beyond the scope of this Guide. The various -conf utilities of the original djbdns are long since obsolete, and service definitions/bundles for specific service management systems (from daemontools through runit to nosh) have become available elsewhere over the years.

How these services fit together

Local proxy DNS service

As mentioned, the instance of dnscache provides resolving proxy DNS service to the DNS client libraries built into applications. Its IP address is listed in the DNSCACHEIP environment variable, as documented in the djbdns-client manual, or in the /etc/resolv.conf file.

It should be reachable by, at most, machines on the same LAN; and the normal configuration is for each machine to run its own instance of the service, listening on the 127.0.0.1 (or ::1) IP address that is only reachable from the machine itself. This IP address is the hardwired out-of-the-box fallback default in most DNS client libraries, absent any configuration at all. The antics of some multinational corporations (e.g. CloudFlare, Quad9, and Google) notwithstanding, proxy DNS service is not something to be provided to the Internet at large.
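For illustration, this is the conventional /etc/resolv.conf for such a setup, pointing the DNS client library at the dnscache instance on the machine itself:

    # /etc/resolv.conf: use the local dnscache instance for all lookups.
    nameserver 127.0.0.1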

dnscache should be configured, in its servers/@ file, with the IP address of the instance of tinydns that serves the private root.
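Assuming a daemontools-style service directory at /service/dnscache (the path is purely illustrative and varies with the service management system in use), and the conventional private root IP address given later in this Guide, that configuration amounts to:

    # Tell dnscache to begin query resolution at the private root tinydns,
    # rather than at the public ICANN root content DNS servers.
    echo 127.53.0.1 > /service/dnscache/root/servers/@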

Content DNS service

One should usually have multiple instances of tinydns, listening on several non-zero IP addresses, that share the same data.cdb database file and that can be brought up and down separately. This will usually be two instances:

* a private instance, serving the private root, listening on an IP address that is not reachable off-LAN or off-machine; and
* a public instance, serving the public data, listening on a WAN-reachable public IP address.
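As a sketch of how the two instances share one database (the service directory names here are illustrative, and the env/ files are the envdir convention of daemontools-family service managers), each instance gets its own IP environment variable but the same ROOT, the directory holding the compiled data.cdb:

    # The private-root instance, on the loopback network:
    echo 127.53.0.1 > /service/tinydns-private/env/IP
    echo /etc/tinydns > /service/tinydns-private/env/ROOT

    # The public instance, on a WAN-reachable address:
    echo 192.0.2.234 > /service/tinydns-public/env/IP
    echo /etc/tinydns > /service/tinydns-public/env/ROOT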

Multiple instances of tinydns and axfrdns can share a single data.cdb file with very little overhead. tinydns and axfrdns read the pre-compiled Constant DataBase file from the disc, through the operating system's disc cache, which is thus shared amongst all axfrdns and tinydns processes. They do not each read a source file and create multiple non-shared compiled copies in memory, which was one of the mis-features of ISC's BIND when Bernstein first invented djbdns, and which djbdns was designed to avoid.

Given that dnscache and the private tinydns can use maximally-sized DNS/UDP packets with EDNS0 to speak to each other, which DNS/TCP does not improve upon, there is very little utility in running an axfrdns side-by-side with tinydns on the private root IP address. For the world, however, clients of the world-facing tinydns might not even be able to signal, with EDNS0, support for DNS/UDP responses larger than 512 bytes; and if one has large resource record sets that will hit this 512-byte limit in DNS/UDP responses, one should have an axfrdns running in parallel with tinydns on the same public WAN-reachable IP address (and port 53).

All axfrdns instances should inherit an AXFR environment variable set to a zero-length string. This stops the world from being able to perform "zone transfers" of anything, which is the best default for a world-facing DNS/TCP service. How one then specifies that individual DNS/TCP clients on specific IP addresses get individualized AXFR environment variable values, permitting them, specifically, to "zone transfer" specific domain name apices, is a matter of how the axfrdns service is managed and configured. With djbwares and the Bernstein tools, one would have a rules database compiled with tcprules and use the -x flag to tcpserver to tell it where the rules database is.
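As a sketch (the secondary server's IP address, the file names, and the domain are illustrative, and the environment setup that axfrdns itself needs, such as its ROOT, is omitted for brevity), the rules source file, its compilation, and the tcpserver invocation might look like this:

    # tcp: one known secondary may transfer example.com;
    # everyone else gets ordinary DNS/TCP service but no zone transfers.
    192.0.2.5:allow,AXFR="example.com"
    :allow,AXFR=""

    # Compile the rules and run axfrdns under tcpserver on the public address.
    tcprules tcp.cdb tcp.cdb.tmp < tcp
    tcpserver -x tcp.cdb -- 192.0.2.234 53 axfrdns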

Split-horizon DNS service from a single shared database

In order to provide a private root content DNS server and public content DNS service from the same data.cdb file, lines in the data source file must be tagged with location codes, as set out in the tinydns-data manual, differentiating on-machine/on-site clients from the rest of the Internet.
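A sketch of such tagging in the data source file, using illustrative names and the documentation address ranges; "in" and "ex" are arbitrary location codes, and the %-lines assign clients to locations by IP address prefix:

    # Clients on the machine itself or in the site's RFC 1918 space are "in";
    # everyone else is "ex".
    %in:127.0.0.1
    %in:10
    %ex:

    # Served only to internal clients:
    =private.example.com:10.0.0.7:::in
    # Served only to external clients:
    =www.example.com:192.0.2.234:::ex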

Although no direct harm can come from it, as the data served are by definition public DNS data, it is not de rigueur to serve root data to the world. Moreover, doing so defeats a useful feature of tinydns, because it makes it respond to every query, whatever the domain name. Normally, and going all of the way back to the Bernstein original, tinydns resists DNS "amplification" attacks by simply not responding to queries about domains that it has not been told that its database is authoritative for. A private root is told that its database is authoritative for every possible domain name.

Thus the private root should be served on-LAN on some IP address that is not routable off-LAN, or — better — on-machine where the instance of dnscache is running. The conventional IP address for the latter is 127.53.0.1 (because tinydns and dnscache both use port 53, the fixed well-known port for Domain Name System services). dnscache starts query resolution with the information published there, and some things end there, with no communication off-site/off-machine at all.

Bogus queries for things that are the results of misconfigurations (e.g. private names being mis-used as top-level domains, or odd search paths being configured) or for things that are the results of user mis-use (e.g. pseudo-IP-address domain names such as 203.0.113.60. where users have supplied IP addresses to programs that expect domain names) never escape off-machine/off-site, never place load on the public root content DNS servers, and never send internal site information to those servers, making it visible to all of the people/companies at the network hops along the path that it has to travel to them.

The data served to the world will be the data tagged with the public location codes of DNS clients speaking to the tinydns that is specifically listening on a public IP address such as 192.0.2.234 or 3fff:0:1977:0905:1979:0305:1:2. The original Bernstein djbdns tutorial contains (somewhat dated, but still relevant) guides on how to receive delegations pointing to that IP address for domain name apices that one owns, and how to set up domain apices, delegations for subdomains, and other data in the data source file that is compiled into the data.cdb file.
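As an illustrative sketch, once a delegation for an apex that one owns points at that public address, the apex itself might look like this in the data source file (example.com, the mail host, and the addresses are placeholders; "ex" is the public location code defined earlier):

    # SOA, NS, and name-server A records for the apex, external clients only.
    .example.com:192.0.2.234:a:::ex
    # An MX record, with its A record, for the apex's mail host.
    @example.com:192.0.2.235:mail.example.com:10:::ex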

Synthetic content DNS service

Post-dating by years the last Bernstein-published version of djbdns, version 1.05 in February 2001, is the concept of what IANA now lists as special-use domain names, the data for which are fixed and can be entirely synthesized without any database file. These domain names either have well-known universal uses, or act as reservation placeholders for things that have the form of domain names but are not actually part of the Domain Name System.

The original Bernstein djbdns actually had a synthetic content DNS service, albeit for one highly specialized use: walldns. This has been given more functions in djbwares, and whilst retaining its original name of a "DNS wall" it is now a more general DNS server for all things that can be synthesized without a database; it is the final "DNS wall" in what is nowadays a three-level system.

Conventionally, walldns should be positioned similarly to the private root tinydns, listening on the IP address 127.53.1.1 on the same machine as dnscache. This is the most minimal-resource-use setup for walldns, as the DNS lookups that it handles become, in effect, mere IPC calls from dnscache over the machine's software loopback network interface.
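In envdir terms, assuming the same illustrative service directory convention as for tinydns above, that positioning is just:

    # Have walldns listen on the conventional loopback address.
    echo 127.53.1.1 > /service/walldns/env/IP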

Prune-and-graft points

dnscache is configured with prune-and-graft points, in its servers/ subdirectory, for domain apices such as home.arpa. and test., which fall into the in-between category of things that will not be public data, but that are also not invariant-valued. These either point to walldns, serving up null data, or to the private root tinydns, serving up (location-tagged) site-local homenet and test data.
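Concretely, and again assuming an illustrative /service/dnscache path, each prune-and-graft point is a file in the servers/ subdirectory, named after the domain apex and containing the IP address to send those queries to:

    # Graft home.arpa onto the private root tinydns (site-local data) ...
    echo 127.53.0.1 > /service/dnscache/root/servers/home.arpa
    # ... and test onto walldns (null data).
    echo 127.53.1.1 > /service/dnscache/root/servers/test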

RFC 6761 describes lookups that should be handled on-site in one of two ways, switchable by the local network administrator(s) according to local needs, and that should never go off-site to public content DNS servers:

* grafted onto walldns, which serves up null data; or
* grafted onto the private root tinydns, which serves up location-tagged site-local data.

All of the RFC 1918 private-use IP addresses should be pruned-and-grafted in this way, and this should be done at the correct highest level (e.g. 10.in-addr.arpa. and 16.172.in-addr.arpa. and 168.192.in-addr.arpa., not subdomains thereof).
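For example, with the same illustrative paths as before, and remembering that 172.16/12 actually spans 16.172.in-addr.arpa. through 31.172.in-addr.arpa.:

    # Keep reverse lookups for RFC 1918 space on-machine, answered by walldns.
    echo 127.53.1.1 > /service/dnscache/root/servers/10.in-addr.arpa
    echo 127.53.1.1 > /service/dnscache/root/servers/16.172.in-addr.arpa
    echo 127.53.1.1 > /service/dnscache/root/servers/168.192.in-addr.arpa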

home.arpa. has similar requirements to test., and the prune-and-graft settings should direct it either to walldns or to tinydns (with internal-location-tagged data). In contrast, none of example.org, example.net, example.com, and example should be pruned and grafted to walldns or tinydns. IANA actually owns these and they exist as delegated domains in the global public DNS. It even runs servers for them.