Conventionally, in a modern scenario, one needs to run four, possibly five, instances of the djbwares toolset's Domain Name System services:
an instance of dnscache to provide resolving proxy DNS service to the DNS client libraries built into applications;
an instance of tinydns to provide a private root content DNS server;
an instance of tinydns to provide public content DNS service;
an instance of axfrdns to provide public content DNS service over DNS/TCP (if necessary); and
an instance of walldns to provide various fixed DNS data.
This differs from Daniel J. Bernstein's original djbdns, which was aimed at the world of 1999 and 2000.
Several things that are norms now were not norms then.
Indeed, running a private root content DNS server, now a mainstream practice, was pioneered by djbdns; the BIND people hemmed and hawed about it for two decades, despite its having been successfully practised in the djbdns world for all of that time, before finally seeing the light.
And there are now many reasons for running walldns, where that was a very specialized service at the turn of the 21st century.
Exactly how one runs dnscache, tinydns, axfrdns, and walldns as services is beyond the scope of this Guide.
The various -conf utilities of the original djbdns are long-since obsolete, and service definitions/bundles for specific service management systems (from daemontools through runit to nosh) have become available elsewhere over the years.
As mentioned, the dnscache instance provides resolving proxy DNS service to the DNS client libraries built into applications.
Its IP address is listed in the DNSCACHEIP environment variable, as documented in the djbdns-client manual, or in the /etc/resolv.conf file.
It should be reachable at best only by machines on the same LAN, and the normal configuration is for each machine to run its own instance of the service, listening on the 127.0.0.1 (or the 0:0:0:0:0:0:0:1) IP address that is only reachable from the machine itself. This IP address is the hardwired out-of-the-box fallback default in most DNS client libraries, without any configuration being done at all. The antics of some multinational corporations (e.g. CloudFlare, Quad9, and Google) notwithstanding, proxy DNS service is not something to be provided to Internet at large.
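As a sketch (the loopback address is the convention just described; writing to /etc/resolv.conf naturally requires the appropriate privileges):

```shell
# djbdns-client consults the DNSCACHEIP environment variable, per its manual.
export DNSCACHEIP=127.0.0.1

# Other DNS client libraries read /etc/resolv.conf instead.
echo 'nameserver 127.0.0.1' > /etc/resolv.conf
```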
dnscache should be configured, in its servers/@ file, with the IP address where the instance of tinydns running the private root is.
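As a hedged sketch, assuming a daemontools-style service directory at /service/dnscache (hypothetical here; layouts vary by service manager) and the private root tinydns listening on 127.53.0.1:

```shell
# dnscache begins every resolution with the servers listed in servers/@;
# replace the public root server list with the private root's IP address.
echo 127.53.0.1 > /service/dnscache/root/servers/@

# dnscache reads servers/@ at startup, so restart it; with daemontools:
svc -t /service/dnscache
```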
One should usually have multiple instances of tinydns, listening on several non-zero IP addresses, that share the same data.cdb database file and that can be brought up and down separately.
This will usually be two instances: one serving the private root, and one serving public content DNS data.
Multiple instances of tinydns and axfrdns can share a single data.cdb file with very little overhead.
tinydns and axfrdns read the pre-compiled Constant DataBase file off the disc, employing the operating system's disc cache, of course, and thus sharing that cache between all axfrdns and tinydns processes.
They do not each read a source file and create multiple non-shared compiled copies in memory — one of the mis-features of ISC's BIND when Bernstein first invented djbdns that djbdns was designed to avoid.
Given that dnscache and the private tinydns can use maximally-sized DNS/UDP packets with EDNS0 to speak to each other, which DNS/TCP does not improve upon, there is very little utility in running an axfrdns side-by-side on the private root IP address.
For the world, however, clients of the world-facing tinydns might not even be able to signal support for DNS/UDP responses larger than 512 bytes with EDNS0; and if one has large resource record sets that will hit this 512-byte limit in DNS/UDP responses, one should have an axfrdns running in parallel with tinydns on the same public WAN-reachable IP address (and port 53).
All axfrdns instances should inherit an AXFR environment variable set to a zero-length string.
This stops the world from being able to perform "zone transfers" of anything, which is the best default for a world-facing DNS/TCP service.
How one later specifies that individual DNS/TCP clients on specific IP addresses get individualized AXFR environment variable values that permit them, specifically, to "zone transfer" specific domain name apices, is a matter of how the axfrdns service is managed and configured.
With djbwares and the Bernstein tools, one would have a rules database compiled with tcprules and use the -x flag to tcpserver to tell it where the rules database is.
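For illustration (the IP addresses, domain, and file names here are hypothetical), the mechanism works along these lines:

```shell
# One secondary server, at 192.0.2.53, may transfer example.com;
# every other client connects with AXFR empty, permitting no transfers.
cat > axfrdns.rules <<'EOF'
192.0.2.53:allow,AXFR="example.com"
:allow,AXFR=""
EOF

# Compile the rules into a CDB that is consulted per-connection.
tcprules axfrdns.cdb axfrdns.tmp < axfrdns.rules

# Point tcpserver at the rules database with -x when running axfrdns.
tcpserver -x axfrdns.cdb 192.0.2.234 53 axfrdns
```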
In order to provide a private root content DNS server and public content DNS service from the same data.cdb file, lines in the data source file must be tagged with location codes, as set out in the tinydns-data manual, differentiating on-machine/on-site clients from the rest of Internet.
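A rough sketch of such tagging (the location names, IP prefixes, and addresses are all illustrative; consult the tinydns-data manual for the authoritative record syntax):

```shell
cat > data <<'EOF'
# "in" marks on-machine and on-LAN clients, by client IP address prefix.
%in:127.0.0.1
%in:192.168
# "ex" marks everyone else.
%ex:
# The private root: name server and SOA records for ".", visible only to "in".
.:127.53.0.1:a:259200::in
# Untagged public data are visible to all clients.
+www.example.com:192.0.2.234:86400
EOF
tinydns-data	# compiles ./data into ./data.cdb
```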
Although no direct harm can come from it, as the data served are by definition public DNS data, it is not de rigueur to serve root data to the world.
Moreover, doing so forfeits a useful feature of tinydns, because serving a root makes it always respond to every query, whatever the domain name.
Normally, and going all of the way back to the Bernstein original, tinydns resists DNS "amplification" attacks by simply not responding to queries about domains that it has not been told that its database is authoritative for.
A private root is told that its database is authoritative for every possible domain name.
Thus the private root should be served on-LAN on some IP address that is not routable off-LAN, or — better — on-machine where the instance of dnscache is running. The conventional IP address for the latter is 127.53.0.1 (because tinydns and dnscache both use port 53, the fixed well-known port for Domain Name System services). dnscache starts query resolution with the information published there, and some things end there, with no communication off-site/off-machine at all.
Bogus queries for things that are the results of misconfigurations (e.g. private names being mis-used as top-level domains or because of odd search paths being configured) or for things that are the results of user mis-use (e.g. pseudo-IP-address domain names such as 203.0.113.60.
where users have supplied IP addresses to programs that expect domain names) never escape off-machine/off-site, never place load on the public root content DNS servers, and never send internal site information to those servers and make it visible to all of the people/companies in the network hops along the path that it has to travel to them.
The data served to the world will be the data tagged with the public location codes of DNS clients speaking to (the) tinydns (that is specifically listening) on a public IP address such as 192.0.2.234 or 3fff:0:1977:0905:1979:0305:1:2.
The original Bernstein djbdns tutorial contains (somewhat dated, but still relevant) guides on how to receive delegations pointing to that IP address for domain name apices that one owns, and how to set up domain apices, delegations for subdomains, and other data in the data file that is compiled into the data.cdb file.
Post-dating by years the last Bernstein-published version of djbdns, version 1.05 in February 2001, is the concept of what IANA now lists as special-use domain names, the data for which are fixed and can be entirely synthesized without any database file. These domain names either have well-known universal uses, or act as reservation placeholders for things that have the form of domain names but are not actually part of the Domain Name System.
The original Bernstein djbdns actually had a synthetic content DNS service, albeit for one highly specialized use: walldns.
This has been given more functions in djbwares, and whilst retaining its original name of a "DNS wall" it is now a more general DNS server for all things that can be synthesized without a database.
It is the final "DNS wall" in what is nowadays a three-level system:
DNS client libraries such as djbdns-client synthesize responses for a few well-known domain names, and queries for the resource records for those domain names never get sent to any DNS server at all.
Certain things are either fundamental, and work even when there are not any DNS servers running (e.g. the domain name localhost.), or are fundamentally excluded from DNS servers (e.g. invalid. and all of its subdomains).
Also observe that a couple of RFCs mandate that DNS client libraries send queries about some special-use domain names off to specialist DNS servers that are not the main dnscache.
The main dnscache provides resolving proxy DNS service and not only synthesizes most of the same answers as the DNS client libraries do, for the benefit of old DNS client libraries that do not synthesize answers internally (including the one in the original Bernstein djbdns), but synthesizes various additional things that RFCs mandate be synthesized at this level.
These range from things that should not have reached dnscache and definitely should not go any further out to Internet at large (e.g. onion. names) to things whose data are fixed and well-known and that should not be loaded onto public content servers for privacy/efficiency reasons.
walldns acts as a final catch-all, not only synthesizing the same stuff as the other two layers, for the benefit of old djbwares or non-djbwares DNS clients and proxy DNS servers (the exact mechanics of pointing them to walldns being beyond the scope of this Guide), but providing things that would be invariant from public DNS content servers (e.g. the null test.) or that provide optional substitutes for the public DNS (e.g. its own original "reverse lookup opaque wall" and an optional null example., albeit that IANA provides public data for that latter).
Conventionally, walldns should be positioned similarly to the private root tinydns, listening on the IP address 127.53.1.1 on the same machine as dnscache.
This is the most minimal-resource-use setup for walldns, as DNS lookups that it handles become in effect a mere IPC call over the software loopback network interface on the machine, from dnscache.
dnscache is configured with prune-and-graft points, in its servers/ subdirectory, for domain apices such as home.arpa. and test., which fall into the in-between category of things that will not be public data, but that are also not invariant-valued.
These either point to walldns, serving up null data, or to the private root tinydns, serving up (location tagged) site-local homenet and test data.
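Concretely, a prune-and-graft point is just a file in the servers/ subdirectory, named after the domain apex and containing a server IP address. A sketch, assuming the hypothetical /service/dnscache layout and the conventional walldns and private root addresses:

```shell
cd /service/dnscache/root/servers
# Null homenet and test data: graft these apices onto walldns.
echo 127.53.1.1 > home.arpa
echo 127.53.1.1 > test
# Alternatively, with site-local homenet/test data, graft onto the
# private root tinydns instead:
#echo 127.53.0.1 > home.arpa
```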
RFC 6761 describes lookups that should be handled on-site in one of two ways, switchable by the local network administrator(s) according to local needs, and should never go off-site to public content DNS servers:
test. has two modes: either there are no local test domains, in which case the prune-and-graft settings should direct dnscache to talk to walldns (which serves a null version of test. and its subdomains), or there are local test domains, in which case the prune-and-graft settings should direct dnscache to talk to tinydns (which will publish the test data, location-tagged for that dnscache only).
The reverse-lookup domain names used to map RFC 1918 private-use IP addresses (e.g. 1.0.168.192.in-addr.arpa.) have two modes: either there are no local names associated with these IP addresses, in which case the prune-and-graft settings should direct dnscache to talk to walldns (which serves its original "DNS wall" for this purpose), or there are local names, in which case the prune-and-graft settings should direct dnscache to talk to tinydns (which will publish the name data, location-tagged for that dnscache only).
All of the RFC 1918 private-use IP address ranges should be pruned-and-grafted in this way, and should be done so at the correct highest level (e.g. 10.in-addr.arpa., the sixteen apices 16.172.in-addr.arpa. through 31.172.in-addr.arpa., and 168.192.in-addr.arpa., not subdomains thereof).
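Since 172.16.0.0/12 spans the /16 networks 172.16 through 172.31, those reverse apices can be enumerated in a loop; a sketch under the same hypothetical /service/dnscache layout, with walldns at 127.53.1.1:

```shell
cd /service/dnscache/root/servers
echo 127.53.1.1 > 10.in-addr.arpa
echo 127.53.1.1 > 168.192.in-addr.arpa
# 172.16/12 covers sixteen /16 networks, 172.16 to 172.31 inclusive.
for n in 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
do
	echo 127.53.1.1 > "$n.172.in-addr.arpa"
done
```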
home.arpa. has similar requirements to test., and the prune-and-graft settings should direct it either to walldns or to tinydns (with internal-location-tagged data).
In contrast, all of example.org, example.net, example.com, and example should not be pruned and grafted to walldns and tinydns.
IANA actually owns these and they exist as delegated domains in the global public DNS.
It even runs servers for them.