[dns-operations] Load balancing DNS queries across many machines

Michael Sinatra michael at rancid.berkeley.edu
Fri Jul 17 00:48:19 UTC 2009

On 7/16/09 4:37 PM, Matthew Dempsky wrote:
> What do large authoritative zones currently do about load balancing
> DNS queries across many machines?  E.g., .com and .net have 15 IP
> addresses (13 IPv4 + 2 IPv6) listed to handle queries, but I think
> Verisign has many more machines than this to handle the load.
> I know anycast is often used to help divide the load across multiple
> sites, but what do zones do about splitting load across multiple
> machines at a single site?  Do they anycast individual machines and
> just rely on multipath routing to load balance, or put all of the
> machines on the same network and use VRRP or CARP, or do any sites use
> higher level protocols for load balancing?

anycast + equal-cost multipath routing via an IGP such as OSPF or IS-IS. 
UC Berkeley has been doing this since 1999.  I am sure there are others 
who have been doing it longer.
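To illustrate the idea: with anycast + ECMP, every server at a site
answers on the same service address, and the router hashes each flow's
5-tuple to pick one of the equal-cost next hops, so a given client's
queries stick to one server while the aggregate load spreads out.  A
minimal sketch of that hashing decision (hypothetical addresses and
server names; real routers do this in hardware, and hash inputs vary
by vendor):

```python
import hashlib

# Three servers announcing the same anycast service address,
# installed as equal-cost next hops by the IGP (OSPF, IS-IS, etc.).
SERVERS = ["ns-a", "ns-b", "ns-c"]

def pick_next_hop(src_ip, src_port, dst_ip="192.0.2.53", dst_port=53):
    # ECMP typically hashes the flow 5-tuple, so one client flow
    # always lands on the same server unless the topology changes.
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:udp".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return SERVERS[digest % len(SERVERS)]
```

If a server dies and withdraws its route, the IGP reconverges and the
hash simply maps flows onto the survivors -- no per-query state to
hand off, which is the point of the rest of this message.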

> I ask because a current deployment path for DNSCurve for authoritative
> zones is to have admins set up a DNSCurve-to-DNS forwarder, which
> transparently handles DNSCurve for the existing servers (similar to
> HTTPS-to-HTTP forwarders).  However, two downsides to this approach
> are 1) the forwarder needs to maintain some state to be able to
> encrypt and forward response packets and 2) the DNS server doesn't
> know the original source address for logging and/or client
> differentiation.
> One solution to this is for the forwarder to forward the DNS packet
> along with the source address (and port) and some extra state bytes.
> The backend server would then respond with a DNS packet and echo back
> the extra information given, so the forwarder can know what to do with
> the response.
> I suspect if any existing large sites do application-level load
> balancing of DNS queries, they've probably come up with a similar
> solution.  Also, because this new backend protocol would require
> authoritative server support, it seems worthwhile to try to build on
> existing practice rather than reinvent the wheel if possible.
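The encapsulation the quote proposes -- forward the DNS packet along
with the original source address, port, and some opaque state bytes,
and have the backend echo that header back -- could be sketched like
this.  The wire format here is entirely hypothetical (4-byte IPv4
source, 2-byte port, length-prefixed state, then the raw DNS packet);
DNSCurve does not specify one:

```python
import socket
import struct

def wrap(src_ip, src_port, state, dns_packet):
    # Prepend original client address/port and opaque forwarder state
    # so the backend can echo them back with its response.
    return (socket.inet_aton(src_ip)
            + struct.pack("!HH", src_port, len(state))
            + state
            + dns_packet)

def unwrap(blob):
    # Inverse of wrap(): recover client info, state, and DNS payload.
    src_ip = socket.inet_ntoa(blob[:4])
    src_port, state_len = struct.unpack("!HH", blob[4:8])
    state = blob[8:8 + state_len]
    dns_packet = blob[8 + state_len:]
    return src_ip, src_port, state, dns_packet
```

The echoed header lets the forwarder stay stateless per query, and the
backend sees the true client address for logging -- the two problems
the quote raises.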

I think about trade-offs in high-availability design, and I am sure 
someone has already come up with this adage (and hopefully named it 
after themselves, like Diffendorfer's Law or something), but my 
experience is that the more state you put into a protocol, the harder 
it is to use redundancy as an easy way to improve availability.  Put 
differently, redundancy produces less reliability than it otherwise 
would when state maintenance is involved.

The nice thing about DNS is that it is amenable to anycast techniques 
because there is very little state in the protocol: not just because it 
typically uses UDP, but because there is very little application-level 
state that needs to be tracked.  DHCP needs to track leases, 
single-sign-on services need to track authnz tickets, etc.  DNS doesn't 
need to do any of that, which is why it and NTP are so easy to make 
redundant via simple methods like CARP or anycast.

The only real state a DNS client needs to maintain is the source and 
destination ports and IP addresses of the query, plus the query ID, for 
each outstanding query.  Having said that, if a DNSCurve forwarder sends 
a query to a non-DNSCurve DNS server and then dies, the non-DNSCurve 
server will presumably send the response to another DNSCurve forwarder, 
based on whatever redundancy protocol is in use (CARP, anycast, etc.). 
The new forwarder should promptly drop the response, because it has no 
matching query ID or source/destination ports, so the client will 
eventually have to time out anyway.  It would seem, then, that even if 
you encode the original client data into the packet, there will still 
be time-outs when forwarders fail, though the overall blip should be 
fairly short once failover completes.  Therefore, I am not really sure 
what you gain by adding the state bits you suggest.
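The minimal per-query state described above can be sketched as a
lookup table keyed on the query's identifying tuple; a response that
matches no entry is dropped, which is exactly what a freshly
failed-over forwarder (whose table is empty) would do.  Names and the
key shape here are illustrative, not from any particular
implementation:

```python
# Outstanding queries, keyed by (server address, server port,
# local port, DNS query ID) -> whatever client info we need to
# forward the answer back.
outstanding = {}

def send_query(server_ip, server_port, local_port, query_id, client):
    # Record the tuple we expect the response to match.
    outstanding[(server_ip, server_port, local_port, query_id)] = client

def handle_response(server_ip, server_port, local_port, query_id):
    # Return the matching client, or None if we never saw the query
    # (e.g. we just took over after another forwarder died), in which
    # case the response is dropped and the client times out.
    return outstanding.pop(
        (server_ip, server_port, local_port, query_id), None)
```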

However, you also seem to hint that the "new backend protocol" isn't 
DNS, so there may be some mechanism in the protocol to deal with this. 
If that's the case, then I am not sure how your method "transparently 
handles DNSCurve for the existing servers," since I would assume the 
existing servers are plain DNS servers.  Is the new backend protocol 
documented on the DNSCurve website?
