[dns-operations] resolvers considered harmful

Andrew Sullivan ajs at anvilwalrusden.com
Wed Oct 22 21:04:01 UTC 2014

On Wed, Oct 22, 2014 at 02:30:08PM -0400, Mark Allman wrote:
> I absolutely agree.  Please read section 5 which addresses exactly this
> question.  We use .com as an example of a popular authoritative domain
> in this work.

The problem with com is that it's not a good example.  It's a
delegation-centric domain, and therefore one where longer TTLs might
in fact be a viable option.

The cases I'm thinking about are the overwhelmingly-successful web
properties.  You used one (Google), but think also Facebook, Twitter,
Netflix, and so on.

For these people, there's an upside and a downside.  On the upside,
they get to bet more often that the source IP of the query is in fact
network-topologically or geographically (or both) closer to the
eyeballs that will view the content.  This has been difficult, and has
led to controversial ideas like edns0-client-subnet.  On the downside,
these kinds of operators use very short TTLs for their infrastructure
in order to improve their flexibility.  Owing to the concentration in
the recursive resolver population (at least in the US), even with very
short TTLs there appears to be a significant reduction in the volume of
queries at the authoritative server, because the population behind a
given recursive service is large.  It's true that the increase in your
numbers doesn't seem to be as bad as the theoretical maximum, but 2.5
or 3 times several million is still a large number.  A billion here, a
billion there …
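To make the caching effect concrete, here is a back-of-envelope sketch
(all numbers hypothetical, not drawn from the paper): with a warm
shared cache, each recursive resolver sends the authoritative at most
one query per TTL window for a given name, so the authoritative's load
is bounded by (resolvers / TTL) rather than by raw client demand.

```python
def auth_queries_per_sec(clients, lookups_per_client_per_sec, ttl, resolvers):
    """Authoritative queries/sec for one popular name, assuming each of
    `resolvers` shared caches stays warm (demand spread evenly), so a
    cache emits at most one upstream query per TTL window."""
    client_rate = clients * lookups_per_client_per_sec
    cached_rate = resolvers / ttl
    return min(client_rate, cached_rate)

# Concentrated resolver population, 30-second TTL (hypothetical numbers):
concentrated = auth_queries_per_sec(
    clients=50_000_000, lookups_per_client_per_sec=0.01,
    ttl=30, resolvers=10_000)

# Same client demand, but every client resolves for itself:
direct = auth_queries_per_sec(
    clients=50_000_000, lookups_per_client_per_sec=0.01,
    ttl=30, resolvers=50_000_000)

print(concentrated)  # ~333 qps reaches the authoritative
print(direct)        # the full 500,000 qps does
```

Under these assumptions, moving resolution from ten thousand shared
caches to the clients themselves multiplies the authoritative's load
by roughly three orders of magnitude, even though the TTL is short
either way, which is the scale of the cost being pointed at above.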

Again, I don't think this is a knock-down argument against the
approach, but I do think the cost in certain cases is rather larger
than the paper seems to acknowledge.


Andrew Sullivan
ajs at anvilwalrusden.com
