[dns-operations] resolvers considered harmful

Matthew Pounsett matt at conundrum.com
Thu Oct 23 22:42:49 UTC 2014



> On Oct 23, 2014, at 18:23, Matthew Pounsett <matt at conundrum.com> wrote:
> 
> The cache hit rate may level off, but the query rate to the caching recursive doesn’t.  

Sorry, that should have said the cache *miss* rate. It's an asymptote, topping out at one miss every TTL seconds for a given name; how quickly you approach that ceiling depends on the population behind the cache.
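
To put that in concrete (if purely illustrative) terms, here's a quick sketch of a single cached name, assuming Poisson query arrivals and an ideal cache; the per-client query rate is made up:

    import random

    def misses_per_ttl(clients, qps_per_client=0.0001, ttl=300, horizon=86400):
        """One cached name: count cache misses per TTL interval."""
        rate = clients * qps_per_client       # aggregate client queries/second
        t, expiry, misses = 0.0, -1.0, 0
        while t < horizon:
            t += random.expovariate(rate)     # next client query arrives
            if t > expiry:                    # cached answer has expired: miss
                misses += 1
                expiry = t + ttl              # answer is cached for one TTL
        return misses / (horizon / ttl)       # average misses per TTL window

    for n in (10, 100, 1000, 10000, 100000):
        print(n, round(misses_per_ttl(n), 3))

Run that and the result climbs toward 1.0 as the client population grows: at most one query per TTL per name reaches the authoritative servers, no matter how many client queries the cache absorbs.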

> The key variable is at what point (how many users) cache misses start to occur every $TTL seconds.    It’s at that point that a shared caching server becomes critical.  
> 
> Say for example that occurs at 1,000 users.  In that case, at >1000 users there is a linear relationship between queries sent by clients and queries blocked from hitting the authoritative servers.  If that’s the number, then in an infrastructure with a million users, that caching server is saving the authoritative servers from an orders-of-magnitude increase in queries for a particular name, not the 2x or 3x you claim.
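
(To put rough numbers on "orders of magnitude", with the same illustrative assumptions as the sketch above:

    clients = 1000000
    qps_per_client = 0.0001       # illustrative guess, as above
    ttl = 300
    client_queries_per_ttl = clients * qps_per_client * ttl   # 30000.0
    auth_queries_per_ttl = 1.0                                 # ~one miss per TTL
    print(client_queries_per_ttl / auth_queries_per_ttl)       # roughly 30000x

The exact factor depends entirely on the numbers you plug in, but the point is that it's a ratio that grows with the population behind the cache, not a 2x or 3x constant.)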
> 
> I don’t see where you’ve done the work that allows you to extrapolate your numbers to the Internet at large.  Your tiny sample just isn’t representative of the caching recursive servers that handle the majority of the Internet’s queries.
> 
> As a TLD operator, and the operator of some very busy second-level authoritative servers, I don’t care about the offices or neighbourhoods of 100 people behind a single caching resolver suddenly deciding they should all run their own resolvers and bypass the local cache.  That’s a tiny drop in the bucket compared to the hundreds of millions of users behind a small handful (low tens of thousands) of caching resolvers.  If we have that situation, you can expect your $15/yr domain registration to be more like $50 or $60, and your $15/mo hosting plan that comes with DNS services to start charging you similarly.
> 
>>>> - There is also a philosophical-yet-practical argument here.  That is,
>>>>  if I want to bypass all the shared resolver junk between my
>>>>  laptop and the auth servers I can do that now.  And, it seems to
>>>>  me that even given all the arguments against bypassing a shared
>>>>  resolver that should be viewed as at least a rational choice.
>>>>  So, in this case the auth zones just have to cope with what shows
>>>>  up.  So, do we believe that it is incumbent upon (say) AT&T to
>>>>  provide shared resolvers to shield (say) Google from a portion of
>>>>  the DNS load?
>>> 
>>> It doesn’t look to me like your paper has done anything to capture
>>> what it looks like behind AT&T’s resolvers, so I’m not sure how you
>>> can come to that sort of conclusion.
>> 
>> Correct.  This is a thought experiment with exemplars that I gave names
>> to.
> 
> Your exemplars don’t match your experiment.  You appear to be trying to make an economic argument for a major change to the infrastructure based on an unrepresentative sample.
> 
> You ignored my comments on how TTLs in delegation-centric zones like TLDs actually work.  It seems to me there are some bad assumptions about the behaviour of the DNS underlying this work that make it hard to use it to suggest any sort of change to current operations.
> 
> 



