[dns-operations] resolvers considered harmful

Mark Allman mallman at icir.org
Thu Oct 23 19:27:35 UTC 2014


> cache hit rate is about 80%-90% for those caching resolvers you think
> can be removed.  Note that this cache hit rate is heavily skewed
> because of the Facebook "one time" uncacheable hostnames they were
> using at the time.  If you also include the fact that these caches
> were feeding other caches, you will see the enormous number of
> queries you are suggesting to unleash on authoritative nameservers on
> the internet.

Right... Our cache hit rate is somewhere in the two-thirds area.  At
least the last time I looked.  (We have some notions from a couple years
ago in http://www.icir.org/mallman/pubs/CAR13/ .)  But, one important
bit here is that while this does increase the load, the load is
distributed.  So, it isn't like we're landing all the load on one given
point in the network.  And, it is distributed proportionally to the
popularity of the underlying services (which intuitively seems about
right to me).
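
(A back-of-envelope sketch of the scale in question, ignoring whatever
caching would remain on end hosts, which would pull the real number
down: if a shared cache absorbs fraction h of queries, only (1-h) of
them reach the authoritatives today, so removing the cache multiplies
authoritative load by at most 1/(1-h):

    # Rough upper bound on the authoritative-load multiplier if a
    # shared cache with hit rate h disappeared and every query went
    # straight to the authoritatives.  End-host caching (ignored
    # here) would lower the real multiplier.
    def load_multiplier(h):
        return 1.0 / (1.0 - h)

    for h in (2 / 3.0, 0.80, 0.90):  # our rate, plus the quoted range
        print("hit rate %2.0f%% -> %4.1fx load"
              % (100 * h, load_multiplier(h)))

That works out to 3x at our two-thirds rate and 5x-10x at the quoted
80%-90%, and, per the above, that load lands spread across
authoritatives in proportion to their popularity rather than at one
choke point.)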

> >  - And, I'd spin this around on you ... You clearly care about your 3
> >    poor nameservers.  That is natural and rational.  But, why do you
> >    think it is someone else's job to run a cache to shield you from
> >    load?
> 
> If you really believe the model of the internet should be that the
> weakest device should be able to deal with the largest volume load the
> world can throw at it, there is not much point discussing this
> further.  I'm just happy that people like Van Jacobson designed the
> internet.
> 
> > Why should we at ICSI run a shared resolver for your benefit?
> 
> Because thousands of ISPs run caches for your servers' benefit.
> Using your reasoning, we should drop all the exponential backoff
> in our TCP/IP protocols. You'll just have to deal with the load,
> and if you get blasted off the net it's clearly your fault for being
> underpowered.

I think this is a really bad analogy.  I do happen to know something
about congestion control.  Maybe even two things!

Congestion control is a shared set of algorithms / strategies for
dealing with the case when some shared piece of infrastructure is
over-committed.  For instance, a link in the middle of the net.  It
isn't any one person's/host's fault that it is over-capacity.  So, we
agree on a set of techniques that we can all use to reasonably back off
and share the link so nobody starves.

We do not point at the owner of the link and say "hey, we're trying to
use more capacity than you have so buy some more capacity!".  I.e., we
don't impose on someone else to add resources on our behalf.  Rather, we
decide to all play nice and so we all get something done (even if slower
than we'd really like).  Congestion control is about coping with a
less-than-ideal shared reality.
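
(To make the backoff jab above concrete, here is a minimal sketch of
the sort of binary exponential backoff TCP's retransmission timer
does, roughly in the spirit of RFC 6298; the constants and the jitter
are illustrative, not TCP's actual values:

    import random
    import time

    # After each failed try, double the wait (up to a cap) so that
    # many senders sharing one congested link thin out their load
    # instead of hammering it in lockstep.
    def send_with_backoff(send, tries=6, base=1.0, cap=64.0):
        rto = base
        for _ in range(tries):
            if send():               # e.g., we saw an ACK come back
                return True
            time.sleep(min(rto, cap) * random.uniform(0.5, 1.5))
            rto *= 2.0               # binary exponential backoff
        return False                 # give up; the path is saturated

Everybody deploying something like this is what lets an over-committed
shared link degrade gracefully instead of collapsing.)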

But, this is not at all like your nameserver.  Your nameserver is not
shared infrastructure that zillions of disparate people all happen to
(over) use.  It is infrastructure that works on your behalf to serve
names that you want served.  If it can't handle the load then there is a
clear culprit: you.  I.e., your popularity has outgrown your resources.
Why should that be my problem?  We don't have Google telling us to all
fire up an institutional HTTP cache on our networks because it has run
out of capacity and it's our problem to fix.

Finally, it isn't that I believe the "weakest device should be able to
deal with the largest volume load the world can throw at it".  Rather, I
believe if someone is providing a service, then they should be
responsible for provisioning for the load that service incurs (or
dealing with the suboptimal performance).  I wouldn't have thought that
a controversial notion.

allman


