[dns-operations] Geoff Huston on DNS-over-TCP-only study.

George Michaelson ggm at apnic.net
Wed Aug 21 06:27:12 UTC 2013


Thanks for the clarification. We did in fact run into initial configuration
issues with the default TCP listen backlog of 3, but once we'd raised it to
2000 we saw only one brief window of RST congestion, as detected by a simple
TCP filter. The test covered a domain space that serves around 250,000
experiments per day, each generating 4 DNS queries, none of which could be
cached. That comes to about 1,000,000 q/day, which is obviously a low
sustained query rate of around 10 q/sec. I suspect that with better kernel
knowledge we could have avoided any server-forced RSTs and served a higher
load. Certainly, a TCP-based DNS service faces a lot of questions about how
it's designed and scaled.
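
For anyone who doesn't live in this part of the stack: the 3 and the 2000
above are the TCP listen backlog, the queue of completed connections the
kernel will hold until the application accepts them. A minimal sketch of
where that knob sits in a TCP DNS responder follows; the port and values are
illustrative and this is not our measurement code. Note also that on Linux
the effective backlog is capped by net.core.somaxconn, which is part of the
"better kernel knowledge" I was alluding to.

# Minimal sketch (not our measurement code) of where the listen
# backlog enters a TCP DNS responder. Port and values are illustrative.
import socket
import struct

BACKLOG = 2000          # raised from the small default discussed above
PORT = 5353             # illustrative; a real service listens on 53

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", PORT))
srv.listen(BACKLOG)     # the parameter the 3-versus-2000 discussion is about

while True:
    conn, _peer = srv.accept()
    with conn:
        # RFC 1035 4.2.2: DNS over TCP prefixes each message with a
        # two-byte length field.
        hdr = conn.recv(2)
        if len(hdr) < 2:
            continue
        (qlen,) = struct.unpack("!H", hdr)
        query = conn.recv(qlen)   # a real server loops until qlen bytes arrive
        # ... hand the query to resolver logic, write back a
        # length-prefixed response, then close ...

With a backlog of 3, even a modest burst of simultaneous connects overflows
that queue and the kernel starts dropping or resetting new connections,
which would show up as exactly the kind of RST behaviour described above.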

I believe our goal was to find out how many clients, measured by resolver,
failed to complete a TCP-forced DNS query. Other people will be looking at
the server side; that wasn't what we were primarily exploring. People who
want to consider TCP-based DNS need both sides of the question space filled
in, so choosing to analyse client failure isn't the whole picture, but it is
part of the picture.
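
To make "complete a TCP-forced DNS query" concrete: after a truncated UDP
answer (TC=1), the resolver is expected to come back and repeat the same
question over TCP, with the two-byte length prefix RFC 1035 requires. In
rough sketch form (the server address and query name are placeholders, and
this is an illustration rather than our harness):

# Rough sketch of the client side of a TCP-forced lookup: the resolver
# repeats its question over TCP, prefixed with a two-byte length
# (RFC 1035 4.2.2). Server address and query name below are placeholders.
import socket
import struct

def build_query(qname, qtype=1, qid=0x1234):
    # Minimal DNS query: RD set, one question, class IN.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00" + struct.pack("!HH", qtype, 1)
    return header + question

def tcp_query(server, qname, timeout=5.0):
    # Send one query over TCP; return the raw response or None on failure.
    msg = build_query(qname)
    try:
        with socket.create_connection((server, 53), timeout=timeout) as s:
            s.sendall(struct.pack("!H", len(msg)) + msg)
            hdr = s.recv(2)
            if len(hdr) < 2:
                return None
            (rlen,) = struct.unpack("!H", hdr)
            resp = b""
            while len(resp) < rlen:
                chunk = s.recv(rlen - len(resp))
                if not chunk:
                    return None
                resp += chunk
            return resp
    except OSError:
        return None

# e.g. tcp_query("192.0.2.1", "example.com")  -- both are placeholders

Roughly speaking, a resolver that never gets as far as reading that
length-prefixed response is the kind of failure we were counting.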

Your 'canard' reply makes much better contextual sense now.

cheers

-george


On Wed, Aug 21, 2013 at 4:16 PM, Paul Vixie <paul at redbarn.org> wrote:

>
>
> George Michaelson wrote:
> > ...
> > So, while I understand we're not DNS experts and we may well have made
> > some mistakes, I think a one-word 'canard' isn't helping.
>
> there is no way to either get to or live in a world where dns usually
> requires tcp. there would be way too much state. most people are capable
> of writing the one-line perl script that will put a dns responder into
> tcp exhaustion and keep it there at very little cost to the attacker,
> but those same people can read section 5 of RFC 5966 and not see the
> threat. granted that if all name servers miraculously implemented the
> recommendation "servers MAY impose limits on the number of concurrent
> TCP connections being handled for any particular client" then the perl
> script would have to be longer than one line, there's just no world there.
>
> had the original dns tcp protocol been structured so that the server
> closes and the clients won't syslog anybody or otherwise freak out when
> the server closes, we could imagine a high transaction rate on
> short-lived connections. tcp's 3xRTT and 7-packet minimum would seem
> harsh but at least we'd have some hope of goodput during deliberate
> congestion attacks.
>
> an experiment that looks at this from the client's point of view tells
> us nothing about the server's availability during congestion. i could
> wish that measurements of tcp dns performance would include a caveat
> such as "this has not been tested at internet scale" or even
> "internet-wide dependence on dns tcp may be vulnerable to trivial denial
> of service attacks".
>
> almost everybody who looks at this says "just use TCP". if the solution
> to the bcp38 problem in DNS were that easy, we would not have written
> <
> https://www.usenix.org/legacy/publications/login/2009-12/openpdfs/metzger.pdf
> >
> and william would not have written RFC 6013.
>
> it's also worth looking again at
> <http://tools.ietf.org/html/draft-eastlake-dnsext-cookies-02>.
>
> vixie
>
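
To illustrate the state problem Paul is pointing at above, here is a sketch
(in Python, not his perl one-liner) of the kind of trivial client he
describes, to be aimed only at a test server you control; the address is a
placeholder. A client that merely opens connections and holds them idle
pins server-side state per socket, at almost no cost to itself.

import socket

TARGET = ("192.0.2.1", 53)   # placeholder: a test server you control
held = []
for _ in range(1000):        # each successful connect pins server-side state
    try:
        s = socket.create_connection(TARGET, timeout=2.0)
        held.append(s)       # never send a query, never close
    except OSError:
        break                # the server or kernel has started refusing us
print("holding", len(held), "idle connections")

That is exactly the behaviour the RFC 5966 section 5 per-client connection
limit he quotes is meant to blunt.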