[dns-operations] RIPE-52 preso on DNS issues, author comments on Slashdot.
Roy Arends
roy at dnss.ec
Wed Apr 26 22:24:43 UTC 2006
On Apr 26, 2006, at 8:16 PM, Roland Dobbins wrote:
>
> http://www.ripe.net/ripe/meetings/ripe-52/presentations/uploads/
> Wednesday/sirer-
> perils_of_transitive_trust_in_the_domain_name_system.pps
>
> http://www.cs.cornell.edu/people/egs/beehive/dnssurvey.html
I was there (at RIPE), saw the presentation, and it was more
marketing than science.
First off, the survey tracked dependencies on servers whose names were
neither in the delegated zone nor in the server's own zone (out of
bailiwick). Some dependency graphs showed more than 600 nodes. The
survey sorted names by node count and went on to say 'the more
dependencies, the more vulnerable a name', blatantly handwaving past
the fact that a wide graph might be good (more nameservers per zone
cut) while a long graph might be bad (long resolution). The
conclusions in the end were twofold. 1: the old wisdom of having more
server dependencies is bad. And 2: a new form of DNS is needed.
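To make the transitive-trust idea concrete, here is a minimal sketch
(not the authors' survey code) of how such a dependency graph could be
walked with dnspython; the parent-zone guess of stripping the leftmost
label is a crude assumption, purely for illustration:

    import dns.resolver

    def trust_graph(zone, graph=None):
        # Recursively collect the nameservers a zone depends on.
        # A nameserver whose own name lies outside the zone it serves
        # (out of bailiwick) drags that server's zone into the graph too.
        if graph is None:
            graph = {}
        if zone in graph:
            return graph
        try:
            answer = dns.resolver.resolve(zone, "NS")
        except Exception:
            graph[zone] = set()
            return graph
        servers = {str(r.target).rstrip(".") for r in answer}
        graph[zone] = servers
        for ns in servers:
            if not ns.endswith("." + zone) and ns != zone:
                # crude guess at the server's parent zone, for illustration only
                parent = ns.split(".", 1)[1] if "." in ns else ns
                trust_graph(parent, graph)
        return graph

    if __name__ == "__main__":
        for z, nss in trust_graph("ripe.net").items():
            print(z, "->", sorted(nss))

Whether a big graph is bad depends on its shape: width adds redundancy,
depth adds resolution steps, which is exactly the distinction the
presentation glossed over.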
Meanwhile, caching was never mentioned, and neither was the fact that
fewer server dependencies means a single point of failure. The big
message was that somebody who abuses a vulnerability in one of those
600 nodes would 0wnz (sic) the name, while in my view an attacker
would own some part of the resolution graph, depending on where the
vulnerable node hangs in the tree, and not automagically the entire
name.
To add some sugar to this, the presentation went on to show that 17
percent of the tested servers had 'known' vulnerabilities, which then
translated into 45 percent of the names being trivially hijackable,
though no precise methodology was given.
The authors of the paper (that resulted in this presentation) came to
the conclusion that server dependencies on out-of-bailiwick servers in
DNS are a bad thing, and hence a new kind of DNS was needed.
It turned out that this 'new DNS' was already defined: some form of
DNS using distributed hash tables, namely Beehive/CoDoNS.
This was no less than a marketing talk.
Meanwhile, the CoDoNS server software itself has issues: it responds
to responses. A few packets would effectively bring the whole CoDoNS
infrastructure down.
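The guard that appears to be missing is a single header check. A
minimal sketch (not CoDoNS code) of the QR-bit test a responder needs
so it never answers a response; without it, two such servers can be
locked into a packet ping-pong by one spoofed packet:

    import struct

    def is_query(packet: bytes) -> bool:
        # DNS header is 12 bytes; bytes 2-3 carry the flags word.
        if len(packet) < 12:
            return False
        flags, = struct.unpack("!H", packet[2:4])
        # The top bit (QR) is 0 for queries, 1 for responses.
        return (flags & 0x8000) == 0

    # In a server's receive loop (sketch):
    #   data, addr = sock.recvfrom(512)
    #   if not is_query(data):
    #       continue   # silently drop responses instead of answering them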
Sure, these bugs can be fixed. But if that argument is allowed for
CoDoNS, it should be allowed for generic DNS implementations as well.
The authors made the mistake of confusing protocol with
implementation. Building a new protocol based on the fact that
vulnerabilities exist in implementations is just wrong. I'd argue that
these vulnerabilities are found because of the popularity of the
protocol.
As for resolution graphs, I think this is an interesting study/survey
to do, as long as it's not biased.
A marketing study. Not science. Nothing original and nothing new.
Scare tactics at most.
Roy
PS: the CoDoNS folks have been informed about the vulnerability in
their software.