[dns-operations] OpenDNS adopts DNSCurve
vixie at isc.org
Wed Feb 24 22:55:11 UTC 2010
> Date: Wed, 24 Feb 2010 14:39:44 -0800
> From: Matthew Dempsky <matthew at dempsky.org>
> On Wed, Feb 24, 2010 at 2:25 PM, Paul Vixie <vixie at isc.org> wrote:
> > "resource strapped" is not what i said.
> You said embedded.
here's what i said:
> From: Paul Vixie <vixie at isc.org>
> Date: Wed, 24 Feb 2010 21:50:35 +0000
> > The root zone file is available with PGP signatures, so if a TLD were
> > to support DNSCurve, recursive servers could extract the appropriate
> > NS records from the root zone file to set up as a trust anchor.
> > Also, some TLD zone files (in particular, .com and .net) are also
> > available for download with PGP signatures, and a trusted party with
> > access to them could republish just the zones with NS records
> > indicating DNSCurve support.
> this kind of heavyweight metadata model may fit the needs of opendns
> and other large scale outsourced recursive dns providers, but it won't
> fit into the small scale widely-distributed in-house / embedded model
> that DNS (and DNSSEC) uses today. is that intentional? (i ask, since
> you are both an opendns employee and a dnscurve developer.)
"embedded" follows a forward slash (/), which in english textual discourse
means "or". and the thing i'm replying to refers to on-server PGP and
the fetching of TLD zone files.
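for concreteness, the bootstrap being discussed amounts to fetching the
PGP-signed root zone and pulling out a TLD's NS records. a rough sketch
(the fetch and signature-verify steps, e.g. against the zone published at
internic, are elided as comments; only the parsing runs, on sample data):

```python
# sketch only: in practice you would first download and PGP-verify the
# signed root zone, e.g. https://www.internic.net/domain/root.zone and
# its detached signature, against a key you already trust.

SAMPLE_ZONE = """\
com.\t172800\tIN\tNS\ta.gtld-servers.net.
com.\t172800\tIN\tNS\tb.gtld-servers.net.
net.\t172800\tIN\tNS\ta.gtld-servers.net.
"""

def tld_ns_records(zone_text: str, tld: str) -> list[str]:
    """return NS rdata for `tld` from zone-file text laid out as
    name / TTL / class / type / rdata, one record per line."""
    out = []
    for line in zone_text.splitlines():
        fields = line.split()
        if len(fields) == 5 and fields[0] == tld + "." and fields[3] == "NS":
            out.append(fields[4])
    return out

print(tld_ns_records(SAMPLE_ZONE, "com"))
# -> ['a.gtld-servers.net.', 'b.gtld-servers.net.']
```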
> When I think embedded, I think of something like a Linksys WRT54G or
> similar. One of those might be too resource strapped to occasionally
> download a trust anchor file or to use something DLV-like to distribute
> DNSCurve names, but then it's probably not going to be doing DNSCurve
> anyway, which was my point.
since a lot of WRT54G's out there are running software which includes a
recursive nameserver today, the choice of "don't run dnscurve" is a change
to the model, which was my point. the dnscurve model favours large complex
servers such as those inside opendns. the dnssec model fits a WRT54G fine,
even with RFC 5011 thrown in.
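to show why RFC 5011 fits a small box: the add-side logic is just a
timestamp per pending key. a toy sketch (deliberately incomplete; real
5011 also has revocation and a missing/removed state machine):

```python
import time

ADD_HOLD_DOWN = 30 * 24 * 3600  # RFC 5011 add hold-down: 30 days

class TrustAnchorTracker:
    """toy sketch of RFC 5011-style trust anchor acceptance: a newly
    seen key becomes trusted only after being continuously observed for
    the add hold-down period. the state is one small dict, which is the
    kind of thing a WRT54G-class box can keep without many flash writes."""

    def __init__(self):
        self.pending = {}    # key_tag -> first-seen timestamp
        self.trusted = set() # accepted trust anchors

    def observe(self, key_tags, now=None):
        """record one sighting of the zone's current DNSKEY key tags."""
        now = time.time() if now is None else now
        for tag in key_tags:
            if tag in self.trusted:
                continue
            first = self.pending.setdefault(tag, now)
            if now - first >= ADD_HOLD_DOWN:
                self.trusted.add(tag)
                del self.pending[tag]
        # a pending key that vanishes from the RRset restarts its hold-down
        for tag in list(self.pending):
            if tag not in key_tags:
                del self.pending[tag]
```

usage: call observe() each time the DNSKEY RRset is validated; a key
first seen at t=0 is promoted once it is still present 30 days later.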
> > for the vast number of autonomous in-house non-outsourced caching
> > resolvers out there, adding validation per DNSSEC is no big deal.
> When you say "adding validation" do you mean writing their own DNSSEC
> implementation, or do you mean upgrading their software to a DNSSEC-
> supporting implementation?
i expect a mix of both.
> Also, are you counting the effort to upgrade stub resolvers to all do
> DNSSEC validation?
i expect these to mostly use TSIG or SIG(0), and depend on the AD bit, with
TSIG keys made into a DHCP parameter. or, since DHCP isn't very secure in
its own right, i expect clients to just set the AD bit and to depend on
being on-subnet with their recursives (hoping therefore not to be spoofed,
though i expect djb's udp source port randomization to also be used,
which should minimize the chances of getting bad data even if spoofing of
on-subnet IP sources is allowed from off-subnet, which isn't very common.)
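the stub behaviour described above (set AD in the query, rely on a random
transaction id and source port for spoofing resistance) can be sketched
roughly as follows; this builds the raw packet by hand rather than using
any particular resolver library:

```python
import os
import struct

def build_query(qname: str, qtype: int = 1) -> bytes:
    """build a minimal DNS query with RD and AD set in the flags word.
    AD in a query asks the recursive for its validation status; actual
    spoofing resistance comes from the random id (and source port)."""
    txid = int.from_bytes(os.urandom(2), "big")  # random transaction id
    flags = 0x0100 | 0x0020                      # RD | AD
    header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
    question = b""
    for label in qname.rstrip(".").split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00" + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

pkt = build_query("www.example.com")
flags = struct.unpack(">H", pkt[2:4])[0]
assert flags & 0x0020  # AD bit is set
# before sending, a stub would bind its UDP socket to port 0 so the
# kernel assigns an ephemeral source port (randomized on modern kernels):
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 0))
```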
> You're saying this like DNSSEC validation and its trust anchors and
> rollover scheme and DLV systems have no complexity,
no, i'm saying this can be done relatively statelessly and using only UDP
and using so few filesystem writes that a flash drive without wear levelling
won't be put out by it.
> or that downloading and verifying a file can't be automated and/or
> integrated into resolvers.
it absolutely can be integrated that way. but at the systemic global level,
we have to count not just the number of transistors but also the number of
sysadmins. a WRT54G probably has more transistors, and can run larger
programs faster, than a VAX 730 could. however, in the heyday of VAX 730's
there were probably never more than 10X more VAX 730's in the world than
there were sysadmins.
> Anyway, if distributing a zone file is so unpalatable, something
> DLV-like would be feasible too.
that's the direction i encourage you to research, vs. per-node FTP and PGP.