[dns-operations] The perils of retroactive DNSSEC validation
Edward Lewis
Ed.Lewis at neustar.biz
Fri Nov 14 21:21:54 UTC 2008
At 20:57 +0100 11/14/08, Florian Weimer wrote:
>The initiator could set a flag, similarly to the RD bit, which
>requests new data. This has been implemented for HTTP, for instance.
This is an old desire - an "ignore the cache, get new data for me"
bit. I'm at a loss as to why it has never been implemented;
perhaps Paul or someone who has been around the protocol longer
than I have recalls the problem.
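For comparison, HTTP spells this out as a request header. A rough
Python sketch of the contrast (the names and the dnspython library
choice are mine, purely illustrative):

    import urllib.request
    import dns.message

    # HTTP: the client can explicitly ask caches for fresh data.
    req = urllib.request.Request(
        "http://example.com/",
        headers={"Cache-Control": "no-cache"})

    # DNS: make_query() sets RD (recursion desired) by default, but
    # there is no standard "ignore your cache" bit among
    # QR/AA/TC/RD/RA (or the DNSSEC-era AD/CD).
    q = dns.message.make_query("example.com", "A")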
>DO has nothing to do with validation.
DNSSEC OK bit - it's meant to be set when the querier has a trust
anchor for a domain in which the QNAME sits. (Note: domain, not
zone.) It isn't explicitly an "I'm planning to do validation"
signal, but unless a querier plans to engage in DNSSEC somehow,
the flag ought to be off.
But that's not important to this thread - it's why I mentioned
that DO is about all the responder can use to detect whether the
querier has the ability or intent to validate.
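For the record, setting DO is a one-liner in most libraries. A
minimal dnspython sketch (the resolver address is a placeholder):

    import dns.message, dns.query

    # want_dnssec=True enables EDNS0 and sets the DO bit.
    q = dns.message.make_query("example.com", "A", want_dnssec=True)
    resp = dns.query.udp(q, "192.0.2.1")  # placeholder resolver

A responder seeing DO=1 can at least infer the querier speaks
DNSSEC; it still can't tell whether validation will actually
happen.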
>> Let's say you get an RRset with a signature valid for November
>> 2008. And for simplicity let's say you have a trust anchor validating
>> the key in the signer field. What does "retroactively validate" mean?
>
>I meant "first populate the cache, then validate", in contrast to
>"validate and store on success".
In the sense of "lazy evaluation" - that is, storing a result and
then using it later on?
I forget the point, but in talking about this with someone else
who dropped by today: perhaps you might think about the caching of
failed data along the lines of negative caching. The benefits and
pitfalls are mostly the same: avoid repetitive queries for results
that are potentially still undesirable, at the cost of missing a
change on the remote end.
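A toy sketch of what such a failure cache might look like (the
class, its name, and the 300-second TTL are all illustrative, not
taken from any implementation):

    import time

    class FailureCache:
        # Remember validation failures for a short TTL,
        # along the lines of negative caching.
        def __init__(self, ttl=300):        # illustrative TTL
            self.ttl = ttl
            self.entries = {}

        def add(self, qname):
            self.entries[qname] = time.monotonic() + self.ttl

        def is_bad(self, qname):
            expiry = self.entries.get(qname)
            if expiry is None:
                return False
            if time.monotonic() >= expiry:  # aged out; re-query
                del self.entries[qname]
                return False
            return True

Same trade as negative caching: fewer repeat queries, at the price
of not noticing a fix on the remote end until the entry expires.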
Storing un-validated data has one other benefit: a query with CD=1
will get the data regardless. (Useful when diving into a
SERVFAIL.) Odds are, in DNS, that the admins are a slow-moving lot
and not much changes. (You may argue this, but that is the design
image of DNS.)
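In dnspython terms, that CD=1 probe looks something like this
(again, the resolver address is a placeholder):

    import dns.flags, dns.message, dns.query

    q = dns.message.make_query("example.com", "A", want_dnssec=True)
    q.flags |= dns.flags.CD   # checking disabled: hand back the
                              # data even where validation fails
    resp = dns.query.udp(q, "192.0.2.1")

Handy for figuring out what exactly failed validation behind a
SERVFAIL.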
As far as the "stickiness" of the DNS (in the sense that a change
at the remote end might not be seen by a cache until the cache
drops the older entry), this is not an issue introduced by DNSSEC.
The issue is buried in the very nature of the protocol. DNS was
never designed to be real-time or on-demand; instead it is
supposed to be scalable and have low turnaround time. This is a
case of "you can't have everything" and "there are design
tradeoffs".
I'm not saying that the world needs to come to DNS. Sure, more
dynamic lookup systems are needed by some applications. For those
applications DNS might be only an 85% solution. DNS is good, but
not perfect. DNSSEC was meant to shore up DNS; DNSSEC can't "fix"
problems without ruining the spirit and original advantages of the
protocol.
Of course there are perils of "post hoc" DNSSEC validation. But
the alternatives aren't any more desirable.
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Edward Lewis +1-571-434-5468
NeuStar
Never confuse activity with progress. Activity pays more.