[dns-operations] Implementation of negative trust anchors?

Daniel Kalchev daniel at digsys.bg
Fri Aug 23 17:05:36 UTC 2013

On 23.08.13 19:57, Ralf Weber wrote:
> Moin!
> On 23.08.2013, at 09:19, Paul Vixie <paul at redbarn.org> wrote:
>> if nasa.gov had screwed up its delegation or had allowed its public secondary servers to expire the zone due to primary unreachability, i do not think the phone at comcast would have rung less, but i also don't think that comcast would have fixed nasa's error in local policy. we're only talking about this because DNSSEC is new.
> There is a huge difference between DNS outages caused by connectivity and DNSSEC caused outages. Without DNSSEC, screwing up your domain so badly that it is unreachable is very very hard. With DNSSEC you make one small error and your domain goes dark for those who validate. Given that the cost of this is not on the domain owner, but instead on the service providers that validate, I think it is absolutely needed to give them a tool to minimize these costs (NTA).

Paul is correct. Everyone blames DNSSEC because it is new.

Once you learn the DNSSEC procedures and master them, you will discover 
that it is not "easy" to screw up DNSSEC either.

Once upon a time people were afraid to fly. Today they happily line up 
at airport gates.

What is absolutely needed is to move validation to the stub resolver 
and remove it from the caching resolver operated by a "service 
provider". Any service provider will attempt to cut costs wherever it 
can. There is no need to put the burden of validating DNSSEC on the 
caching resolver, as it has no use for the result -- when stubs 
validate, cache corruption is not even a problem.
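For what it's worth, the first step a validating stub takes is to set the
DO ("DNSSEC OK") bit in an EDNS0 OPT record on its queries, so the upstream
server returns the RRSIGs it needs to check. A minimal sketch of that query
construction in plain Python (stdlib only; the function name and defaults
are mine, not from any particular stub implementation):

```python
import secrets
import struct

def build_dnssec_query(name: str, qtype: int = 1) -> bytes:
    """Build a DNS query packet with the EDNS0 DO bit set, as a
    validating stub would, so that RRSIG records are returned."""
    txid = secrets.randbits(16)
    # Header: id, flags (RD=1), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0,
    # ARCOUNT=1 (the OPT pseudo-record below).
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 1)
    # QNAME in wire format: length-prefixed labels, terminated by 0x00.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    # OPT pseudo-RR (RFC 6891): root name, TYPE=41, CLASS=UDP payload
    # size (4096), then ext-rcode=0, version=0, flags=0x8000 (DO bit),
    # RDLENGTH=0.
    opt = b"\x00" + struct.pack("!HHBBHH", 41, 4096, 0, 0, 0x8000, 0)
    return header + question + opt

pkt = build_dnssec_query("nasa.gov")
# The DO bit sits in the high byte of the OPT flags field,
# 4 bytes from the end of this packet.
assert pkt[-4] & 0x80
```

The real work, of course, is what the stub does with the RRSIGs in the
response: building and checking the chain of signatures up to the trust
anchor, which this sketch does not attempt.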

