[dns-operations] NSEC3PARAM iteration count update

Viktor Dukhovni ietf-dane at dukhovni.org
Sun Jan 7 19:56:28 UTC 2018



> On Jan 7, 2018, at 1:55 PM, Olafur Gudmundsson <ogud at ogud.com> wrote:
> 
> Sorry for the late response,

The issue is longstanding, so there's no rush; your follow-up is appreciated.

>> Of the 453 domains with iteration counts above 150, only 4 have counts
>> in excess of 2500, which are unsupported by many resolvers with the
>> default RFC5155 iteration count limits.  The remaining "interesting"
>> domains are the 449 with iterations in the interval [151,2500].
> 
> RFC5155's advice is in hindsight bad; it was written from the point of
> view of "more work is better protection".
> Advances in graphics cards have shown that NSEC3 is nothing but an
> obfuscation mechanism, [...]

The sparse signing (opt-out bit) feature of NSEC3 was, and I think still
is, useful, despite the fact that it is sometimes misused by small
"leaf" domains that don't have a large number of insecure delegations.

The salt value and iteration counts above 0 (which, as you note below,
already means one application of the hash) turned out to be largely
counter-productive.  Verisign, for example, seems to understand this
quite clearly: the ".com" zone has an empty salt and 0 iterations, but
uses opt-out:

   CK0POJMG874LJREF7EFN8430QVIT8BSM.com. NSEC3 1 1 0 - CK0Q1GIN43N1ARRC9OSM6QPQR81H5M9A  NS SOA RRSIG DNSKEY NSEC3PARAM
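
For concreteness, the owner name above is simply the RFC5155 hash of
the zone apex.  A minimal Python sketch (not production code; the
function name is mine) that reproduces it with SHA-1, an empty salt
and 0 additional iterations:

    import base64    # base64.b32hexencode needs Python >= 3.10
    import hashlib

    def nsec3_hash(name, salt=b"", iterations=0):
        # Wire-format the owner name, lowercased per RFC5155.
        labels = [l for l in name.lower().rstrip(".").split(".") if l]
        wire = b"".join(bytes([len(l)]) + l.encode("ascii") for l in labels)
        wire += b"\x00"    # terminating root label
        digest = hashlib.sha1(wire + salt).digest()
        for _ in range(iterations):    # "iterations" = additional rounds
            digest = hashlib.sha1(digest + salt).digest()
        # NSEC3 owner names are base32 with the Extended Hex alphabet.
        return base64.b32hexencode(digest).decode("ascii")

    print(nsec3_hash("com."))    # CK0POJMG874LJREF7EFN8430QVIT8BSM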


> The harm from NSEC3 iterations is mainly felt by resolvers, but it is
> easy to generate an attack against authoritative servers that serve
> zones with high iteration counts, causing them to fall over.

Yes, I expect the zone with an iteration count of 65535 would take
a noticeable CPU hit at very modest query rates:

   $ openssl speed sha1
   ...
   Doing sha1 for 3s on 64 size blocks: 10499108 sha1's in 3.01s
   ...

So on my CPU, for example, the 65536 SHA-1 applications implied by an
iteration count of 65535 take ~19ms, and the server consumes a full CPU
core at just ~53 queries / sec!
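
The back-of-the-envelope arithmetic, for anyone who wants to plug in
their own "openssl speed" numbers (and this is if anything optimistic,
since a single negative response can require more than one NSEC3 hash
computation):

    ops_per_sec = 10_499_108 / 3.01    # openssl: 64-byte SHA-1 ops/sec
    hashes_per_name = 65_535 + 1       # count 65535 = 65536 applications
    ms_per_name = 1000 * hashes_per_name / ops_per_sec
    print(round(ms_per_name, 1))                  # ~18.8 ms per hashed name
    print(round(ops_per_sec / hashes_per_name))   # ~53 queries/sec/core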


> I would like to see someone write an RFC specifying a max iteration
> count of <= 10 for all algorithms.

If the number is to be variable (and not just "0", really one hash
application, like ".com"), then the upper bound should be one less than
a power of 2, so that the leading bits of the 16-bit iteration field
can be reserved (a sketch of the resulting check follows the list).
So the draft could choose:

   0 - Accept the reality that further hashing is futile.
  15 - Support a modest additional work-factor.
 127 - This covers the vast majority of deployed systems, as counts
       above 100 are quite infrequent.
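
All three candidates are of the form 2^k - 1, so a compliant count
occupies only the low-order bits of the field.  A trivial sketch of the
resulting check (whichever cap is chosen):

    def iterations_ok(count, cap=127):
        # The cap must be of the form 2**k - 1 for the bitmask to work.
        assert (cap & (cap + 1)) == 0
        return (count & ~cap) == 0    # equivalently: 0 <= count <= cap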

That said, the reduced limit would initially apply only to servers;
resolvers would still need to support the RFC5155 limits for some time,
as it will be a while before the message gets out to all the server
operators and domain owners who'd need to make changes.

>> Of these:
>> 
>> * 258 have 512-bit P256 (algorithm 13) keys and 300 iterations.  This
>>   exceeds the RFC5155 iteration limits and breaks secure DoE for many
>>   resolvers.  All these domains are hosted at "ns1.desec.io".
> Nit: P256 keys are 256 bits long but have two 256-bit numbers in the
> key; they are equivalent to a ~3100-bit RSA key.

Yes, I know.  I am not sure how resolvers determine the key length.  Is it
from the bit count of the published key, or the algorithm bit strength?
Either way, though, the number is < 1024, so the limit of 150 is applied
by many resolvers, in particular "unbound" (and IIRC BIND) in their default
configurations.
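
For reference, here are the RFC5155 (Section 10.3) limits in question,
as a small sketch of the check validators apply:

    def rfc5155_iteration_limit(key_bits):
        # RFC5155, Section 10.3: the maximum iteration count validators
        # must handle, keyed on the size of the zone's smallest key.
        if key_bits <= 1024:
            return 150
        if key_bits <= 2048:
            return 500
        return 2500    # keys up to 4096 bits

    # A 256-bit P256 key lands in the <= 1024 bucket, hence the limit
    # of 150 applied to the 300-iteration zones above.
    assert rfc5155_iteration_limit(256) == 150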

>> [...] So the problem described in the draft exists in the wild,
>> but is, for the moment at least, quite infrequent.  The vast majority of
>> domains use sensibly low counts (with 1 being the most popular value, though
>> frankly 0 would have done just as well, but is perhaps not as well understood).
> 
> I think the NSEC3 spec is counterproductive in specifying the iteration
> count as additional iterations, so value 1 is actually two iterations.

Yes, I think this leaves some operators unsure of the meaning, and so
they use "1".  More would likely have chosen "0" (which actually means
one hash application) if the meaning were more obvious.

>> With a bit of luck, better documentation and tools that warn users to
>> not exceed 150 (regardless of key size) will keep the problem largely
>> in check.
> 
> 150 is still too high IMHO,

As I mentioned above, resolvers sadly still need to support the
RFC5155 limits for some time.  You're right, of course, that if
we're clarifying the limits for authoritative servers we may as
well make the changes that make the most sense in hindsight.  So
do you think the server limit should be "0", "15" or "127"?

One might even encourage the maintainers of authoritative server
software to artificially cap the user-specified iteration count
at the recommended value unless an additional "I really mean it"
override is also configured.  With that, the next zone resigning
would have an iteration count of min(configured, 5155bis limit).
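
A minimal sketch of that behaviour (the cap value and the override
knob are of course hypothetical names):

    RECOMMENDED_MAX = 127    # or 15, or 0, per eventual WG consensus

    def effective_iterations(configured, i_really_mean_it=False):
        if i_really_mean_it:
            return configured    # operator explicitly overrode the cap
        return min(configured, RECOMMENDED_MAX)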

>> So an update to RFC5155 that sets a flat iteration limit of 127 and
>> reserves the leading 9 bits of the iteration count would IMHO be a
>> good idea.
> 
> ^^^ Retiring NSEC3 is a better idea,

I don't think that's practical: NSEC3 is too widely deployed, and
retiring it would be too disruptive.  I want DNSSEC adoption to grow,
not to be disrupted by another major protocol change.

NSEC3 can be simplified:

  1.  Encourage users not to bother with a salt: an empty salt saves
      some bandwidth and CPU and is just as effective.  Changing the
      salt once a zone is signed is too difficult anyway.

  2.  Encourage users to set the iteration count to 0 (really one hash
      application), and cap it at one of 0, 15 or 127 (based on WG/IETF
      consensus).  A quick way to check a zone against this advice is
      sketched below.
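
A rough checker along those lines, using dnspython (an untested sketch,
assuming dnspython 2.x and its dns.resolver.resolve() interface):

    import dns.resolver

    def check_nsec3param(zone):
        for rr in dns.resolver.resolve(zone, "NSEC3PARAM"):
            if rr.iterations > 0:
                print(f"{zone}: {rr.iterations} extra iterations; 0 suffices")
            if rr.salt:
                print(f"{zone}: non-empty salt; '-' (empty) works just as well")

    # e.g. check_nsec3param("some-nsec3-signed-zone.example")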

> and NSEC5 is not the solution, 

Indeed, in my view that falls into the "disrupted by another major protocol
change" category.  Cool crypto research, but not operationally sound.

>> In any case, protocols with integral fields where only a subset of the
>> values is supported, and where the supported subset depends on other
>> parameters, have a design flaw that should be avoided.
> 
> +1 

Thanks for the moral support.

-- 
	Viktor.