[dns-operations] [Ext] Obsoleting 1024-bit RSA ZSKs (move to 1280 or algorithm 13)
Brian Dickson
brian.peter.dickson at gmail.com
Fri Oct 22 03:28:15 UTC 2021
On Thu, Oct 21, 2021 at 7:54 PM George Michaelson <ggm at algebras.org> wrote:
> I would be concerned that the language which makes the recommendation
> HAS to also note the operational problems. You alluded to the UDP
> packet size problem, and implicitly the IPv6 fragmentation problem. What
> about the functional limitations of the HSM and associated signing
> hardware? I checked, and the units we operate (for other purposes than
> DNSSEC) don't support RSA1280. They do RSA1024 or RSA2048. This is
> analogous to the recommendation I frequently make casually, to stop
> using RSA and move to the shorter cryptographic signature algorithms
> to bypass the size problem: They are slower, and they aren't supported
> by some hardware cryptographic modules.
>
Okay, yes, this was something I wasn't taking into consideration.
(My apologies to everyone.)
Everything is, to some degree or another, a trade-off.
So, out of curiosity (and for a single data point I suppose), which non-RSA
algorithms does your HSM support?
If it includes one of the elliptic curve algorithms, I think the
interesting thing would be the respective multipliers on slowdown and
crypto strength (work factor).
E.g. a 50x slowdown that produces, say, a 1000x increase in work factor
would be worth considering seriously, but it is unclear what the actual
increase would be.
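For a rough sense of the RSA side of that question, here is a
back-of-envelope sketch (my own illustration, using the standard GNFS cost
estimate with the o(1) term ignored, so treat the absolute figures as
approximate and look mainly at the ratios):

import math

def gnfs_bits(modulus_bits):
    # Symmetric-equivalent strength from the GNFS cost estimate
    # L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)).
    ln_n = modulus_bits * math.log(2)
    cost = math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))
    return math.log2(cost)

base = gnfs_bits(1024)
for bits in (1024, 1280, 1536, 2048):
    print("RSA-%d: ~%.0f bits (~2^%.0f x the work of RSA-1024)"
          % (bits, gnfs_bits(bits), gnfs_bits(bits) - base))

By that estimate, even the single 1024 -> 1280 increment costs the attacker
roughly 2^9 times as much work.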
Additionally, I think anyone looking at what to do would probably need to
determine two parameters:
- Natural signing rate (e.g. due to changes in the data to be signed)
- Re-signing time (number of entries divided by signing speed)
There are places on the performance curves that are unsupportable, e.g.
when the number of entries is large enough and the natural signing rate is
high enough that the re-signing time becomes effectively infinite: the
signer can never catch up.
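As a toy model (the function and the numbers are mine, purely
illustrative):

def resign_time_days(entries, signing_rate, churn_rate):
    # signing_rate: signatures/sec the HSM can sustain.
    # churn_rate: records/sec changing and needing immediate re-signing.
    # Only the surplus capacity is available to walk the rest of the zone.
    surplus = signing_rate - churn_rate
    if surplus <= 0:
        return float("inf")  # the signer can never catch up
    return entries / surplus / 86400

# A 10M-record zone on a 500 sig/sec HSM with 450 records/sec of churn:
print(resign_time_days(10_000_000, 500, 450))  # ~2.3 days; at churn >= 500, inf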
In that situation, there are not a lot of alternatives: replace the HSM(s);
scale horizontally with additional HSMs operating in parallel; or use a
faster (and presumably weaker) algorithm.
A fourth option is to perform signing using non-HSM equipment, which has
challenges of its own.
> Even without moving algorithm, Signing gets slower as a function of
> keysize as well as time to brute force. So, there is a loss of
> "volume" of signing events through the system overall. Time to resign
> zones can change. Maybe this alters some operational boundary limits?
> (from what I can see, 1024 -> 1280 would incur a 5x slowdown, 1024 -> 2048
> would be a 10-20x slowdown, and RSA to elliptic curve could be 50x or worse
> slowdown)
>
> If the case for "bigger" is weak, and the consequences of bigger are
> operational risks, maybe bigger isn't better, if the TTL-bounded
> lifetime is less than the brute-force risk?
>
> A totally fictitious example, but... let's pretend somebody has locked
> into a hardware TPM, and it simply won't do the recommended algorithm,
> but would power on with 1024 until the cows come home. If the TTL was
> kept within bounds, and resigning could be done in a 10-day cycle rather
> than a 20-day cycle (for instance), I don't see why the algorithm
> change is the best choice.
>
>
You are correct, and much depends on things like stability of the zone and
total zone size.
The ultimate limit is really the utilization level of the signing hardware.
Once the hardware is operating full-out constantly, it is only a matter of
time before the theoretical adversarial risk exceeds the zone operator's
risk tolerance.
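As a hedged rule of thumb (real HSM throughput varies and should be
measured, not assumed), textbook RSA signing cost grows roughly with the
cube of the modulus size, so that utilization headroom shrinks quickly:

# Rough relative RSA signing cost, assuming ~cubic scaling in modulus size.
for bits in (1280, 1536, 2048):
    print("RSA-%d: ~%.1fx the signing cost of RSA-1024"
          % (bits, (bits / 1024) ** 3))

An HSM at, say, 40% utilization with RSA-1024 could absorb the ~2x cost of
RSA-1280, but the ~8x cost of RSA-2048 would push it past saturation.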
If hardware performance generally continues to improve along the current
exponential curve (e.g. CPU and GPU performance), existing signing hardware
will eventually become obsolete and need replacing.
Brian
> On Fri, Oct 22, 2021 at 11:46 AM Brian Dickson
> <brian.peter.dickson at gmail.com> wrote:
> >
> >
> >
> > On Wed, Oct 20, 2021 at 10:22 AM Paul Hoffman <paul.hoffman at icann.org>
> wrote:
> >>
> >> On Oct 20, 2021, at 9:29 AM, Viktor Dukhovni <ietf-dane at dukhovni.org>
> wrote:
> >>
> >> > I'd like to encourage implementations to change the default RSA key
> size
> >> > for ZSKs from 1024 to 1280 (if sticking with RSA, or the user elects
> RSA).
> >>
> >> This misstates the value of breaking ZSKs. Once a ZSK is broken, the
> attacker can impersonate the zone only as long as the impersonation is not
> noticed. Once it is noticed, any sane zone owner will immediately change
> the ZSK again, thus greatly limiting the time that the attacker has.
> >
> >
> > This makes assumptions about what the ZSKs are signing, and about what
> the attacker does while that ZSK has not yet been replaced.
> >
> > For example, if the zone in question is a TLD or eTLD, then the records
> signed by the ZSK would be almost exclusively DS records.
> > DS records do change occasionally, so noticing a changed DS with a valid
> signature is unlikely for anyone other than the operator of the
> corresponding delegated zone.
> > An attacker using such a substituted DS record can basically spoof
> anything they want in the delegated zone, assuming they are in a position
> to do that spoofing.
> > And how long those results are cached is controlled only by the resolver
> implementation, the operator configuration, and the attacker.
> >
> > So, the timing is not the duration until the attack is noticed
> (NOTICE_DELAY), it is the range MIN_TTL to MIN_TTL+NOTICE_DELAY (where
> MIN_TTL is min(configured_TTL_limit, attacker_supplied_TTL)).
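> >
> > As a quick illustration (a hedged sketch: the parameter names follow the
> > text above, and the example numbers are made up):
> >
> > def attack_window(configured_ttl_limit, attacker_supplied_ttl, notice_delay):
> >     # MIN_TTL: how long resolvers keep the forged record after the
> >     # attacker's last successful injection.
> >     min_ttl = min(configured_ttl_limit, attacker_supplied_ttl)
> >     return (min_ttl, min_ttl + notice_delay)
> >
> > # Resolver caps TTLs at 1 day, attacker asks for a week, and the spoof
> > # goes unnoticed for 3 days: exposure is somewhere between 1 and 4 days.
> > print(attack_window(86400, 7 * 86400, 3 * 86400))  # (86400, 345600)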
> >
> > The ability of the operator of the delegated zone to intervene with the
> resolver operator is not predictable, as it depends on what relationship,
> if any, the two parties have, and how successful the delegated zone
> operator is in convincing the resolver operator that the cached records
> need to be purged.
> >
> > Stronger ZSKs at TLDs are warranted even if the incremental improvement
> is less than what cryptographers consider interesting, IMNSHO. It's not an
> all-or-nothing thing (jump by 32 bits or don't change), it's a question of
> what reasonable granularity should be considered in increments of bits for
> RSA keys. More of those increments is better, but at least 1 such increment
> should be strongly encouraged.
> >
> > I think Viktor's analysis justifies the suggestion of 256 bits (of RSA)
> as the granularity, and thus recommending whichever of the series 1280,
> 1536, 1792, 2048 the TLD operator is comfortable with, with recommendations
> against going too big (and thus tripping over the UDP-TCP boundary).
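> >
> > A rough size check of that boundary (a sketch with assumptions: it counts
> > only two RSA DNSKEY RDATAs plus one RRSIG, assumes a 2048-bit KSK, and
> > real responses, with owner names, headers, and rollover keys, run larger):
> >
> > EDNS_LIMIT = 1232  # commonly recommended EDNS buffer size
> >
> > def rsa_dnskey_rdata(bits):
> >     # flags(2) + protocol(1) + algorithm(1) + exponent length(1)
> >     # + public exponent(3) + modulus(bits/8)
> >     return 2 + 1 + 1 + 1 + 3 + bits // 8
> >
> > for zsk in (1280, 1536, 1792, 2048):
> >     size = (rsa_dnskey_rdata(zsk) + rsa_dnskey_rdata(2048)
> >             + 18 + 2048 // 8)  # RRSIG fixed fields + KSK signature
> >     print(zsk, size, "OK" if size < EDNS_LIMIT else "TCP likely")
> >
> > The single-key numbers fit comfortably; it is rollovers (extra keys and
> > signatures) that push a DNSKEY response toward the boundary.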
> >
> >>
> >> In summary, it is fine to propose that software default to issuing
> larger RSA keys for ZSKs, but not with an analysis that makes a lot of
> unstated guesses. Instead, it is fine to say "make them as large as
> possible without automatically needing TCP, and ECDSA P256 is a
> great choice at a much smaller key size".
> >
> >
> > I'm fine with adding those to the recommendations (i.e. good guidance
> for the rationale for picking ZSK size and/or algorithm), with the added
> emphasis on not doing nothing.
> >
> > Brian