[dns-operations] DNS Flag Day 2020 will become effective on 2020-10-01

Viktor Dukhovni ietf-dane at dukhovni.org
Fri Sep 11 08:29:22 UTC 2020


On Fri, Sep 11, 2020 at 09:38:30AM +0200, Petr Špaček wrote:

> > 1232 is a cargo-cult number. we must not revere as holy those things which fall out of the sky.
> 
> I disagree. That number is based on real-world experience of today's
> DNS resolver vendors - based on their experience with un/reliability
> of real configurations.
> 
> Later research
> https://indico.dns-oarc.net/event/36/contributions/776/attachments/754/1277/DefragDNS-Axel_Koolhaas-Tjeerd_Slokker.pdf
> showed that the estimate based on vendors' experience was pretty good.

Paul is not arguing against avoiding fragmentation; IIRC, his name is on
a draft recommending fragmentation avoidance.  So I think the issue is
really about which numbers to go with.

While 1232 is in the ballpark, it may be too conservative; the case for
1232 rather than, say, 1400 didn't look that compelling.  For most
users the larger number is also fine, and it sometimes even avoids the
(notably rare) cases where a larger value works but the smaller one
does not.
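
For what it's worth, neither candidate is pulled out of thin air: 1232
is the IPv6 minimum MTU of 1280 bytes minus the 40-byte IPv6 header and
8-byte UDP header, while 1400 presumably just leaves comfortable
headroom below a typical 1500-byte Ethernet path.  For anyone who wants
to see how a given zone behaves at either value, here's a rough sketch
using dnspython (the zone name and resolver address below are
placeholders, not anything measured here):

    # Probe how a response behaves at a given advertised EDNS buffer size.
    # Requires dnspython; "example.org." and "9.9.9.9" are placeholders.
    import dns.flags
    import dns.message
    import dns.query

    def probe(qname, server, bufsize):
        q = dns.message.make_query(qname, "DNSKEY", want_dnssec=True,
                                   use_edns=0, payload=bufsize)
        # dns.query.udp returns the (possibly truncated) reply by default,
        # so we inspect the TC bit ourselves.
        r = dns.query.udp(q, server, timeout=3)
        return len(r.to_wire()), bool(r.flags & dns.flags.TC)

    for size in (1232, 1400):
        wire_len, tc = probe("example.org.", "9.9.9.9", size)
        print(f"bufsize={size}: {wire_len} bytes on the wire, truncated={tc}")

A reply that is truncated at 1232 but whole at 1400 is exactly the
(rare) case where the larger value spares a fallback to TCP.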

> Is lowering the failure rate roughly by 0.8 % for IPv4 and by 0.33 % for
> IPv6 significant or not?  That's a matter for each DNS vendor to decide
> because in the end it is the vendors who have to support the software
> and deal with all the obscure failure reports.

It is difficult to predict the trend in typical DNS response sizes;
there are multiple forces pulling in opposing directions:

    1.  DNSSEC adoption is growing, leading to larger responses.

    2.  Adoption of algorithm 13 (and, just starting, 15) is growing,
        lowering packet sizes for many signed zones.

    3.  The TLS folks are planning to put all sorts of new data
        into DNS with SVCB and similar records, which are noticeably
        larger than typical "A" record payloads and will be queried
        often (web browser traffic).

    4.  There is some movement of end-user DNS traffic to DoT/DoH
        (not universally agreed to be a good thing).

So today's measurements are just point-in-time observations, and may
not hold up as well tomorrow.  One could take the view that moving from
today's typical 4096-byte EDNS buffer to 1400, to see how that plays
out, would be a reasonable first step.
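
As a rough illustration of that kind of point-in-time measurement, one
can sample the wire size of a few answer types and see where they land
relative to 1232 and 1400.  Again a dnspython sketch with placeholder
names and resolver (and it needs a dnspython new enough to know the
HTTPS rdatatype):

    # Rough point-in-time look at response sizes vs. 1232/1400.
    # Requires dnspython >= 2.1 for the HTTPS (SVCB-style) rdatatype;
    # the names and resolver are illustrative placeholders.
    import dns.message
    import dns.query

    RESOLVER = "9.9.9.9"
    SAMPLES = [("example.com.", "A"),
               ("example.com.", "AAAA"),
               ("example.com.", "DNSKEY"),
               ("example.com.", "HTTPS")]   # the record browsers will ask for

    for qname, qtype in SAMPLES:
        q = dns.message.make_query(qname, qtype, want_dnssec=True,
                                   use_edns=0, payload=4096)
        r = dns.query.udp(q, RESOLVER, timeout=3)
        size = len(r.to_wire())
        print(f"{qname:<15} {qtype:<7} {size:>5} bytes  "
              f"fits 1232: {size <= 1232}  fits 1400: {size <= 1400}")

Run periodically against a representative query mix, a loop like this
is about the cheapest way to notice when the forces above start pushing
typical answers past whichever limit ends up being picked.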

Or one might want to make the problem go away in one step, even if that
means overshooting the mark.  I guess the "Flag day" target is closer
to that second scenario.  We might not be able to raise the buffer
sizes later as easily as we could lower them again if that proves
necessary.

So I'm tempted to be cautious in the direction of avoiding overshoot,
because then there'd still be a signal that a further correction is
required.  With the overshoot, we'd just be paying an efficiency/latency
cost, but the signal would be much weaker...

-- 
    Viktor.


