[dns-operations] latest bind, EDNS & TCP
Mark Andrews
marka at isc.org
Sun Oct 12 20:09:30 UTC 2014
In message <543A6FA5.1080609 at cdns.net>, Simon Munton writes:
> > BIND 9.10 starts at 512. If it gets TC=1 it will retry with TCP.
> > This establishes whether the server supports EDNS or not.
>
> Even if the EDNS data is included in the TC=1 reply?
Yes.  Recursive servers don't have a lot of time to make all the
lookups.  Switching to TCP is quicker than doing another level of
probing when a query fails and you already know the server is up.
Most referrals, even when signed, will still fit in 512 bytes.
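As a rough illustration of that probe (ns.example.net / example.net
below are just placeholders, not a real measurement), you can do much
the same thing by hand with dig: send the query with a 512-byte EDNS
buffer and watch what happens when the answer comes back with TC=1:

    % dig +bufsize=512 +dnssec @ns.example.net example.net DNSKEY
    ;; Truncated, retrying in TCP mode.

dig retries over TCP automatically when it sees TC=1 (you can also
force it with +tcp); named behaves much the same way rather than
spending extra round trips probing larger UDP sizes.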
> > Named will record the actual size of the UDP response it gets and
> > use that, provided it is > 512 as the minimum when it talks to the
> > server again.
>
> So, if a server did not include *any* DNSSEC proof, unless it was able
> to fit it *all* in, is it possible that bind could conclude that it can
> only receive small packets from that server (and fall back to TCP).
>
> Is that correct?
Named records the size of the successful response so that it can use
a higher minimum for that path.
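If you want to see what sizes a given server actually returns, dig
reports the size of each reply it receives (the server and zone below
are placeholders, and the number is only an example):

    % dig +dnssec +bufsize=4096 @ns.example.net example.net DNSKEY | grep 'MSG SIZE'
    ;; MSG SIZE  rcvd: 1625

If a response of that size arrives intact over UDP, that is the sort
of figure named will remember as the minimum for that server instead
of falling all the way back to 512.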
> > Authoritative server operators can help by ensuring ...
>
> This is all useful advice - however, all just as true before the sudden
> increase in use of TCP - so not necessarily useful in trying to identify
> the cause of the sudden increase.
>
> As I have said before, I have *no* idea where the sudden increase in TCP
> usage is coming from, but as bind has changed its policy on its use of
> bufsize, it seems a *possibility*.
>
> A single carrier starting to mishandle UDP fragments seems just as
> likely, hence asking if anybody else is seeing this - although I'd hope
> carriers are more aware of the dangers in this area, now DNSSEC has been
> in the wild for some time - may be wishful thinking.
>
> After further investigating the example I posted, and a number of the
> other ones we are seeing in Vienna, it seems a lot of the TCP traffic is
> coming from name servers owned by T-Mobile, which seem to have deliberately
> limited their bufsize to 512 bytes - they all reply with
>
> ; EDNS: version: 0, flags:; udp: 512
>
> If this is the case, I would expect *all* owners of signed zones to be
> seeing the same behaviour we are seeing - from these servers.
>
> Is limiting the maximum bufsize in this way a new feature?
No.  It has been in named for over a decade.  You can set the advertised
buffer size in named.conf, and named will work between that and 512 bytes
to find a size that works.  You can even set it on a per-prefix basis
using server clauses.
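For reference, a minimal named.conf sketch (the prefix and sizes are
made up; the option names are standard BIND 9):

    options {
            edns-udp-size 4096;    // EDNS buffer size advertised in queries named sends
            max-udp-size 4096;     // largest UDP response named itself will send
    };

    // per-prefix override for servers known to mishandle larger UDP packets
    server 192.0.2.0/24 {
            edns-udp-size 512;
    };

named will still step down from the advertised size towards 512 for a
server whose responses go missing.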
> > "timeout" is never a correct response.
>
> Unfortunately, the most common cause of this will be in transit, so
> outside the knowledge or control of either DNS operator, e.g. incorrect
> filtering of UDP fragments, incorrect fragment handling or low packet
> size support on a DNS proxy (e.g. DSL router), etc.
The most common cause of timeouts is firewalls deciding to block
some DNS or EDNS feature.  Some of Cloudflare's servers block queries
with "AD=1" set (reported).  DO and CD have also caused issues.
There are still firewalls that block EDNS queries.
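A quick way to narrow down which feature a firewall objects to is to
toggle them one at a time with dig (the server and zone names are
placeholders) and see which variants time out:

    % dig +noedns         @ns.example.net example.net SOA    # plain DNS, no EDNS
    % dig +edns=0         @ns.example.net example.net SOA    # EDNS, no flags
    % dig +edns=0 +dnssec @ns.example.net example.net SOA    # DO=1
    % dig +edns=0 +cdflag @ns.example.net example.net SOA    # CD=1
    % dig +edns=0 +adflag @ns.example.net example.net SOA    # AD=1

If only some of those get answers, the blocking is happening in front
of the server, not somewhere in transit.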
The next most common cause is problems with the destination firewall;
this includes filtering fragments.

Problems in transit are rare compared to all of these.  Most problems
are at the end points.
TLD operators also need to ensure that the load balancers in front
of the servers will handle fragmented responses.  I reported
fragmentation issues with one server to a TLD operator a couple of
weeks back.  This is reported to be fixed, but I haven't checked yet.
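One way to check whether a load balancer passes fragmented responses
is to ask for a large answer with truncation ignored, so dig will not
paper over the problem by retrying over TCP (names are placeholders):

    % dig +dnssec +bufsize=4096 +ignore @ns.example.tld example.tld DNSKEY

If that times out while the same query with +bufsize=1400, or over
TCP, works fine, fragment handling is the likely culprit.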
Mark
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742 INTERNET: marka at isc.org