[dns-operations] Why would an MTA issue an ANY query instead of an MX query?
vjs at rhyolite.com
Sun Jun 10 22:36:15 UTC 2012
> From: Paul Vixie <paul at redbarn.org>
> To: sthaug at nethelp.no
> Cc: dns-operations at mail.dns-oarc.net
> > I'm afraid we may need more control. If my clients are generating a DDoS
> > attack at 20 responses per second, and I limit this to 5 per second -
> > the C&C can get the same effect by mobilizing four times as many clients
> > to do the job.
> no. the client ip is spoofed. the number of spoofers doesn't matter,
I'd say the same thing by pointing out the difference between DNS
query rate limiting and DNS response rate limiting. The DDoS attack
on ns0.rfc1035.com was an attack on the DNS server at ns0.rfc1035.com
that needed DNS query rate limiting. That limit on queries probably
must be upstream of the DNS server itself, lest the server's kernel
queues and network itself get clobbered. Except for mild attacks,
query rate limiting must be done with machinery that can dispose
of more UDP queries and TCP SYNs than any DNS server could handle.
I think that spells "firewall."
Attacks on DNS servers are increased by the bad guy mobilizing more
DNS clients.
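As a rough illustration of the per-source accounting that query rate
limiting implies, here is a minimal fixed-window sketch in Python. This
is only a model of the idea; as noted above, a real deployment would do
this in firewall machinery upstream of the server, and the class and
parameter names here are my own invention:

```python
import time
from collections import defaultdict

class QueryRateLimiter:
    """Toy per-source-IP query rate limiter (fixed one-second windows).

    Illustrative only: real query rate limiting must sit upstream of
    the DNS server, in machinery that can discard more UDP queries and
    TCP SYNs than the server itself could handle.
    """
    def __init__(self, max_qps=100):
        self.max_qps = max_qps
        self.counts = defaultdict(int)          # queries seen per source IP
        self.window = int(time.monotonic())     # current one-second window

    def allow(self, src_ip):
        now = int(time.monotonic())
        if now != self.window:                  # new window: reset all counts
            self.window = now
            self.counts.clear()
        self.counts[src_ip] += 1
        return self.counts[src_ip] <= self.max_qps
```

A burst of queries from one source is cut off once it exceeds the
per-window budget, while other sources are unaffected.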
DNS response rate limiting is a different kind of defense against a
different kind of attack. In that attack, an unknown number of DNS
clients forge the IP source address of the target and reflect packets
off DNS resolvers. It is on those resolvers where response rate limiting
can be applied. Those resolvers don't care how many real clients are
forging the stream of requests.
Attacks on third parties using DNS reflections are increased by the
bad guy sending the queries to more open DNS resolvers.
To keep bad guys from denying DNS service to the target, DNS response
rate limiting must be based on the query name and type as well as the
client IP address. Otherwise, a bad guy could keep all of your DNS
requests from resolving, which can be a more effective denial of service
attack than bandwidth hogging.
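A minimal sketch of the keying this implies, using a token bucket per
(client network, query name, query type). This is only an illustration
of the idea, not how any shipping implementation works, and all names
are invented. Because the query name and type are part of the key, a
forged stream of queries for one name can only suppress answers for
that name, not all answers to the victim's address:

```python
import time
from collections import defaultdict

class ResponseRateLimiter:
    """Toy token-bucket limiter keyed on (client /24, qname, qtype)."""

    def __init__(self, rate_per_sec=5, burst=10):
        self.rate = rate_per_sec
        self.burst = burst
        # Each bucket holds [tokens, last-refill-timestamp].
        self.buckets = defaultdict(lambda: [burst, time.monotonic()])

    @staticmethod
    def _net24(ip):
        # Group clients by /24 prefix; a spoofer can vary low bits freely.
        return ip.rsplit(".", 1)[0]

    def allow(self, client_ip, qname, qtype):
        key = (self._net24(client_ip), qname.lower(), qtype)
        tokens, last = self.buckets[key]
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[key] = [tokens - 1, now]
            return True
        self.buckets[key] = [tokens, now]       # bucket empty: drop response
        return False
```

With this keying, a flood of forged "ANY isc.org" queries claiming to
come from the target exhausts only that one bucket; the target's other
lookups still resolve.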
My hope and almost ambition for the code I've been working on is to
find a default set of response rate limiting parameters that reduces
the nuisance of open resolvers.
Note that response rate limiting can cause problems for local DNS
clients. DNS clients can make a lot of duplicate queries. Consider
browsers. Think about SMTP servers checking PTRs on SMTP clients,
validating envelope domain names, and checking DNSBLs.
If you dare, contemplate DNS clients beyond NAT boxes.
> > On my wishlist, in addition to rate limiting, is also:
> > - Some way of dynamically blackholing clients, based on one or more of
> > -- Rate limit exceeded
> > -- Asking the *same* question (with a large response) repeatedly
> > -- Asking a *specific* question (e.g. ANY isc.org|ripe.net)
> > -- Input from an external system, e.g. via rndc
I'd ask whether that wish list is not mostly a request for rndc access
to the existing BIND9 blacklist/blackhole directives.
And I'd ask if such controls would be used. The work required to
maintain manual real-time blacklists is non-trivial. Very few
organizations are willing to pay the costs of non-trivial, dynamic
blacklist maintenance.
Finally, note that one can use RPZ to filter DNS requests and that
some of the available RPZ zones contain bad actors that might well
be denied DNS responses.
Vernon Schryver vjs at rhyolite.com