[dns-operations] DoS with amplification: yet another funny Unix script
Vernon Schryver
vjs at rhyolite.com
Tue Sep 11 20:22:53 UTC 2012
> From: Colm MacCárthaigh <colm at stdlib.net>
> > Any firewall rule that doesn't compute DNS responses about as well as a DNS server is simplistic.
>
> With the greatest of respect; that thinking is itself simplistic.
> Where I work we concentrate on writing very good firewalls. Sometimes
> these rules even have to parse DNS, just as the DNS server must ...
That "just as the DCC server must" is false. For example, I doubt
that those firewalls do enough DNS computing to recognize and limit a
stream of responses generated from a single wildcard before the responses
have been transmitted by the DNS server. They probably doesn't even
recognize pernicious but simple NXDOMAIN cases. They might but probably
don't notice that a stream of responses are approximately identical
referrals from authoritative servers or approximately identical recursion
from recursive servers. I think DNS rate limiting must do all of that
while not slowing other high volume traffic.
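To make that concrete, here is a minimal sketch in C of what such
response rate limiting might look like. Everything in it -- the table
layout, the hash, the one-second window, the limit -- is invented for
illustration, and is not how BIND's RRL patch or any shipping server
actually does it:

    /* Response rate limiting keyed by (client /24, response class,
     * owner name), so that floods of near-identical answers --
     * wildcard synthesis, NXDOMAIN storms, repeated referrals --
     * are detected before transmission. */
    #include <stdint.h>
    #include <string.h>
    #include <time.h>

    #define BUCKETS 4096
    #define LIMIT_PER_SEC 5          /* illustrative response limit */

    struct bucket {
        uint32_t key;                /* hash identifying the stream */
        time_t   window;             /* start of current 1-second window */
        unsigned count;              /* responses sent in this window */
    };

    static struct bucket table[BUCKETS];

    static uint32_t hash32(const void *p, size_t len, uint32_t seed)
    {
        const unsigned char *s = p;
        uint32_t h = seed ^ 2166136261u;              /* FNV-1a */
        while (len--) { h ^= *s++; h *= 16777619u; }
        return h;
    }

    /* Return nonzero if this response should be dropped (or, as real
     * implementations prefer, truncated to force a retry over TCP).
     * For answers synthesized from a wildcard, `owner` must be the
     * wildcard name itself, not the query name, so the whole
     * synthesized stream collapses into a single bucket. */
    int rrl_drop(uint32_t client_ip, int rcode, const char *owner)
    {
        uint32_t net = client_ip & 0xffffff00u;       /* aggregate by /24 */
        uint32_t h = hash32(&net, sizeof net, (uint32_t)rcode);
        if (rcode == 0)              /* NXDOMAIN streams share one bucket */
            h = hash32(owner, strlen(owner), h);

        struct bucket *b = &table[h % BUCKETS];
        time_t now = time(NULL);
        if (b->key != h || b->window != now) {
            b->key = h; b->window = now; b->count = 0;
        }
        return ++b->count > LIMIT_PER_SEC;
    }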
> An in-the-path firewall actually has access to more data than the DNS
> server alone does. For example, it can build up a simple profile of
> expectation values for IP TTLs on a per-source network basis.
That is also overstated. In practice DNS servers don't do such things,
but they could. (Yes, you can get UDP/IP headers in a modern BSD
UNIX daemon.) I doubt that the computing costs of tracking IP TTLs
would be worthwhile for a DNS server with high legitimate load. I
wonder if administrative costs such as dealing with the false positives
due to route flapping or re-homing would be worthwhile even in a
firewall. Remember the best of all firewalls, which is advertised at
http://www.ranum.com/security/computer_security/papers/a1-firewall/
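As a sketch of what such per-network TTL profiling would involve, here
is a toy version keeping an exponentially weighted mean TTL per /24.
The table size, learning period, and deviation threshold are all
invented for illustration, and route flapping or re-homing would trip
it exactly as described above:

    #include <stdint.h>

    #define NETS 65536               /* toy table: 16-bit hash of the /24 */

    struct ttl_profile {
        float    mean;               /* EWMA of observed IP TTLs */
        unsigned samples;
    };

    static struct ttl_profile profiles[NETS];

    static unsigned net_index(uint32_t src_ip)
    {
        uint32_t net = src_ip >> 8;               /* /24 prefix */
        return (net ^ (net >> 16)) & (NETS - 1);
    }

    /* Return nonzero if this packet's IP TTL deviates sharply from the
     * running mean for its source /24, which may indicate spoofing. */
    int ttl_suspicious(uint32_t src_ip, uint8_t ip_ttl)
    {
        struct ttl_profile *p = &profiles[net_index(src_ip)];
        if (p->samples < 100) {                   /* still learning */
            p->samples++;
            p->mean += ((float)ip_ttl - p->mean) / (float)p->samples;
            return 0;
        }
        float dev = (float)ip_ttl - p->mean;
        if (dev < 0.0f)
            dev = -dev;
        p->mean = 0.99f * p->mean + 0.01f * (float)ip_ttl;  /* slow EWMA */
        return dev > 10.0f;          /* illustrative threshold */
    }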
> It can
> use all IP data for that profile; DNS, HTTP, whatever it's seen. Those
> expectation values can then be used to detect and reject spoofed
> packets, in combination with other statistical scores. That's just one
> simple example - there are many more.
Firewalls have good and valuable uses in lower layer defenses. However,
firewalls are usually weak crutches for applications. They are very
popular for quick plugs in application holes, because badly designed
and written applications are so popular, and it's a lot easier to kludge
something into a firewall than to fix the typical lame application code
implementing a worse-than-stupid de facto standard protocol.
Even good protocols have weaknesses. For example, every protocol, and
especially any using UDP, must have basic features including:
- optional authentication & authorization
- exponential or steeper backoff for retries (see the sketch after this list)
- rate limiting on requests from evil as well as innocently broken clients
The original DNS lacked all of those.
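As a sketch of the second item, here is exponential backoff with jitter
for retries. The constants are illustrative, not a recommendation:

    #include <stdlib.h>

    /* Delay in milliseconds before retry number `attempt` (0-based):
     * doubled each time, capped, with random jitter so that many
     * broken clients do not retry in lockstep. */
    unsigned backoff_ms(unsigned attempt)
    {
        const unsigned base = 500;                /* first retry ~0.5 s */
        const unsigned cap  = 64000;              /* never wait > 64 s */
        unsigned delay = base << (attempt < 7 ? attempt : 7);
        if (delay > cap)
            delay = cap;
        /* jitter: pick uniformly in [delay/2, delay] */
        return delay / 2 + (unsigned)(rand() % (delay / 2 + 1));
    }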
> The other big reason is pragmatism; Unix daemons using recv() are
> extremely limited in the rate at which they can process packets. Far,
> far higher throughput is possible via other techniques that involve
> handling batches of packets at much smaller timescales. A nice benefit
> of the approach is that it frees higher-level development teams from
> having to worry about low-level mitigation, and that the work is
> re-usable across many products. During real attacks, if a packet makes
> it to the dns server, the game is already lost.
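(The "other techniques" presumably include batched receive interfaces
such as Linux's recvmmsg(2), which drains many datagrams in one system
call instead of one per recv(). A rough sketch, assuming Linux; the
batch and buffer sizes are arbitrary:

    #define _GNU_SOURCE
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    #define BATCH 64
    #define MSGSZ 512                /* classic DNS/UDP payload limit */

    /* Read up to BATCH datagrams from a UDP socket in one system call.
     * Returns the number of messages received, or -1 on error. */
    int read_batch(int sock, char bufs[BATCH][MSGSZ], int lens[BATCH])
    {
        struct mmsghdr msgs[BATCH];
        struct iovec   iovs[BATCH];

        memset(msgs, 0, sizeof msgs);
        for (int i = 0; i < BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len  = MSGSZ;
            msgs[i].msg_hdr.msg_iov    = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }
        int n = recvmmsg(sock, msgs, BATCH, MSG_DONTWAIT, NULL);
        for (int i = 0; i < n; i++)
            lens[i] = (int)msgs[i].msg_len;
        return n;
    }
)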
I disagree with most of that. Since it is about general philosophies
and what is theoretically possible rather than operational issues or
even DNS theory, I'll resist the impulse to pick it apart.
Vernon Schryver vjs at rhyolite.com