[dns-operations] "bad infosec economics " Re: <something paul wrote>

David Miller dmiller at tiggee.com
Tue Jun 12 20:25:05 UTC 2012

On 6/12/2012 12:34 PM, Edward Lewis wrote:
> At 14:18 +0000 6/10/12, Paul Vixie wrote:
>> thinking about or acting against ANY is bad infosec economics.
> This I agree with.  Here are some of my knee-jerk, anti-filtering
> thoughts:
> 1 - DNS providers are paid to answer questions, not drop traffic.

True, to a point.  DNS providers are paid to answer "good" questions. 

The very first response I get from a customer who receives a large query
bill generated from an attack is that they did not want us to answer
(i.e. they do not want to pay for) *those* queries.

> 2 - Rate limits that are not managed eventually become the reason why
> address blocks are contaminated.  Recall the 512 byte limit devices.
> 3 - Whenever I've considered limiting any pattern I think of a few
> other patterns that could be substituted in short order if it's an APT.
> I agree that rate limiting is an effective short-term, immediate
> reaction to events. But once you leave that time horizon they become
> liabilities - "permanent fixes to temporary solutions."

Agreed.  All filters and/or limits need to be maintained over time. 
However, general rate limits could be set so high that "good" traffic
never reaches them, so exceeding them is clearly "bad" traffic.  The
right rate for these general limits varies from infrastructure to
infrastructure - 1,000 qps, 10,000 qps, you choose (yes, that limit
will likely change over time).
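The kind of coarse, set-and-forget ceiling described above can be
sketched as a fixed one-second window per source address (this is my
illustration, not anyone's production code, and the 10,000 qps default
is one of the illustrative figures from the paragraph, not a
recommendation):

```python
import time
from collections import defaultdict

# Illustrative default ceiling - high enough that legitimate resolvers
# should never hit it; tune per infrastructure.
LIMIT_QPS = 10_000

class WindowCounter:
    """Per-source query counter over a fixed one-second window."""

    def __init__(self, limit=LIMIT_QPS):
        self.limit = limit
        self.window = None             # the current one-second bucket
        self.counts = defaultdict(int)

    def allow(self, src_ip, now=None):
        """Return True if this query is under the per-source limit."""
        now = int(now if now is not None else time.time())
        if now != self.window:         # new second: reset all counters
            self.window = now
            self.counts.clear()
        self.counts[src_ip] += 1
        return self.counts[src_ip] <= self.limit
```

A real deployment would use a sliding window or token bucket to avoid
boundary bursts, but the point stands: a limit this coarse only ever
fires on traffic nobody wants to bill for.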

> Another part of this discussion centered around determining the source
> of the offensive traffic.  Again, "bad infosec economics" comes here -
> while it is interesting to diagnose, rarely does the search for
> answers to this question lead to a permanent solution. We've
> collectively known about Dan Bernstein's use of t=ANY for a decade and
> we know he's reluctant either to heed calls for change or to make the
> change.  10+ years.  Nothing's changed - except the newest younger
> crowd learns about this old tale.

Nothing has changed and qmail's installed base remains about the only
reason that we haven't thrown ANY queries onto the refuse pile.

Since we can't disregard ANY queries, it would be just super if we could
stop throwing every new DNS feature into the zone apex, further bloating
the ANY responses for the apex of domains.  Just a thought.
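To put back-of-the-envelope numbers on why apex bloat matters for
reflection attacks (both sizes below are illustrative, not
measurements):

```python
# A spoofed ANY query is tiny on the wire; a DNSSEC-signed apex ANY
# response can be kilobytes.  The ratio is the bandwidth multiplier
# the attacker gets per reflecting server.
query_bytes = 70        # illustrative: IP + UDP + DNS question
response_bytes = 3000   # illustrative: signed apex ANY answer

amplification = response_bytes / query_bytes
print(f"amplification factor: {amplification:.1f}x")  # roughly 40x
```

Every record type added at the apex pushes `response_bytes` up without
changing `query_bytes` at all.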

> The suggestion to limit EDNS0 to "a smaller size" might be the first
> step to decent (but still suboptimal) improvement.  Guessing that the
> malicious actor seeks two things - a large, valid and reputable chunk
> of data to throw at a victim and a reputable and capable address from
> which to throw it, perhaps at least limiting the data size in a way
> that does not cause choking to legitimate uses is a good thing.


> I've said in presentations that the same fertile ground that lets DNS
> be the beast that it is also enables DDoS (rooted in the nature of
> UDP).  The fact that we are still talking DDoS prevention many years
> after we started is empirical evidence that DDoS is a hard problem to
> solve.  Rate limiting is one technique, not a cure-all.  If it was,
> we'd not be talking about it in 2012.  So, use it with caution.

The only real solution is for bandwidth providers to filter garbage at
their edges.  Unfortunately, there is little/no economic incentive for
them to do this and (just like your 1. above) they posit that:
1. Bandwidth providers are paid to route packets, not drop traffic.

There is also, of course, little consensus on the definition of
"garbage".  However, if nothing foundational changes, then we will have
the exact same discussion in 2022.

> PS - One possibility, instead of simply not responding, send back
> rcode=REFUSED.

I agree with Tony Finch's reply - TRUNC is better than REFUSED.
Setting TC=1 makes a legitimate resolver retry the question over TCP,
while a spoofed source gets only a tiny, useless response back.
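At the wire level, sending truncation is cheap: echo the query ID and
question with QR and TC set and all answer counts at zero.  A minimal
sketch (the function name is mine; per RFC 1035 the header is 12 bytes
and TC is bit 0x0200 of the flags word):

```python
import struct

def truncated_reply(query: bytes) -> bytes:
    """Build a minimal TC=1 reply from a raw DNS query packet."""
    qid, flags = struct.unpack("!HH", query[:4])
    opcode = flags & 0x7800            # preserve the opcode bits
    rd = flags & 0x0100                # preserve RD as the client sent it
    # QR=1 (response) | TC=1 (truncated), RCODE=0 (NOERROR)
    reply_flags = 0x8000 | 0x0200 | opcode | rd
    qdcount = struct.unpack("!H", query[4:6])[0]
    # counts: QDCOUNT copied, ANCOUNT/NSCOUNT/ARCOUNT all zero
    header = struct.pack("!HHHHHH", qid, reply_flags, qdcount, 0, 0, 0)
    return header + query[12:]         # echo the question section
```

The reply is barely larger than the query, so a reflector answering
this way offers essentially no amplification.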

