[dns-operations] DNS ANY record queries - Reflection Attacks
eosterweil at verisign.com
Tue Sep 11 22:21:56 UTC 2012
On Sep 11, 2012, at 5:00 PM, Vernon Schryver wrote:
>> From: Eric Osterweil <eosterweil at verisign.com>
>> So, I don't understand something... If you see a lot of identical
>> responses from an authority, could that not be because it is an authority
>> for those responses? How do you distinguish a netblock with multiple
>> resolvers, or anycast resolvers?
> The BIND RRL code is part of the resolver. It does not "see a lot of
> identical responses from an authority" except when it is the authority.
Fair enough, except I'm pretty sure some of the deployments being discussed (even in this thread) are at the authority (not the resolver)... Note the OP:
``We run a bunch of authoritative servers and have recently observed activity best described in a post we found here: https://isc.sans.edu/diary/DNS+ANY+Request+Cannon+-+Need+More+Packets/13261''
Clarify the comment?
>> Perhaps more directly, are you
>> dropping responses from legitimate clients and how do you feel about
>> them being collateral damage?
> Paul Vixie and I are not advocating DNS rate limiting in firewalls.
> We're talking about rate limiting in the hosts at the ends of the
Again, this thread started somewhere else. Clearly, I agree that people should be able to manage their own user experiences. ;)
>> So, every identical response either gets dropped or gets its TC bit set?
> No, every *excessive* identical response is either not sent (dropped)
> or a tiny TC=1 response is sent instead.
Wait, are we still talking about the resolver? This seems to indicate a different deployment model than your comment above (why would I send a TC bit to my stub)?
>>> A DNS client that retransmits N times to a DNS server that answers
>>> with TC=1 50% of the time will get an answer to 1-(0.5)^N of its
>>> queries. For N=4, it will get a TC=1 answer 94% of the time.
>> Wait, I'm very confused... The above sounds like you respond to
>> 94% of the reflector attack queries (which furthers the attack).
> No, I was pointing out that P(R1&R2&R3&R4)=P(R1)*P(R2)*P(R3)*P(R4)
> Given a uniform drop probability of 50%, the probability that all 4
> responses to an initial request and its 3 retransmissions will be dropped
> is 6%. (Or should that be N=5?--I always seem to be off by 1.
> In which case 97% of requests would be answered.)
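The arithmetic behind the 94%/97% figures above can be sketched in a few lines (a minimal illustration; the function name is mine, and it assumes each response is independently dropped with a uniform probability, as in the quoted example):

```python
def answer_probability(drop_prob: float, attempts: int) -> float:
    """Probability that at least one of `attempts` tries gets a response,
    assuming each response is independently dropped with drop_prob."""
    return 1.0 - drop_prob ** attempts

# 50% drop rate, 4 total tries (1 initial query + 3 retransmissions):
p4 = answer_probability(0.5, 4)   # 0.9375, i.e. ~94%
# 5 total tries (the off-by-one question either way):
p5 = answer_probability(0.5, 5)   # 0.96875, i.e. ~97%
```

This matches the P(R1&R2&R3&R4)=P(R1)*P(R2)*P(R3)*P(R4) independence assumption in the quoted text.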
1 - If you uniformly drop 50% of a 100x amplification attack, you are still reflecting 50x amplification, right?
2 - If you wait for (say) 4 responses, your stub (the client driving the upstream resolver) has almost certainly timed out, and the DDoS has succeeded, if I'm not mistaken, right?
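Point 1 is simple back-of-the-envelope arithmetic (the numbers are the ones used in this thread; the function is only a sketch of the claim, not anything from the RRL code):

```python
def effective_amplification(amp_factor: float, drop_prob: float) -> float:
    """Amplification that still reaches the victim when responses are
    dropped uniformly at random with probability drop_prob."""
    return amp_factor * (1.0 - drop_prob)

# A 100x amplification attack with 50% of responses uniformly dropped
# still reflects 50x amplification toward the victim:
print(effective_amplification(100, 0.5))  # 50.0
```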
>> Well, if doing something hurts the legitimate clients more than doing
>> nothing, I think you need to be upfront about that. I think that's
>> worse than doing nothing.
> That's like opposing mechanical spam filtering by pointing to mechanical
> false positives while ignoring the higher false positive rate of the
> otherwise inevitable purely manual filtering on subjects and senders.
No, this analogy would only hold if my spam filter's false positive rate were incredibly high, or if the vendor claimed they didn't have to quantify false positives as failures and that senders should just time out and retransmit all legitimate email 4 times to get through with 94% probability.
This is a tradeoff, so it's important (imho) to describe how much good is being done with how much not-good.
> If you do nothing, then legitimate clients will be denied all service
> by the firewall rules advocated here or by IP bandwidth rate limits
> at the source (DNS servers) and the DoS targets. Remember why it's
> called a DoS.
No, the OP was about amplification at an auth name server. If (for example) you work at a critical infrastructure provider, and you deny responses, then you need to be careful about dropping traffic. The generic spam analogy would be akin to gmail just dropping your email and calling it spam (without you being able to correct mistakes).
> You are saying that you would rather try to receive 1000 1500 Byte
> bogus DNS responses per second along with all your legitimate DNS
> responses that don't get dropped from router queues by that flood
> instead of 10 bogus responses and useful responses to 94% of your
Don't put words in my mouth. I'm saying that a cavalier description of rate limiting that falls back on position statements like ``just implement BCP 38,'' rather than describing the operational tradeoffs, is not safe.
>> OK, but you've also almost certainly eliminated the legitimate
>> client's ability to query you for responses.
> That is simply false. When Paul Vixie wrote that the BIND RRL code
> is effective, he wasn't talking about theory or small scale tests. It
> has been in use on some major DNS servers for months. If there were
> enough collateral damage to talk about, someone would have complained.
Then it should be easy enough for someone to explain the above, no? Having deployed something does not mean it was effective, and blocking traffic does not tell me how much legitimate traffic versus attack traffic was blocked. I don't see why this is so hard; I just want to understand the assertion.