[dns-operations] summary of recent vulnerabilities in DNS security.

Haya Shulman haya.shulman at gmail.com
Mon Oct 21 10:20:42 UTC 2013


>
> your text above shows a drastic misunderstanding of both dns and dnssec.
> a correctly functioning recursive name server will not promote
> additional or authority data to answers. poison glue in a cache can
> cause poison referrals, or poisoned iterations, but not poisoned answers
> given to applications who called gethostbyname(). dnssec was crafted
> with this principle firmly in mind, and the lack of signatures over glue
> is quite deliberate -- not an oversight -- not a weakness.


Poisoning resolvers' caches is one issue; what the resolvers return to
applications is a different matter.
IMHO `cache poisoning` means accepting and caching spoofed records. Cache
poisoning is a building block that can be applied in more complex attacks,
e.g., to redirect applications to malicious servers or to mount DoS attacks.

As I wrote in an earlier email, poisoning glue records can result in
denial-of-service attacks on resolvers, since they _cache_ those spoofed
records (although they do not return them to applications). Your response
does not address this concern; the issue you discuss - what applications
receive from resolvers - is a different one. The vulnerability that I
pointed out is not about returning the spoofed glue records to
applications. Whether DoS (as a result of cache poisoning) is a weakness is
a separate question; I simply wrote that we identified this new
vulnerability: even validating resolvers can cache spoofed glue from an
attacker, and then remain stuck with those records (which may result in
degradation/denial of service).
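To make the concern concrete, here is a toy sketch in Python (this is not real resolver code; the names, the address 198.51.100.66, and the validation check are all invented for illustration). It shows the mechanism described above: referral records (NS + glue) carry no signatures, so even a strictly validating resolver may cache spoofed ones and is then pinned to a server that cannot produce valid signatures:

```python
# Toy model of the glue-poisoning DoS: unsigned referrals get cached,
# strict validation then rejects every answer from the spoofed server.

ATTACKER_ADDR = "198.51.100.66"   # hypothetical attacker's address

cache = {}

def cache_referral(zone, ns_addr):
    # No DNSSEC check is possible here: glue carries no signatures.
    cache[zone] = ns_addr

def resolve(zone, strict_validation):
    ns_addr = cache[zone]
    answer_validates = (ns_addr != ATTACKER_ADDR)  # attacker cannot sign
    if strict_validation and not answer_validates:
        # The resolver rejects the forged answer, but keeps the spoofed
        # glue in its cache until it expires: denial of service.
        return "SERVFAIL"
    return "answer via " + ns_addr

cache_referral("victim.example.", ATTACKER_ADDR)           # poisoned referral
print(resolve("victim.example.", strict_validation=True))  # SERVFAIL
```

The point of the sketch is that validation happens on answers, while the denial of service comes from the unvalidated referral already sitting in the cache.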

> thanks for clarifying that. i cannot credit your work in the section of
> my article where i wrote about fragmentation, because you were not the
> discoverer. in 2008, during the 'summer of fear' inspired by dan
> kaminsky's bug, i was a personal information hub among a lot of the dns
> industry. at one point florian weimer of BFK called me with a concern
> about fragmentation related attacks. i brought kaminsky into the
> conversation and the three of us brainstormed on and off for a week or
> so as to how to use fragments in a way that could cause dnssec data to
> be accepted as cryptoauthentic. we eventually gave up, alas, without
> publishing our original concerns, our various theories as to how it
> might be done, and why each such theory was unworkable.
> i was happy to cite your work in my references section because your
> explaination is clear and cogent, but since you were not the discoverer,
> i won't be crediting you as such.


Florian wrote in his response to me on this mailing list that he believed
the attack was not feasible, since he did not succeed in deploying it. He
identified that there was a vulnerability, but did not provide a way to
exploit it.
For instance, Bernstein identified the vulnerability of predictable ports
long before the Kaminsky attacks, yet you still call that attack the
`Kaminsky attack`.
The point is: claiming that P does not equal NP won't earn you credit for
the result if someone else does the proof.
Unless I am misunderstanding, there was no published vulnerability prior to
our result. Please clarify if I am wrong.


> your answer is evasive and nonresponsive, and i beg you to please try
> again, remembering that the vulnerabilities you are reporting and the
> workarounds you're recommending will be judged according to engineering
> economics. if we assume that dnssec is practical on a wide enough scale
> that it could prevent the vulnerabilities you are reporting on, then
> your work is of mainly academic interest. as vernon said earlier today,
> none of the vulnerabilities you are reporting on have been seen in use.
> i also agree with vernon's assessment that none of them will ever be
> seen in use.


Even if they are of academic interest only, I still hope that the Internet
community can learn from them and have an option to protect themselves.
Regarding applicability: initially there were claims that this attack was
not feasible in a lab setting (BTW, none with a clear explanation of why it
is not feasible). I am glad that this has changed now that other groups
have also validated the attacks.
Once a vulnerability is found, it will eventually be exploited, and the
vulnerabilities that we found allow stealth attacks - so I think the claim
that they have not been launched in the wild does not rest on solid proof.
BTW, I would appreciate it if you could clarify why you think they are not
applicable and cannot be launched in the wild.

As part of my current research measurements, I am running evaluations
(against my own domain, of course), and there are vulnerable networks
(there are also networks that are not vulnerable to fragmentation
attacks - e.g., because they block fragmented responses).
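For intuition on why fragmentation matters here, a back-of-the-envelope comparison (my own illustrative arithmetic, not from measurements: it assumes a random 16-bit source port plus a 16-bit TXID for a whole response, and that a spoofed non-first fragment is matched only by the 16-bit IPv4 Identification field, since the port and TXID travel in the genuine first fragment):

```python
# Entropy an off-path attacker must guess to spoof a whole DNS response,
# versus only the second fragment of a fragmented response (illustrative).

port_bits, txid_bits = 16, 16     # assumed randomised port + TXID
ipid_bits = 16                    # IPv4 Identification field

whole_response = 2 ** (port_bits + txid_bits)  # port and TXID both unknown
second_fragment = 2 ** ipid_bits  # port/TXID sit in the first, genuine fragment

print(whole_response)    # 4294967296
print(second_fragment)   # 65536
```

And in practice the IP-ID is often predictable, shrinking even that 16-bit factor.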

> that's too many "e.g.'s" and external references for me to follow. each
> fragmentary concept you've listed above strikes me as a nonsequitur for
> source port randomization. can you dumb it down for me, please?


I think you asked me to provide a few examples of the significance of port
randomisation...
This year there were a number of injection attacks against TCP that
exploited flaws in the port randomisation algorithms recommended in
[RFC6056]. Once the port is known, the TCP sequence number does not pose a
significant challenge (although it is 32 bits, it is incremented
sequentially within the connection, and there are techniques to predict
it). Port randomisation would prevent such injections into TCP.
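Rough arithmetic (my own illustrative assumptions: a 64 KB receive window, any in-window sequence number accepted, and a fully random 16-bit ephemeral port space) shows what the randomised port adds for an off-path injector:

```python
# Off-path TCP injection effort with and without port randomisation
# (illustrative numbers, not a model of any specific stack).

seq_space = 2 ** 32
window = 2 ** 16     # assumed receive window: any in-window seq is accepted
ports = 2 ** 16      # assumed fully randomised ephemeral port space

guesses_port_known = seq_space // window    # spoofed segments needed
guesses_port_random = guesses_port_known * ports

print(guesses_port_known)    # 65536
print(guesses_port_random)   # 4294967296
```

So under these assumptions, randomising the port multiplies the attacker's effort by the full port space, turning a feasible burst into ~2^32 spoofed segments.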

By exploiting the priority that network adapters assign to incoming
packets, and the fact that ports are predictable, attackers can launch a
number of attacks (these vulnerabilities are not related to DNS cache
poisoning and so apply even if DNSSEC is fully deployed). The idea is that
when a packet arrives at an open port, it is passed by the kernel to OS
processing (i.e., it goes into the protocol buffer); this incurs additional
processing (latency) and also fills the buffer. Attackers can send short
(low-volume) bursts of traffic to predictable ports, enabling a number of
attacks while staying well below typical IDS alert limits: for instance,
name server pinning, identifying victim instances on cloud platforms, and
deanonymisation of communication over Tor. There are limitations to these
attacks, but IMHO even if there are only a few networks to which these
attacks apply - they are still attacks. Port randomisation would of course
prevent them, since the attacker would not know which port to target.
Port randomisation was also proposed as a countermeasure against DoS
attacks (e.g., see `Denial of Service Protection with Beaver`).

Please clarify why you think that port randomisation cannot prevent the
attacks described above.

Bernstein identified predictable ports as vulnerable long ago; it is
surprising to me that, after so many years, the community is still not
convinced that port randomisation is significant.
Furthermore, if port randomisation is not an issue, why standardise
[RFC6056]? Why set up DNS checkers? And if current port randomisation
algorithms are vulnerable - why not fix them?
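For completeness, the `simple port randomization` of [RFC6056] (its Algorithm 1) can be sketched roughly as below; this is a loose Python sketch, and the in_use set stands in for a real port-availability check, it is not a real socket API:

```python
import random

MIN_PORT, MAX_PORT = 1024, 65535   # illustrative ephemeral range

def pick_ephemeral_port(in_use):
    """Rough sketch of RFC 6056, Algorithm 1 (simple randomization):
    draw a fresh, uniformly random port until a free one is found."""
    num_ports = MAX_PORT - MIN_PORT + 1
    for _ in range(num_ports):
        port = MIN_PORT + random.randrange(num_ports)
        if port not in in_use:
            return port
    raise OSError("ephemeral port range exhausted")

print(MIN_PORT <= pick_ephemeral_port(in_use=set()) <= MAX_PORT)  # True
```

The appeal of this algorithm is exactly its lack of structure: successive ports are independent, so observing one connection tells the attacker nothing about the next.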



On Mon, Oct 21, 2013 at 1:53 AM, Haya Shulman <haya.shulman at gmail.com> wrote:

> i interpreted your answer to my question as "no", since every
> counter-example you cited was a case where dnssec was used improperly.
> most importantly, the lack of signed delegations and signed glue is by
> design, and is not a weakness in dnssec, since the only remaining
> vulnerability is denial of service, of which there are many other (and
> easier) methods.
>
> I understood that you asked two questions:
> (1) by this, do you mean that you have found a fragmentation based attack
> that works against DNSSEC?
> (2) by this, do you mean that if DNSSEC is widely deployed, your other
> recommendations are unnecessary?
>
> --
>
> Correct me if I am wrong, but I understood that in (1) you meant partial deployment of DNSSEC, since later in (2) you emphasised the `full deployment`.
>
> As I wrote, even if DNSSEC is fully deployed and validated, denial/degradation of service attacks, via cache poisoning (of records in referral responses), are still feasible - since the resolver caches spoofed records. DNSSEC was designed to prevent cache poisoning.
>
>  i'd have to read your published work before i could cite it. can you
> tell me where to find it online, outside any paywall or other
> restrictions? note that i'll be happy to respond, with citations, since
> your work is so topical.
>
> This is the work that we shared with you last year, when we contacted you
> regarding help coordinating disclosure of the fragmentation-based
> vulnerability. You also cite it at the end of the article, but it is not
> mentioned explicitly in the fragmentation section. The publication version
> of this work is on my site.
>
>
> i believe that if we can't make a significant difference in the
> resiliency and quality of core internet infrastructure after 16 years,
> then we wasted our time. and i know that if five more years wasn't
> enough, then fifty years would also not be enough. as an industry we
> must at some point either declare victory and stop creating lower
> quality counter-measures which add complexity, or we must declare
> failure and stop expecting dnssec to help with any problems we might
> discover in the existing system.
>
> we can't realistically or credibly have it both ways.
>
> I beg to differ, I think DNSSEC is already a success. Maybe, had DNSSEC
> been fully adopted immediately after it was standardised in 1997, it
> could by now have been abandoned due to interoperability, functionality
> and security problems that would have emerged (nothing specific to
> DNSSEC, but the same holds for any new mechanism).
> Since changing the Internet to suit DNSSEC is not a realistic option,
> DNSSEC had to be adapted to the Internet. DNSSEC has already
> substantially changed over the years, during its incremental deployment.
>
> IMHO deploying additional countermeasures, like port randomisation, in
> tandem with DNSSEC in the meantime does not introduce a security or
> functionality problem.
>
>
> i'd like to hear more about this. at the moment i have no picture in my
> head of "not only cache poisoning" when i think of the prevention
> offered by source port randomization.
>
> Port randomisation can be a useful countermeasure against other attacks
> as well: the ability to predict ephemeral ports can be used for a range
> of attacks, e.g., name server pinning (not only for poisoning but also,
> e.g., for covert communication), low-rate attacks against
> clients/resolvers, breaching instances' isolation on cloud platforms,
> injection of content into TCP connections (applied by Watson in
> VU#415294 against BGP, but it can also be used against web traffic), and
> deanonymisation of communication over Tor. Some of the papers are
> already on my site.
>
> On Sun, Oct 20, 2013 at 10:01 PM, Haya Shulman <haya.shulman at gmail.com> wrote:
>
>> Sorry for the delay, I was on my way to a different continent.
>> Please see response below.
>>
>>
>> On Sat, Oct 19, 2013 at 9:21 PM, Paul Vixie <paul at redbarn.org> wrote:
>>
>>> Haya Shulman wrote:
>>>
>>> You are absolutely right, thanks for pointing this out.
>>>
>>>
>>> thanks for your kind words, but, we are still not communicating reliably
>>> here. see below.
>>>
>>>
>>>  DNSSEC is the best solution to these (and other) vulnerabilities and
>>> efforts should be focused on its (correct) adoption (see challenges here:
>>> http://eprint.iacr.org/2013/254).
>>> However, since partial DNSSEC deployment may introduce new
>>> vulnerabilities, e.g., fragmentation-based attacks, the recommendations,
>>> that I wrote in an earlier email, can be adopted in the short term to
>>> prevent attacks till DNSSEC is fully deployed.
>>>
>>>
>>> by this, do you mean that you have found a fragmentation based attack
>>> that works against DNSSEC?
>>>
>> One of the factors causing fragmentation is signed responses (from
>> zones that adopted DNSSEC). Signed responses can be abused for DNS cache
>> poisoning in the following scenarios: (1) when resolvers cannot establish a
>> chain-of-trust to the target zone (very common), or (2) when resolvers do
>> not perform `strict validation` of DNSSEC. As we point out in our work,
>> many resolvers currently support such a mode (some implicitly, others
>> explicitly, e.g., Unbound), i.e., they signal support of DNSSEC yet accept
>> and cache spoofed responses (or, e.g., responses with missing or expired
>> keys/signatures).
>> According to different studies, it is commonly accepted that only about
>> 3% of the resolvers perform validation. One of the reasons for support of
>> permissive DNSSEC validation is interoperability problems, i.e.,
>> clients/networks may be rendered without DNS functionality (i.e., no
>> Internet connectivity for applications) if resolvers insist on strict
>> DNSSEC validation, and e.g., discard responses that are not properly signed
>> (i.e., missing signatures).
>>
>> Our attacks apply to such resolvers. Furthermore, many zones are
>> misconfigured: e.g., the parent zone may serve an NSEC (or NSEC3) record
>> in its referral responses while the child is signed (e.g., this was the
>> case with the MIL TLD).
>>
>>  by this, do you mean that if DNSSEC is widely deployed, your other
>>> recommendations are unnecessary?
>>>
>>
>> Some of our recommendations are still relevant even if DNSSEC is widely
>> deployed. We showed attacks that apply to properly signed zones and
>> strictly validating resolvers. Since referral responses are not signed, the
>> attacker can inject spoofed records (e.g., A records in glue) which will be
>> accepted by the resolvers. Such cache poisoning can be used for denial (or
>> degradation) of service attacks - a strictly validating resolver will not
>> accept unsigned responses from the attacker and will be stuck with the
>> malicious cached name server records (unless the resolver goes back to the
>> parent zone again - however such behaviour is not a security measure and
>> should not be relied upon).
>>
>> Furthermore, in the proxy-behind-upstream setting, even when DNSSEC is
>> supported by all zones and is validated by the upstream forwarder, but not
>> by the proxy, the proxy can be attacked. Ideally validation should be at
>> the end hosts - we are not there yet with DNSSEC.
>>
>>
>>> in your next message you wrote:
>>>
>>> Haya Shulman wrote:
>>>
>>> ..., the conclusion from our results (and mentioned in all our papers on
>>> DNS security) is to deploy DNSSEC (fully and correctly). We are proponents
>>> of cryptographic defenses, and I think that DNSSEC is the most suitable
>>> (proposed and standardised) mechanism to protect DNS against cache
>>> poisoning. Deployment of new Internet mechanisms is always challenging (and
>>> the same applies to DNSSEC). Therefore, we recommend short term
>>> countermeasures (against vulnerabilities that we found) and also
>>> investigate mechanisms to facilitate deployment of DNSSEC.
>>>
>>>
>>> in 2008, we undertook the short term (five years now) countermeasure of
>>> source port randomization, in order to give us time to deploy DNSSEC. if
>>> five years made no difference, and if more short term countermeasures are
>>> required, then will another five years be enough? perhaps ten years?
>>> exactly how long is a "short term" expected to be?
>>>
>>> for more information, see:
>>>
>>>
>>> http://www.circleid.com/posts/20130913_on_the_time_value_of_security_features_in_dns/
>>>
>>
>> Thanks, you summarised this very nicely. I'd like to bring to your
>> attention that, in contrast to other sections, you did not cite our work
>> explicitly in the section where you describe our fragmentation-based
>> attacks (please add it).
>> My response to your question is the following: DNSSEC is a new mechanism,
>> crucial for long term (future) security of the Internet. The concern that
>> you are raising applies to other new mechanisms as well, e.g., BGP security
>> and even IPv6, and is not specific to DNSSEC. Deploying new mechanisms in
>> the Internet has always been a challenge, and the mechanisms may go through
>> a number of adaptations during the incremental deployment phases, and
>> intermediate transition mechanisms may be designed. I believe that five
>> years is not a significant time frame in terms of the future of the
>> Internet. So, IMHO further countermeasures may be required.
>>
>> BTW, port randomisation prevents a number of attacks (not only cache
>> poisoning) and so is useful even when DNSSEC is fully deployed and
>> validated.
>>
>>>
>>>
>>> vixie
>>>
>>
>>
>> Best Regards,
>>
>> --
>>
>> Haya Shulman
>>
>
>
>
> --
>
> Haya Shulman
>



-- 

Haya Shulman

Technische Universität Darmstadt

FB Informatik/EC SPRIDE

Morewegstr. 30

64293 Darmstadt

Tel. +49 6151 16-75540

www.ec-spride.de