[dns-operations] summary of recent vulnerabilities in DNS security.
haya.shulman at gmail.com
Wed Oct 23 22:40:13 UTC 2013
> I'm puzzled by the explanation of Socket Overloading in
> I understand it to say that Linux on a 3 GHz CPU receiving 25,000
> packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
> interrupt code that low level packet buffers overflow.
Just to clarify: the attack was run from two to three synchronised hosts,
and the burst was split among those hosts.
The packet rate is not the only factor; burst concentration is actually
much more significant. Specifically, when the packets in the burst arrive
with no (or almost no) inter-packet delay, the impact is different. For
example, when running the same evaluation with a single attacking host
(even on the same LAN), no loss was incurred - even if the attacker
transmitted constantly - since both the attacking host and the
(store-and-forward) switch the attacker's host was connected to introduced
delays between packets (due to their own interrupts and queueing), thus
`spreading` the burst and reducing its impact.
I would be happy to have your thoughts on this additional piece of
information, i.e., the significance of burst concentration (no, or low,
inter-packet delay in the arriving burst), which may not have been clear
from the writeup in the paper (I will clarify this in the paper too -
thanks).
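To make the burst-concentration point concrete, here is a minimal loopback
sketch of my own (not the paper's measurement setup): the same packet
volume is sent once back-to-back and once with a small inter-packet gap,
and only the elapsed transmit time is compared. The payload size, count,
and 1 ms gap are illustrative choices, not the paper's parameters.

```python
# Sketch: contrast a concentrated burst (no inter-packet delay) with the
# same volume spread out by a small gap. Sizes and counts are illustrative.
import socket
import time

PKT = b"x" * 500          # ~500-byte payloads, as in the quoted figures
COUNT = 100               # small count so the sketch runs quickly
DELAY = 0.001             # 1 ms inter-packet gap for the "spread" case

def send_packets(gap: float) -> float:
    """Send COUNT UDP packets to a throwaway local sink; return seconds."""
    sink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sink.bind(("127.0.0.1", 0))            # ephemeral receiver port
    addr = sink.getsockname()
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    start = time.monotonic()
    for _ in range(COUNT):
        tx.sendto(PKT, addr)
        if gap:
            time.sleep(gap)                # amortise the burst over time
    elapsed = time.monotonic() - start
    tx.close()
    sink.close()
    return elapsed

burst = send_packets(0.0)      # concentrated: no inter-packet delay
spread = send_packets(DELAY)   # same volume, amortised over time
print(f"burst {burst:.4f}s vs spread {spread:.4f}s")
```

The point of the sketch is only that identical volume can arrive either
concentrated or spread; per the discussion above, it is the concentrated
arrival pattern, not the amortised rate, that stresses interrupt handling.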
> That puzzles me for reasons that might be summarized by considering
> my claim of 20 years ago that ttcp ran at wirespeed over FDDI with
> only 40-60% of a 100 MHz CPU.
> Those tests used a 4 KByte MTU and so about 3K pps instead of 25K pps.
> The FDDI firmware and driver avoided all interrupts when running at
> speed, but I think even cheap modern PCI Ethernet cards have "interrupt
> bursting." Reasonable network hardware interrupts the host only when
> the input queue goes from empty to not empty or the output queue goes
> below perhaps half full, and then only interrupts after a delay
> equal to perhaps half a minimum sized packet on the medium. I wouldn't
> expect cheap PCI cards to be that reasonable, or have hacks such as
> ring buffer with prime number lengths to avoid other interrupts.
> Still, ...
> IRIX did what I called "page flipping" and what most call "zero copy I/O"
> for user/kernel-space copying, but modern CPUs are or can be screaming
> monsters while copying bytes which should reduce that advantage. It
> would be irrelevant for packets dropped in the driver, but not if the
> bottleneck is in user space such as an overloaded DNS server.
> That old ttcp number was for TCP instead of UDP, which would be an
> advantage for modern Linux.
> So I would have guessed, without having looked at Linux network
> code for many years, that even Linux should be using less than 20%
> of a 3 GHz CPU doing not only interrupts but all of UDP/IP.
Thanks for this input, and for the reference.
> 100MHz/3GHz * 60% * 25000 pps /3000 pps = 17%
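The quoted back-of-envelope estimate can be checked directly; all figures
below are taken from the quoted text, and the scaling (CPU speedup versus
packet-rate increase) is just the arithmetic Vernon wrote out:

```python
# Check of the quoted estimate: scale the old 60%-of-100MHz utilisation
# figure by the CPU speedup and by the packet-rate increase.
old_cpu_hz = 100e6      # 100 MHz CPU in the old FDDI ttcp test
new_cpu_hz = 3e9        # 3 GHz CPU in the reported setup
old_util = 0.60         # upper end of the quoted 40-60% utilisation
old_pps = 3000          # ~3K packets/second over FDDI (4 KByte MTU)
new_pps = 25000         # 25,000 packets/second in the reported setup

estimate = (old_cpu_hz / new_cpu_hz) * old_util * (new_pps / old_pps)
print(f"estimated utilisation: {estimate:.0%}")   # ~17%
```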
> Could the packet losses have been due to the system trying to send
> lots of ICMP Port-Unreachables? I have the confused impression that
> Socket Overloading can involve flooding unrelated ports.
But why would ICMP errors cause loss?
Inbound packets have higher priority than outbound packets.
> How was it confirmed that kernel interrupt handling was the cause
> of the packet losses instead of the application (DNS server) getting
> swamped and forcing the kernel to drop packets instead of putting
> them into the application socket buffer? Were giant application
> socket buffers tried, perhaps with the Linux SO_RCVBUFFORCE?
> (probably a 30 second change for BIND)
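The "giant socket buffer" experiment suggested above can be sketched as
follows. This is my illustration, not the paper's setup: SO_RCVBUFFORCE is
Linux-only, requires CAP_NET_ADMIN, and is not exposed by all Python
versions, so the numeric value and the unprivileged fallback are spelled
out; the 8 MB size is an arbitrary choice.

```python
# Sketch of the giant-socket-buffer experiment. SO_RCVBUFFORCE (Linux,
# needs CAP_NET_ADMIN) ignores the net.core.rmem_max cap; without
# privileges we fall back to plain SO_RCVBUF, which the kernel caps.
import socket

SO_RCVBUFFORCE = 33          # Linux value; not exposed by all Pythons
WANTED = 8 * 1024 * 1024     # 8 MB, an arbitrary "giant" size

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.setsockopt(socket.SOL_SOCKET, SO_RCVBUFFORCE, WANTED)
except OSError:
    # Unprivileged (or non-Linux) fallback: silently capped by the kernel
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WANTED)

effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("effective receive buffer:", effective)
sock.close()
```

Note that on Linux the reported value is roughly double what was
requested (the kernel accounts for bookkeeping overhead), so the printout
is the way to see what the experiment actually got.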
This is a good question. The evaluation is based on the following
observation: when flooding closed ports, or other ports (not the one on
which the resolver expects to receive the response), no loss was incurred,
but all connections experienced additional latency; in contrast, when
flooding the correct port, the response was lost and the resolver would
retransmit the request after a timeout.
> 25K qps is not a big queryperf number, which is another reason why I
> don't understand how only 25K UDP qps could swamp a Linux kernel. Just
> now the loopback smoke-test for RPZ for BIND 9.9.4 with the rpz2 patch
> reported "24940 qps without RPZ" on a 2-core 2.4 GHz CPU running FreeBSD
> What about the claims of Gbit/sec transfer speeds with Linux?
> I'm not questioning the reported measurements; they are what they are.
> However, if they were due to application overload instead of interrupt
> processing, then there might be defenses such as giant socket buffers.
I just want to clarify that I appreciate your questions/comments;
questioning the results is of course an important contribution to the
research. Maybe the writeup requires clarification - I will check whether
the text clearly explains the setup and evaluation. It is also possible
that the evaluation results were misinterpreted; I do not rule that out, as
I could have missed something. Either way, I will need to check both
possibilities.
So, I really appreciate the questions and data that you provided.
I used the default buffers in the OS and the resolver. So, you think that
the loss could have occurred at the application layer?
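One way to separate kernel-side socket-buffer drops from other loss (my
suggestion, not something described in the paper) is Linux's per-protocol
counters in /proc/net/snmp - a rising Udp RcvbufErrors count during a test
run means packets reached the socket but the receive buffer was full, i.e.
the application could not keep up. The sample text below is a made-up
snapshot for illustration; in real use one would read the file before and
after the burst and diff the counters.

```python
# Parse the Udp counters from Linux's /proc/net/snmp. RcvbufErrors rising
# during a test indicates socket-buffer overflow (application overload),
# as opposed to loss earlier in the driver/interrupt path.
def udp_counters(snmp_text: str) -> dict:
    """Return the Udp: header/value pairs from /proc/net/snmp-style text."""
    lines = [l for l in snmp_text.splitlines() if l.startswith("Udp:")]
    header, values = lines[0].split()[1:], lines[1].split()[1:]
    return dict(zip(header, (int(v) for v in values)))

# Made-up snapshot; real use: udp_counters(open("/proc/net/snmp").read())
SAMPLE = (
    "Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors"
    " SndbufErrors\n"
    "Udp: 1714849 1250 37 1718327 37 0\n"
)
counters = udp_counters(SAMPLE)
print("RcvbufErrors:", counters["RcvbufErrors"])
```

The same counters are visible via `netstat -su` ("receive buffer errors"),
which would make the check a one-liner during a repeat of the evaluation.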
> I am not at liberty to disclose location or vendor, but I'm aware of Linux
> boxes handling 20k PPS mixed UDP/TCP at an average 2% CPU. They aren't even
> modern boxes although a bit newer than the dual core that Vernon mentions
> below. In short, I agree completely with everything Vernon said here. I
> suspect outdated information or some other factor was involved.
> Just to reinforce Vernon and Jo's points, we have DNS servers running
> Linux at ARIN pushing 25~30k packets per second. Overall CPU
> utilization (across all cores) is under 10%. Interrupt rates tend to be
> around 15~18k per second.
Thanks for your input. One of the main factors in the attack is `burst
concentration`. This is determined by the inter-packet delays of the
inbound traffic; the amortised rate is not the decisive factor. In
particular, even when sending the burst at a higher rate from a single
host, the impact was much less significant, e.g., even no loss; the cause
is the delays added by the interrupts of the sending system and by the
routers en route (which further add their own delays). Thus, the impact of
the resulting volume is negligible and does not result in packet loss. But
if the volume is split between a number of synced hosts, the impact is
significantly enhanced and typically results in packet loss.
What are your thoughts on this? Does this clarify the results?
On Wed, Oct 23, 2013 at 10:56 PM, Haya Shulman <haya.shulman at gmail.com> wrote:
>> I see I'm stupid for not seeing that in the first message. I did search
>> for 'http' but somehow didn't see the URL. But why not simply repeat
>> the URL for people like me? Why not the URL of the paper at the
>> beginning instead of a list of papers?
> I did not realise that this was the problem; I thought that for some
> reason you could not download from my site. Indeed, using the URL would
> have been more convenient - sorry.
> By searching for "DNSSEC" with my PDF viewer, I found what I consider
>> too few references to the effectiveness of DNSSEC against the attacks.
>> There is nothing about DNSSEC in the abstract, a list of DNSSEC problems
>> early, and a DNSSEC recommendation in the conclusion that reads to me
>> like a concession to a referee. Others will disagree.
> Ok, thanks for this comment; please clarify which paper you are referring
> to, and I will check whether appropriate references could be added.
> - forwarding to third party resolvers.
>> I agree so strongly that feels like a straw man. I think
>> forwarding to third party resolvers is an intolerable and
>> unnecessary privacy and security hole. Others disagree.
>> - other mistakes
>> that I think are even worse than forwarders.
>> - DNSSEC
>> Perhaps that will be denied, but I challenge others to read those
>> papers with their litanies of DNSSEC issues and get an impression
>> of DNSSEC other than "sow's ear sold as silk." That was right
>> for DNSSEC in the past. Maybe it will be right forever. I hope
>> not, but only years will tell. As far as I can tell from a quick
>> reading, the DNSSEC issues are valid, but are sometimes backward
>> looking, perhaps due to publication delays. For example, default
>> verifying now in server software and verifying by resolvers such
>> as 188.8.131.52 should help the verifying situation.
> Agreed and noted, thank you.
> p.s. Can you please cc me when sending responses related to me? Thank you
> in advance!
> Best Regards,
> Haya Shulman
> Technische Universität Darmstadt
> FB Informatik/EC SPRIDE
> Mornewegstr. 30
> 64293 Darmstadt
> Tel. +49 6151 16-75540
Technische Universität Darmstadt
FB Informatik/EC SPRIDE
Tel. +49 6151 16-75540