[dns-operations] summary of recent vulnerabilities in DNS security.

Vernon Schryver vjs at rhyolite.com
Tue Oct 22 23:03:08 UTC 2013


I'm puzzled by the explanation of Socket Overloading in 
https://sites.google.com/site/hayashulman/files/NIC-derandomisation.pdf
I understand it to say that Linux on a 3 GHz CPU receiving 25,000
packets/second (500 bytes @ 100 Mbit/sec) spends so much time in
interrupt code that low level packet buffers overflow.
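
The packet rate itself is just the arithmetic of those numbers:

  100 Mbit/sec / (500 bytes * 8 bits/byte) = 25,000 packets/sec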

That puzzles me for reasons that might be summarized by considering
my claim of 20 years ago that ttcp ran at wirespeed over FDDI with
only 40-60% of a 100 MHz CPU.
https://groups.google.com/forum/#!topic/comp.sys.sgi.hardware/S0ZFRpGMPWA
https://www.google.com/search?q=ttcp

Those tests used a 4 KByte MTU and so about 3K pps instead of 25K pps.
The FDDI firmware and driver avoided all interrupts when running at
speed, but I think even cheap modern PCI Ethernet cards have "interrupt
bursting." Reasonable network hardware interrupts the host only when
the input queue goes from empty to not empty or the output queue goes
below perhaps half full, and then interrupts only after a delay
equal to perhaps half a minimum sized packet on the medium.  I wouldn't
expect cheap PCI cards to be that reasonable, or to have hacks such as
ring buffers with prime-number lengths to avoid other interrupts.
Still, ...
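
For anyone who wants to check what a particular Linux NIC and driver
are actually doing, the interrupt coalescing settings can be read with
the ethtool ioctl (the same numbers "ethtool -c" prints).  A minimal
sketch, assuming a Linux box and using "eth0" only as an example name:

    /* coalesce.c - print a NIC's receive interrupt coalescing settings.
     * Build on Linux with:  cc -o coalesce coalesce.c
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(int argc, char **argv)
    {
        const char *ifname = argc > 1 ? argv[1] : "eth0";  /* example name */
        struct ethtool_coalesce ec;
        struct ifreq ifr;
        int fd;

        memset(&ec, 0, sizeof(ec));
        ec.cmd = ETHTOOL_GCOALESCE;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ec;

        fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
            perror("SIOCETHTOOL");
            return 1;
        }
        /* how long and how many frames the NIC waits before interrupting */
        printf("%s: rx-usecs %u  rx-frames %u\n", ifname,
               ec.rx_coalesce_usecs, ec.rx_max_coalesced_frames);
        close(fd);
        return 0;
    }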

IRIX did what I called "page flipping" and what most call "zero copy I/O"
for user/kernel-space copying, but modern CPUs are or can be screaming
monsters while copying bytes, which should reduce that advantage.  It
would be irrelevant for packets dropped in the driver, but not if the
bottleneck is in user space, such as an overloaded DNS server.
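
If the bottleneck is the application, one cheap way to spend fewer
cycles per datagram in user space on Linux is recvmmsg(), which pulls
a batch of queued datagrams out of a socket with a single system call.
A sketch of the idea only; the port number 9953 is purely an example:

    /* rxbatch.c - drain a UDP socket in batches with recvmmsg().
     * It just counts datagrams and throws them away.
     * Build on Linux with:  cc -o rxbatch rxbatch.c
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    #define BATCH 64        /* datagrams per system call */
    #define MAXDG 512       /* plenty for a plain DNS query */

    int main(void)
    {
        static char bufs[BATCH][MAXDG];
        struct mmsghdr msgs[BATCH];
        struct iovec iovs[BATCH];
        struct sockaddr_in sin;
        long total = 0;
        int s, i, n;

        s = socket(AF_INET, SOCK_DGRAM, 0);
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(9953);             /* example port only */
        if (s < 0 || bind(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            perror("bind");
            return 1;
        }

        memset(msgs, 0, sizeof(msgs));
        for (i = 0; i < BATCH; i++) {
            iovs[i].iov_base = bufs[i];
            iovs[i].iov_len = MAXDG;
            msgs[i].msg_hdr.msg_iov = &iovs[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }

        for (;;) {
            /* one system call returns up to BATCH queued datagrams */
            n = recvmmsg(s, msgs, BATCH, 0, NULL);
            if (n < 0) {
                perror("recvmmsg");
                return 1;
            }
            total += n;
            if (total % 1000000 < n)
                fprintf(stderr, "%ld datagrams so far\n", total);
        }
    }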

That old ttcp number was for TCP instead of UDP, which would be an
advantage for modern Linux.

So I would have guessed, without having looked at Linux network
code for many years, that even Linux should be using less than 20%
of a 3 GHz CPU doing not only interrupts but all of UDP/IP.
  100 MHz / 3 GHz * 60% * 25000 pps / 3000 pps = 17%

Could the packet losses have been due to the system trying to send
lots of ICMP Port-Unreachables?  I have the confused impression that
Socket Overloading can involve flooding unrelated ports.
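
One way to check that from outside the kernel is to watch the ICMP and
UDP counters Linux exports in /proc/net/snmp (the same counters that
"netstat -s" prints); OutDestUnreachs, NoPorts, InErrors, and
RcvbufErrors say a lot about where datagrams are going and why.  A
trivial sketch:

    /* snmpwatch.c - print the Icmp: and Udp: counter lines from
     * /proc/net/snmp every few seconds.  Linux only; just a sketch.
     */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char line[1024];
        FILE *fp;

        for (;;) {
            fp = fopen("/proc/net/snmp", "r");
            if (fp == NULL) {
                perror("/proc/net/snmp");
                return 1;
            }
            while (fgets(line, sizeof(line), fp) != NULL) {
                /* each protocol has a header line and a value line */
                if (strncmp(line, "Icmp:", 5) == 0 ||
                    strncmp(line, "Udp:", 4) == 0)
                    fputs(line, stdout);
            }
            fclose(fp);
            putchar('\n');
            fflush(stdout);
            sleep(5);
        }
    }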

How was it confirmed that kernel interrupt handling was the cause
of the packet losses instead of the application (DNS server) getting
swamped and forcing the kernel to drop packets instead of putting
them into the application socket buffer?  Were giant application
socket buffers tried, perhaps with the Linux SO_RCVBUFFORCE?
(probably a 30-second change for BIND)
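
For concreteness, the experiment is about this much code.  A sketch
only; SO_RCVBUFFORCE is Linux-specific and needs CAP_NET_ADMIN, and
the 32 MByte figure is just an example:

    /* bigbuf.c - ask for a giant UDP receive buffer, Linux style. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int want = 32 * 1024 * 1024;            /* example size only */
        int got;
        socklen_t len = sizeof(got);

    #ifdef SO_RCVBUFFORCE
        /* privileged Linux knob that can exceed net.core.rmem_max */
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUFFORCE, &want, sizeof(want)) < 0)
    #endif
            /* portable fallback, silently clamped to rmem_max */
            if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)) < 0)
                perror("SO_RCVBUF");

        if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &got, &len) == 0)
            printf("kernel granted %d bytes of receive buffer\n", got);
        return 0;
    }

Whether the drops then stay in the interface RX-drop counters or show
up as Udp: RcvbufErrors would say a lot about which side of the
kernel/user boundary is the real bottleneck.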

25K qps is not a big queryperf number, which is another reason why I
don't understand how only 25K UDP qps could swamp a Linux kernel.  Just
now the loopback smoke-test for RPZ for BIND 9.9.4 with the rpz2 patch
reported "24940 qps without RPZ" on a 2-core 2.4 GHz CPU running FreeBSD 9.0.

What about the claims of Gbit/sec transfer speeds with Linux?
https://www.google.com/search?q=linux+gigabit+ethernet+speed

I'm not questioning the reported measurements; they are what they are.
However, if they were due to application overload instead of interrupt
processing, then there might be defenses such as giant socket buffers.


Vernon Schryver    vjs at rhyolite.com


