[dns-operations] Force TCP for external queries to Open Resolvers?

Vernon Schryver vjs at rhyolite.com
Sun Mar 31 16:09:41 UTC 2013

> > Only the DNS people think that. The HTTP people are used to many TCP
> > connections to manage and do not think it is impossible.

> So we could abandon DNS/UDP and move exclusively to DNS/TCP?

No one said that it is "impossible" to handle lots of DNS/TCP connections.

It is a simple, unavoidable fact that TCP is far more expensive than
UDP not only in bandwidth, latency, and CPU cycles but also memory.
I spent years whacking on network code at a vendor once known for high
network performance to improve HTTP hit numbers as well as UDP and TCP
bandwidth numbers.  There are many things that you can do to speed up
DNS/TCP, but DNS/UDP will always be a *lot* cheaper.  Switching from
DNS/UDP to DNS/TCP requires more memory, CPU cycles, and bandwidth.
That's obviously not "impossible," but it's also not free.
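To make the framing difference concrete, here is a minimal sketch (Python stdlib only, sizes are for an assumed example.com A query) of the same DNS message as a UDP datagram payload versus the length-prefixed TCP stream of RFC 1035 section 4.2.2.  The dominant TCP costs -- the 3-way handshake, per-connection kernel state, and retransmission machinery -- don't show up in byte counts, which is part of the point:

```python
import struct

def dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query (QTYPE=A, QCLASS=IN), RFC 1035 wire format."""
    # header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

msg = dns_query("example.com")
udp_payload = msg                                  # one datagram, no framing
tcp_payload = struct.pack(">H", len(msg)) + msg    # RFC 1035 4.2.2 length prefix

print(len(udp_payload), len(tcp_payload))          # 29 vs. 31 bytes on the wire
```

The 2 extra bytes are trivial; the handshake, the FIN/ACK teardown, and the per-connection state the server must hold are not.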

If you could change the 21 million open resolvers and for crazy reasons
wanted to keep them open, there are cheaper ways to make them useless
for reflection attacks than the TC=1 hack.  But if you could change
them, you would close them for simple hygiene and so not care about
DNS/TCP, T/TCP, DNS cookies, or anything else.
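For reference, the TC=1 hack amounts to flipping one bit in the header of the response so that legitimate clients retry over TCP while a spoofed victim receives only a tiny packet.  A minimal sketch (0x0200 is the TC bit in the RFC 1035 flags word; the example header is made up):

```python
import struct

def set_tc(msg: bytes) -> bytes:
    """Return a copy of a DNS message with the TC (truncated) bit set.
    A real implementation would also discard the answer sections, since
    the whole point is to send back almost nothing."""
    ident, flags = struct.unpack(">HH", msg[:4])
    flags |= 0x0200                    # TC bit, RFC 1035 section 4.1.1
    return struct.pack(">HH", ident, flags) + msg[4:]

# Example: a bare 12-byte header, all flags clear
hdr = struct.pack(">HHHHHH", 0xBEEF, 0, 0, 0, 0, 0)
truncated = set_tc(hdr)
```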

In the real world, the only hopes for fixing the 21 million open
resolvers are
  - protecting them with BCP 38 (faint)
  - blacklisting them at enough authoritative servers and so forcing
    their owners to wake up and do something (also faint)
  - firewalls at ISPs filtering incoming UDP/53 (I'm not holding my breath,
    since that's similar to the work of BCP 38)
  - scanning for them and nagging their owners with unsolicited bulk
    email or spam (hopeless, as demonstrated with SMTP relays)
  - years and years and years and years of patience


} From: Jim Reid <jim at rfc1035.com>

} In this case, DDoS attackers would get those truncated responses
} sent to their victims. OK, they lose the amplification factor but
} they still get to flood the victim(s) with unsolicited traffic. If
} that lost payload matters to the attacker, they can just ramp up
} the size of their botnet or the number of reflecting name servers
} to compensate:

Without amplification by reflecting DNS servers, the bad guys can
deliver more bits/sec to their targets by sending directly.  Bouncing
bits off mirrors that don't amplify yields fewer bits at the targets,
because some packets are inevitably lost.
What's the profit for the bad guy in spending 10 bps of botnet
bandwidth to reflect 9 bps at the target?
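The arithmetic is simple.  The sizes below are illustrative assumptions, not measurements, but the shape of the result does not depend on them:

```python
# Illustrative, assumed sizes -- not measurements.
query_bytes     = 64     # spoofed query the attacker sends to a reflector
amplified_bytes = 3000   # large response from a wide-open resolver
tc_bytes        = 64     # TC=1 truncated response

amplification_open = amplified_bytes / query_bytes   # ~47x: reflection pays
amplification_tc   = tc_bytes / query_bytes          # 1x: it doesn't

# With no gain, reflection strictly loses: spend 10 units of botnet
# bandwidth and, after inevitable loss, fewer than 10 reach the target.
sent, loss_rate = 10.0, 0.10
delivered = sent * (1 - loss_rate)                   # 9.0 -- why bother?
```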

Bad guys that send from a few sources instead of a botnet might hope
to hide behind DNS reflections, but to hit a target with 300 Gbps
they'd need to send more than 300 Gbps from those few sources.  Tools
for detecting and tracing and then terminating such large streams exist
and are being improved.

} >> I expect TCP to an anycast resolver -- say -- will prove
} >> tricky for long-lived connections.
} > 
} > Which long-lived DNS/TCP connections are those?
} I was thinking of the use case where an application's resolver
} opens a TCP connection and assumes it stays open until the application
} goes away: eg the resolver in a web browser opening a connection
} to and shoving all its DNS lookups down that until the web
} session ends some hours later.

Let's accept the unsupported assumption that there are any long-lived
DNS/TCP
connections in the real Internet.  (AXFR and IXFR are irrelevant here.)
Many things break long lived TCP connections.  If the client software
is not lame, stupid, and written by idiots, it does the obvious,
standard, and trivial.  When write(), send(), sendmsg(), or whatever
reports that the connection died, reasonable TCP client software makes
a new connection.  Because HTTP, SMTP, and other applications reuse
TCP connections to save the CPU cycles, bandwidth, and latency of the
3-way handshake and application authentication, this is not theoretical.
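The obvious recovery is a few lines of code.  A sketch of the reconnect loop (the connection factory is parameterized only so the logic stands alone; a real caller would pass something like lambda: socket.create_connection((ip, 53))):

```python
import struct

def query_with_retry(make_conn, wire_query: bytes, attempts: int = 3) -> bytes:
    """Send a length-prefixed DNS query over TCP; when the send or
    receive reports the connection died, open a fresh one and retry --
    the obvious, standard, trivial behavior described above."""
    last_err = None
    for _ in range(attempts):
        try:
            conn = make_conn()
            try:
                conn.sendall(struct.pack(">H", len(wire_query)) + wire_query)
                (rlen,) = struct.unpack(">H", conn.recv(2))
                return conn.recv(rlen)
            finally:
                conn.close()
        except OSError as err:          # ECONNRESET, EPIPE, timeouts, ...
            last_err = err              # connection died: loop opens a new one
    raise last_err
```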

Vernon Schryver    vjs at rhyolite.com
