[dns-operations] Pinging the root name servers to check my connectivity?
vjs at rhyolite.com
Wed Sep 5 14:44:34 UTC 2012
> From: Stephane Bortzmeyer <bortzmeyer at nic.fr>
> To: dns-operations at mail.dns-oarc.net
> Configuring a small network, I ran into the problem of testing whether
> Internet connectivity is working [side note: so I can use the result
> in the "parents" directive of Nagios/Icinga, to avoid alarms for
> every target when the outside link is simply down]. The problem is to
> find suitable targets for testing "the Internet".
Why must every site use a relatively small number of distant beacons?
Why poke distant anchors when almost all "The Internet is down"
complaints have local causes, and when the causes are not local, they
are beyond the control, and even the ken, of almost everyone complaining?
My systems poke the far sides of my routers, services that I own
and operate on other networks, and service providers that receive
my money for more substantial services.
> Some people suggested I use "facebook.com" (if it is down, the
> Internet is useless anyway), some suggested 188.8.131.52 (always up and
> fast), and some suggested pinging the root name servers.
Some months ago, after working on some rate limiting code, I tired of
seeing the same referer strings poking the same bogus URLs in my
HTTP server error logs. So I hacked something that generates firewall
drop rules: after about a dozen hits on the same page per day for
apparently innocent HTTP clients, and immediately for clients with what
I consider bad referer strings.
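The mechanism described above could be sketched roughly as follows. This
is an illustrative reconstruction, not the author's actual code; the
thresholds, the referer blocklist, and the iptables invocation are all
assumptions:

```python
from collections import Counter

DAILY_LIMIT = 12                      # assumed: "about a dozen hits" per page per day
BAD_REFERERS = {"evil-seo.example"}   # hypothetical bad-referer blocklist

hits = Counter()                      # (client_ip, url) -> hits so far today
blocked = set()

def observe(client_ip, url, referer):
    """Record one HTTP hit; return a firewall drop rule when a client
    crosses a limit, or None if the client is still tolerated."""
    if client_ip in blocked:
        return None
    # Bad referers are blocked immediately; others only past the daily limit.
    if referer in BAD_REFERERS or hits[(client_ip, url)] >= DAILY_LIMIT:
        blocked.add(client_ip)
        return f"iptables -A INPUT -s {client_ip} -j DROP"
    hits[(client_ip, url)] += 1
    return None
```

A log-tailing loop would feed each parsed access-log line through
`observe()` and pipe any emitted rule to the firewall.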
Watching that mechanism led me to discover that a bunch of people all
over the world were using my puny, obscure HTTP servers for "Internet
down" beacons. Most hit obvious pages at most a few dozen times per
day. The budding commercial service (?) "DinoPing" was hitting it
from a bunch of clients, every few seconds each. Whoever set them
up overlooked the irony of stealing my resources to abuse a web
page listing spammers' reasons why their spam is not spam. That irony
made me see that the unsolicited beacon/anchor idea is yet another
"doesn't scale", "tragedy of the commons" notion, like spam.
> But I wonder what would happen if every small network with an OpenWRT
> router and Nagios starts pinging them every minute. Is it a reasonable
> use? Do the root name servers operators have an opinion about that? Is
> there a better alternative?
That other rate limiting code that I had been working on was for DNS.
I eat my own dog food, so it runs in my own DNS servers. Most of the
dropped DNS requests in the logs are from stupid reverse DNS scanners,
but some are repeated hits that could be "Internet up" checks. That
code has been touted as a good idea for the roots, and it might make a
root a poor "Internet up" beacon. I've heard that 184.108.40.206 is not a
useful DNS DoS tool, perhaps because Google, like any competent, well
known provider, must know about rate limiting.
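A minimal sketch of per-client rate limiting in that spirit. This is a
simplified token bucket, not the actual DNS rate limiting code; the rate
and burst numbers are arbitrary assumptions:

```python
import time

class TokenBucket:
    """Per-client limiter: allow `rate` responses per second,
    with short bursts up to `burst` responses."""
    def __init__(self, rate=5.0, burst=10.0):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self):
        # Refill tokens for the time elapsed since the last request.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False          # over the limit: drop (or truncate) the response

buckets = {}                  # client address -> TokenBucket

def should_respond(client_ip):
    return buckets.setdefault(client_ip, TokenBucket()).allow()
```

A beacon client pinging such a server every minute would stay far under
the limit; one hammering it every few seconds from many clients, as
described above, would start losing responses.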
Every reasonable protocol must have several features. One is
exponential or steeper retry backoff in clients when things fail.
Another is rate limiting in servers to deal with bad clients that
fail to back off.
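The first feature can be sketched on the client side as follows; the
`probe` callback and the delay parameters are hypothetical, and the
jitter is a common refinement rather than anything the text prescribes:

```python
import random
import time

def retry_with_backoff(probe, base=1.0, cap=64.0, max_attempts=6):
    """Call probe() until it succeeds. After each failure, sleep a
    random interval drawn from an exponentially growing, capped window
    (full jitter), so retries from many clients do not synchronize."""
    for attempt in range(max_attempts):
        if probe():
            return True
        delay = random.uniform(0, min(cap, base * 2 ** attempt))
        time.sleep(delay)
    return False
```

With `base=1.0` and `cap=64.0`, the retry windows grow 1, 2, 4, ... up
to 64 seconds, instead of hammering a failed server at a fixed rate.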
> [You have probably seen this project, which is partially related:
> <https://labs.ripe.net/Members/dfk/ripe-atlas-anchors>. A case where
> many small boxes testing an unwilling service created problems:
There have been many similar stories over the years, including equipment
vendors that DDoS'ed themselves instead of third parties.
Vernon Schryver vjs at rhyolite.com