[dns-operations] DDoS botnet behaviour

Vernon Schryver vjs at rhyolite.com
Mon Jun 11 14:25:39 UTC 2012


> From: Tony Finch <dot at dotat.at>
> To: Vernon Schryver <vjs at rhyolite.com>
> cc: dns-operations at mail.dns-oarc.net

> The reason I'm basing my work on a Bloom filter is to avoid any per-client
> scaling costs. There's a fixed per-packet overhead, a fixed memory cost
> (which should be scaled with the server's overall load), and a fairly
> cheap periodic cleaning task. No dynamic memory allocation.

How many hash functions are you using, what are they, and how do you
know that they are sufficiently independent to give a tolerable false
positive rate without using as much RAM as a single classic hash table?
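
For reference, the textbook false positive estimate for m bits, k
hash functions, and n inserted items is p ~ (1 - e^(-k*n/m))^k, and
it assumes the hashes are independent; dependent hashes only make p
worse.  A quick sketch of the arithmetic (the array and client
counts below are only illustrative):

    #include <math.h>
    #include <stdio.h>

    /* Textbook Bloom filter false positive estimate for m bits,
     * k independent hashes, and n inserted items. */
    static double bloom_fp(double m, double k, double n)
    {
        return pow(1.0 - exp(-k * n / m), k);
    }

    int main(void)
    {
        /* e.g. 100K tracked clients in a 1 Mbit array */
        printf("k=3: p=%.4f\n", bloom_fp(1e6, 3, 1e5));  /* ~0.017 */
        printf("k=7: p=%.4f\n", bloom_fp(1e6, 7, 1e5));  /* ~0.008 */
        return 0;
    }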

I hate dynamic memory allocation, and so it doesn't happen in my code
after it reaches steady state.  The operator can use an optional
parameter to set initial sizes to minimize the cold start problem.

What is your estimate of the memory required for a simple hash
based scheme?  Memory size even on the busiest servers was a primary
design goal for my code, and I was told to handle 100K qps.  Since
another goal is to stop dropping responses the instant the forged
requests cease, to avoid becoming a DNS DoS on the target, the rate
must be computed from only very recent experience (which eliminates
the ancient BSD alpha decay function).  That means that one doesn't
need more than (a very few seconds)*(100K qps)*(memory per response).
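
As a rough illustration (the per-entry size here is my assumption,
not a number from the actual code): 2 seconds * 100,000 qps * 40
bytes per tracked response is about 8 MB, and even tripling both the
window and the entry size keeps it well under 100 MB.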

I don't think much of timers, and so currently my code has none.
My rate counters are what might be called self-clocking.  I think there
will eventually be a timer for logging, which should save a lot of
cycles when logging is used and cost nothing when logging is not used.
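
A minimal sketch of what I mean by self-clocking (an illustration
of the idea, not the actual code): each counter carries the second
in which it started counting and ages itself when it is next
touched, so nothing ever fires asynchronously:

    #include <time.h>

    struct rate {
        time_t second;  /* second in which counting started */
        int count;      /* responses seen in that second    */
    };

    /* Returns nonzero if the response should be dropped.  The
     * counter resets itself on the first access after its second
     * expires; no timer is involved. */
    static int rate_exceeded(struct rate *r, time_t now, int limit)
    {
        if (now != r->second) {
            r->second = now;
            r->count = 0;
        }
        return ++r->count > limit;
    }

    int main(void)
    {
        struct rate r = {0, 0};
        return rate_exceeded(&r, time(NULL), 5);
    }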


> Your operational criticisms of the probabilistic approach are quite
> correct. It may also turn out to cost too much to get an acceptably low
> false positive rate.

Do you believe that a single set of Bloom filter parameters can serve
an interesting class of installations without using more RAM than a
classic hash table?  To understate it: I don't.
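
For the record, the standard sizing rule is m = -n*ln(p)/(ln 2)^2
with k = (m/n)*ln 2, i.e. about 9.6 bits per tracked item and k = 7
hashes for a 1% false positive target.  The catch is that n, the
number of distinct clients in the window, varies enormously between
installations, which is exactly the tuning problem.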

How does one learn the false positive rate on an operational system
without running a deterministic rate-limit system in parallel?
If you can make the false positive monitor work well enough to be
used everywhere it must be used, including at 100K qps, what's the
point of the Bloom filter?


> But, it might be worth putting a smallish Bloom filter in front of an
> accurate traffic accounting system, so that the server only needs to spend
> resources tracking the heaviest users, along the lines described in
> http://pages.cs.wisc.edu/~estan/publications/elephantsandmice.pdf

Bloom filters are a neat idea for the right jobs.  I also think
that the 2 or 3 (not to mention more) hash lookups in a Bloom filter
would use far more CPU cycles per query than are needed by an accurate
accounting system.  When not logging and after reaching steady
state, my code only hashes the request, changes pointers in at most 3
linked lists, increments a counter, and tells the caller in client.c
or query.c to continue or drop.  Isn't that about what each of the
2 or more Bloom filter hashes would do?
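
To make the comparison concrete, here is a much-simplified stand-in
for that kind of accounting (direct-mapped instead of the linked
lists the real code manipulates, just to show the constant per-query
work and fixed memory):

    #include <stdio.h>

    #define NENTRIES 8192

    struct entry {
        unsigned long key;  /* hashed client/qname identity */
        long count;         /* recent responses             */
    };

    static struct entry table[NENTRIES];  /* preallocated; no malloc */

    /* One hash, one compare, one increment per query.  On a
     * collision the old tenant is evicted, which bounds both the
     * memory and the work. */
    static long account(unsigned long key)
    {
        struct entry *e = &table[key % NENTRIES];
        if (e->key != key) {
            e->key = key;
            e->count = 0;
        }
        return ++e->count;
    }

    int main(void)
    {
        printf("%ld\n", account(12345));  /* 1 */
        printf("%ld\n", account(12345));  /* 2 */
        return 0;
    }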

Bloom filters combine several poor hash functions with a single,
extremely small (in context) bit array to give answers that are
imperfect but amazingly good.  That's cool, but not a mandate for
replacing all other hashes, trees, tries, etc.
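
For concreteness, the whole structure fits in a few lines (a generic
sketch; the array size, hash count, and seeded hash below are
arbitrary choices of mine):

    #include <stdint.h>
    #include <stddef.h>

    #define MBITS (1u << 20)   /* 1 Mbit filter  */
    #define K 3                /* hashes per key */

    static uint8_t bits[MBITS / 8];

    /* Seeded FNV-1a-style hash; real code would need hashes with
     * demonstrable independence, which is the crux of the RAM vs.
     * false-positive tradeoff. */
    static uint32_t hash(const void *p, size_t len, uint32_t seed)
    {
        const uint8_t *s = p;
        uint32_t h = 0x811c9dc5u + seed * 0x9e3779b9u;
        while (len-- > 0)
            h = (h ^ *s++) * 16777619u;
        return h % MBITS;
    }

    static void bloom_add(const void *p, size_t len)
    {
        for (uint32_t i = 0; i < K; i++) {
            uint32_t b = hash(p, len, i);
            bits[b >> 3] |= (uint8_t)(1u << (b & 7));
        }
    }

    /* 0 means definitely never added; 1 means "probably added". */
    static int bloom_maybe(const void *p, size_t len)
    {
        for (uint32_t i = 0; i < K; i++) {
            uint32_t b = hash(p, len, i);
            if (!(bits[b >> 3] & (1u << (b & 7))))
                return 0;
        }
        return 1;
    }

    int main(void)
    {
        bloom_add("198.51.100.7", 12);
        return !bloom_maybe("198.51.100.7", 12);
    }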


Vernon Schryver    vjs at rhyolite.com


