[dns-operations] Evaluating resolver performance
marek.vavrusa at nic.cz
Wed Feb 4 13:59:58 UTC 2015
On 3 February 2015 at 20:53, bert hubert <bert.hubert at netherlabs.nl> wrote:
> On Tue, Feb 03, 2015 at 05:54:59PM +0100, Marek Vavruša wrote:
>> What I don't like is that it leaks messages to Internet instead of
>> faking DNS hierarchy on a local interface, thus making the results
>> unreliable. Is there anything else I'm missing?
> Hi Marek,
> A few points. Actual real world production performance of resolvers is
> determined almost entirely by incorrectly configured domains, slowly
> responding nameservers and resilience to repeated malicious queries.
> So 'theoretical performance' on a test environment is nice, but not
> necessarily reflective of what people can expect. Or, more bluntly, unless
> your test setup abounds with broken loadbalancers, dangling delegations and
> domains that depend on 'floating glue', it doesn't help.
Oh sure, I do want to test both, but right now I want to measure the
attainable throughput first, and then compare how the server copes with
mishaps along the way.
I already test cr*p like that (it's part of the integration tests,
after all) by intercepting the resolver's calls and providing responses
from a scenario. This gives me a reproducible result on every run, and
you can simulate pretty much everything from broken loadbalancers to
delays, missing/extra records, etc.
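As a rough illustration of the scenario idea (the names and behaviours below are invented, not taken from the actual test suite): upstream queries are answered from a scripted table instead of the real Internet, so each run is reproducible.

```python
# Hypothetical sketch of scenario-driven fake upstream answers.
# Each entry maps a query name to (artificial delay in seconds, RCODE);
# all names and values here are made up for illustration.
SCENARIOS = {
    "healthy.example.":   (0.0, "NOERROR"),
    "slow.example.":      (2.5, "NOERROR"),   # slowly responding nameserver
    "broken-lb.example.": (0.0, "SERVFAIL"),  # broken loadbalancer
    "dangling.example.":  (0.0, "NXDOMAIN"),  # dangling delegation
}

def scripted_answer(qname: str):
    """Return the scripted (delay, rcode) for a query name, falling
    back to an instant clean answer for anything not in the scenario."""
    return SCENARIOS.get(qname, (0.0, "NOERROR"))
```

The interception layer would sleep for the scripted delay and synthesize a response with the scripted RCODE, which is how delays and misbehaving authoritatives stay deterministic across runs.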
> For PowerDNS, for every commit we attempt to resolve 50000 common DNS names,
> and declare failure unless a certain percentages resolves correctly, or if
> it takes too long.
> These results are remarkably stable, unless the DoS is strong on the
> internet, in which case our regression tests become 'yellow' (ie, we don't
> flunk the build, but we do flag things as unstable).
Okay, this may work, but I would hate digging into the cause when the
regression tests go yellow; that sounds like serious CSI work.
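For reference, the pass/fail/"yellow" classification bert describes could be sketched roughly like this (the thresholds are invented for illustration; the email only says "a certain percentage" and "too long"):

```python
def classify_run(resolved_ok: int, total: int, elapsed_s: float,
                 min_success: float = 0.95,
                 max_elapsed_s: float = 600.0) -> str:
    """Classify a bulk-resolution run: flunk the build below the
    success threshold or over the time budget, flag it 'yellow'
    (unstable, but not failed) when results are shakier than usual,
    e.g. during a DoS on the wider Internet."""
    rate = resolved_ok / total
    if rate < min_success or elapsed_s > max_elapsed_s:
        return "fail"
    if rate < 0.99:  # resolves, but noticeably degraded
        return "yellow"
    return "pass"
```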
> For testing, we use 'dnsbulktest' and 'dnsreplay' within the PowerDNS
> source, as described on https://doc.powerdns.com/md/tools/analysis/
> Especially dnsreplay is useful, as it replays actual user traffic, bugs, DoS
> attacks and all. This allows us to test against active attacks and user
This I do as well, although maybe not the DoS part, because I don't
have real capture data; but the test sets should also cover various
kinds of malicious traffic.
> So my suggestion would be - let go of the pain of having 'unreliable' tests.
> Even a not exactly reproducible test on the real internet is very useful.
I agree, they're useful, but not "good enough" (for me). I'm not trying
to open the regression-testing Pandora's box; I'm interested purely in
attainable resolution performance (cached and uncached, with controlled
authoritative speed), not for marketing or self-indulgence, but simply
to identify potential hiccups and slowdowns, since the resolution
process differs from the authoritative one.
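When hunting for hiccups like that, the tail of the latency distribution matters far more than the mean, so a measurement harness might summarize per-query resolution times along these lines (a minimal sketch, nothing resolver-specific):

```python
import statistics

def latency_summary(samples_ms: list[float]) -> dict:
    """Summarize resolution latencies (in ms) into the numbers that
    actually reveal hiccups: the median and the tail, not just the mean."""
    ordered = sorted(samples_ms)

    def pct(p: float) -> float:
        # nearest-rank percentile over the sorted samples
        idx = min(len(ordered) - 1, max(0, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    return {
        "mean": statistics.fmean(ordered),
        "p50": pct(50),
        "p99": pct(99),
    }
```

Comparing the cached and uncached p99 between runs is one way to spot a regression that a mean over 50,000 queries would happily hide.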
> We do realize that every PowerDNS commit launches several storms of internet
> traffic, but we've not had any complaints yet.
Maybe it needs a little bit more cowbell.
Thanks for the insights!