[dns-operations] Evaluating resolver performance
bert.hubert at netherlabs.nl
Tue Feb 3 19:53:55 UTC 2015
On Tue, Feb 03, 2015 at 05:54:59PM +0100, Marek Vavruša wrote:
> What I don't like is that it leaks messages to Internet instead of
> faking DNS hierarchy on a local interface, thus making the results
> unreliable. Is there anything else I'm missing?
A few points. Actual real world production performance of resolvers is
determined almost entirely by how they cope with incorrectly configured
domains, slowly responding nameservers and repeated malicious queries.
So 'theoretical performance' on a test environment is nice, but not
necessarily reflective of what people can expect. Or, more bluntly, unless
your test setup abounds with broken loadbalancers, dangling delegations and
domains that depend on 'floating glue', it doesn't help.
For PowerDNS, on every commit we attempt to resolve 50000 common DNS names,
and declare failure if fewer than a certain percentage of them resolve
correctly, or if resolution takes too long.
These results are remarkably stable, unless the DoS is strong on the
internet, in which case our regression tests become 'yellow' (i.e., we don't
flunk the build, but we do flag things as unstable).
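The green/yellow/red gating described above can be sketched as follows. This is a minimal illustration, not the actual PowerDNS harness; the thresholds, timeout and function names are made-up assumptions:

```python
# Minimal sketch of a bulk-resolution regression gate.
# NOT the PowerDNS test harness: thresholds and timeout are invented.
import socket

def resolves(name, timeout=2.0):
    """Return True if `name` resolves to at least one address within `timeout`."""
    socket.setdefaulttimeout(timeout)
    try:
        return bool(socket.getaddrinfo(name, None))
    except OSError:  # covers gaierror and timeout
        return False

def gate(results, green=0.9, red=0.8):
    """Classify a run of True/False resolution results.

    'green'  -> enough names resolved, build passes
    'yellow' -> below the green bar but above the red one: flag as
                unstable without flunking the build
    'red'    -> too many failures, flunk the build
    """
    rate = sum(results) / len(results)
    if rate >= green:
        return "green"
    if rate >= red:
        return "yellow"
    return "red"

if __name__ == "__main__":
    names = ["example.com", "example.net", "example.org"]  # stand-in list
    print(gate([resolves(n) for n in names]))
```

Separating the network probe (`resolves`) from the verdict (`gate`) keeps the pass/fail policy testable without touching the internet.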
For testing, we use 'dnsbulktest' and 'dnsreplay' within the PowerDNS
source, as described on https://doc.powerdns.com/md/tools/analysis/
Especially dnsreplay is useful, as it replays actual user traffic, bugs, DoS
attacks and all. This allows us to test against active attacks and real user
behaviour.
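At its core, a replay-style check sends recorded queries at a resolver and classifies what comes back. A hypothetical stdlib-only sketch of that primitive, sending one raw DNS query over UDP and reading the response RCODE (the resolver address and query name would be your own choices, not anything from this post):

```python
# Hypothetical sketch of the primitive a replay tool is built on:
# send one raw DNS query and classify the response RCODE.
import random
import socket
import struct

def make_query(name, qtype=1, qclass=1):
    """Build a minimal DNS query packet (RFC 1035 wire format), A/IN by default."""
    txid = random.randint(0, 0xFFFF)
    # Header: id, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass), txid

def rcode_of(resolver, name, timeout=2.0):
    """Return the response RCODE (0 = NOERROR), or None on timeout."""
    query, _txid = make_query(name)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(query, (resolver, 53))
        try:
            data, _ = s.recvfrom(512)
        except socket.timeout:
            return None
    return data[3] & 0x0F  # RCODE lives in the low 4 bits of flags byte 3
```

A replay harness would loop this over captured queries and count NOERROR vs SERVFAIL vs timeouts, which is exactly the kind of signal that degrades when the resolver under test mishandles broken or hostile traffic.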
So my suggestion would be: let go of the pain of having 'unreliable' tests.
Even a not exactly reproducible test on the real internet is very useful.
We do realize that every PowerDNS commit launches several storms of internet
traffic, but we've not had any complaints yet.
> Thanks again,