[dns-operations] DNS pre-resolution

Joe Greco jgreco at ns.sol.net
Sat Dec 5 12:59:49 UTC 2009


> I wonder what you guys think of this concept?
> http://www.dalkescientific.com/writings/diary/archive/2009/12/04/dns_preresolution.html
> 
> If this has been discussed and I missed it, I apologize.

http://queue.acm.org/detail.cfm?id=1647302

Google does a great job spinning this as being all about the user
experience.  I'm going to start off by saying that yes, there are
some definite benefits.

Your article above is essentially correct in the problems it has
spotted.  I don't really see "DNS bugs" in web pages as being that
useful; we have well-established technology for doing web bugs in
web pages, and since those talk to an actual web server, they can
do higher-level things such as cookie tracking, and of course a
web server can trivially access databases, etc.  I don't expect
that "DNS bugs" in web pages will be as major a threat as web bugs.

Yet we shouldn't minimize that possibility.  The potential for
additional data disclosure is nontrivial.  DNS lookups on mouseover?
What if I generate the entire page as transparent links, one per word
or something like that, so that the browser is doing lookups wherever
the mouse happens to be?  Maybe there's some useful or interesting
data to be extracted from the user's mouse movements...
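As a sketch of that idea (the page generator and the track.example
domain are hypothetical, not anything observed in the wild), here is
roughly what such a page would look like:

```python
# Sketch: wrap each word of a page in a link whose hostname encodes
# the word's position.  If a browser pre-resolves link targets on
# mouseover, the operator of the (hypothetical) *.track.example zone
# sees a stream of DNS queries that maps out the mouse movements.

def tracking_page(text, domain="track.example"):
    words = text.split()
    links = [
        '<a href="http://w%d.%s/">%s</a>' % (i, domain, w)
        for i, w in enumerate(words)
    ]
    return "<html><body>%s</body></html>" % " ".join(links)

print(tracking_page("the quick brown fox"))
```

Each hover then leaks "word N" to the DNS operator without any web
server ever being contacted.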

But this is dns-operations.  My big question is, how much of an impact
does this have on our local recursers?  We already resolve a ton of
addresses when we visit the average web page, with links to ad services
and image hosters and tracking bugs - the parts of the web page that
must be resolved in order to display it properly (think of it as the
"incoming" portion).  Now we're going to preresolve all the "outgoing"
possibilities as well?  Great way to immediately double or triple (etc)
the burden on our resolvers.

% fetch -o - http://www.cnn.com | tr '<' '\012' | grep '^a.*href="http://' | sed 's,.*http://,,;s:["/?].*::' | sort | uniq | wc -l

26

% fetch -o - http://www.cnn.com | tr '<' '\012' | grep '^img.*src="http://' | sed 's,.*http://,,;s:["/?].*::' | sort | uniq | wc -l 

2

That's not promising.  I had to look up 3 names to pull up www.cnn.com
(which sounds low, and excludes things from css/js/etc, but could be
generally accurate) and with Chrome, now we're going to be looking up
almost an order of magnitude more?
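For anyone without fetch(1) handy, roughly the same count can be done
in Python against already-saved HTML (no network involved; the sample
markup below is made up for illustration):

```python
# Rough equivalent of the shell pipelines above: count the unique
# hostnames a pre-resolving browser might look up for a page.
# <a href> is the "outgoing" set; <img src> is part of the "incoming"
# set that must be resolved anyway to render the page.
from html.parser import HTMLParser
from urllib.parse import urlparse

class HostCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        wanted = {"a": "href", "img": "src"}.get(tag)
        for name, value in attrs:
            if name == wanted and value and value.startswith("http"):
                self.hosts.add(urlparse(value).hostname)

# Made-up sample; feed it real saved HTML to reproduce the counts.
sample = ('<a href="http://example.com/x">x</a>'
          '<a href="http://example.org/y">y</a>'
          '<img src="http://img.example.net/z.png">')
p = HostCounter()
p.feed(sample)
print(len(p.hosts))  # 3 distinct hostnames to resolve
```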

Oh, wait, Google to the rescue, Google DNS ...  interesting, that.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
