[dns-operations] new public DNS service: 9.9.9.9

Joe Greco jgreco at ns.sol.net
Mon Nov 20 12:40:20 UTC 2017


On Sat, Nov 18, 2017 at 12:11:12AM -0800, Damian Menscher wrote:
> On Fri, Nov 17, 2017 at 10:41 PM, Paul Vixie <paul at redbarn.org> wrote:
> >
> > even though i believe quad9's published privacy policy, just as i believe
> > google's for 8.8.8.8 and cisco/umbrella's for opendns, i do not trust all
> > of the ISP's between me and them, and all of the telco's they buy service
> > from, not to data mine my queries.
> 
> 
> Your argument that you don't trust the ISPs between you and
> Google/OpenDNS/Quad9, and therefore run your own local recursive resolver,
> confuses me.  After all, your local recursive needs to query third-party
> authoritative servers anyway.
> 
> To convince yourself, answer these two questions:
>   - How many ISPs are between you and 8.8.8.8?  I'm on Comcast, and they
> have direct peering with Google, so the number is zero.
>   - How many ISPs are between you and the average authoritative DNS server
> you need to reach?  I'm guessing that number is non-zero.
> 
> Or did I misunderstand what you meant about the ISPs/telcos between you and
> the third-party rDNS providers?

One of the hazards we've repeatedly fallen prey to over the years is
the trend toward big, monolithic centralized services.

This may have started with SMTP: end users wanted a stable address,
but switching ISPs made that hard, so free e-mail services took over.
Over time, we've evolved to the point where a relatively small number
of massive-scale mail companies have created a Kafkaesque system of
broken filtering and rampant disregard for the crap they spew, while
making it exceedingly difficult for a small site to successfully join
the mail network.

As we become more dependent on services that are provided at the whim
of various service providers, I think it is valuable to consider the
downsides.  This includes the loss of local control over the policies
being implemented.  Once you outsource your DNS recursion to someone
else, the decision becomes hard to reverse, because the resolver
address ends up wired into configurations all over.

Not that it'd happen, but what if Google decided one day that Yahoo!
must die and set 8.8.{8.8,4.4} to block *.yahoo.com?  Sure, today's
policies and the current environment make this unlikely, but what
about ten years from now?

I like the general idea behind 9.9.9.9, but I don't know that I want
to outsource recursion to a project that is dependent on being funded
in the future, and whose filtering is somewhat opaque.

I wouldn't mind running local recursors that blocked a good threat
list, if that could be set up and maintained in a reasonable fashion.
But having another "just trust us" service pop up to do something so
important to the operation of the network is pretty uncomfortable.

So a reasonable question might be: is there something that could be
done to make threat filtering easier for those of us who operate our
own recursion servers?
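For what it's worth, one existing building block here is BIND's
response-policy zone (RPZ) support, which lets a local recursor apply
a filtering policy fed from a locally maintained or transferred zone.
A minimal sketch (the zone name "rpz.local", the file names, and the
blocked domain "badexample.com" are placeholders, not a real feed):

```
// named.conf fragment: wire an RPZ policy zone into recursion
options {
    recursion yes;
    response-policy {
        zone "rpz.local";
    };
};

// the policy zone itself, maintained locally (could instead be a
// slave zone transferred from a threat-feed provider)
zone "rpz.local" {
    type master;
    file "rpz.local.db";
};
```

```
; rpz.local.db -- policy data, names relative to the zone apex
$TTL 300
@   SOA localhost. admin.localhost. ( 1 3600 600 86400 300 )
    NS  localhost.
; CNAME to the root name means "answer NXDOMAIN" in RPZ semantics
badexample.com      CNAME .
*.badexample.com    CNAME .
```

The missing piece is less the mechanism than the maintenance: a
trustworthy, transparently curated feed that small operators could
slave from, rather than each of us hand-editing zone files.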

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


