<div dir="ltr"><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 3, 2017 at 8:35 AM, Suzanne Woolf <span dir="ltr"><<a href="mailto:suzworldwide@gmail.com" target="_blank">suzworldwide@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> On Oct 3, 2017, at 4:10 AM, Daniel Karrenberg <<a href="mailto:dfk@ripe.net">dfk@ripe.net</a>> wrote:<br>
><br>
><br>
><br>
> On 03/10/2017 00:01, Stephane Bortzmeyer wrote:<br>
>> (because they would distribute a<br>
>> compilation of NS and DS records in the software).<br>
><br>
> That's what I consider too complicated in the paper. Just distribute a<br>
> complete copy of the root zone and include code that fetches it from a<br>
> choice of sources using arbitrary protocols.<br>
<br>
</span>One of the reasons why I wanted to see the root zone signed and DNSSEC validation code widely available is that even if it weren’t in everyday, in-band use, it could be used to validate root zone data. Root servers went from being a protocol element (source of truth for bootstrapping) to being an optimization (get the data there if you didn’t have a better way, but you can decide whether to believe data you have without caring how you got it).<br>
<br>
This is simple partly because of some assumptions we can make about the root zone— that it’s relatively small and relatively static— but it would have to be a lot larger and more dynamic before there was a problem with this approach even with relatively limited resources in the resolver.</blockquote><div><br></div><div><div class="gmail_default" style="font-size:small">I did some modelling of the possible outcomes. The root infrastructure is currently outside ICANN control, which is one of the few practical checks on the organization. There is also the peculiarity that we have this massive 'kick me' target at the center of the Internet architecture. To first order, none of the root traffic is legitimate. It is all abuse. Legitimate traffic is noise on the noise.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">For the root zone to get larger, ICANN would have to reduce the cost of TLDs dramatically and contract with a registrar to operate it. They would likely maximize their rents at between $500 and $1000 (every product I have been involved with at a unit price point below that made loadsa money; every product costing more cost more to make than it brought in).</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">The problem I see with the analysis here is that either the root zone is a few thousand domains or it is tens of millions. There is no middle ground. Open up registrations for $500 TLDs and the sales on the first day alone would be $1 billion. Register a few thousand domains and people might think that they have meaning. Register a few million and it becomes clear that microsoft.poop is not the Redmond club.</div><br></div><div><br></div><div><div class="gmail_default" style="font-size:small">Fortunately, there are infrastructures that could easily support those capacities. 
Bitcoin is a very silly currency scheme, but the underlying architecture is easily capable of supporting a root zone of a billion names. And no, it is not necessary to do proof of work to run a linked notary log. </div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">All we need to do to make this happen is to redefine DNS registration so that the cryptography becomes primary. If you can tell people 'oops, you lost your private key, your name went bye-bye', you can eliminate a vast amount of cost from the systems.</div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> Will this split the root? It would have a number of years ago. But today<br>
> we are comfortable with DNSSEC and can validate delegations. So a<br>
> resolver does not need to trust the source of the root zone because it<br>
> can validate each delegation in that zone.<br>
<br>
</span>Exactly. If you want a split root, you can have it, but then, you always could. Simplicity of resolution isn’t the only reason why people want a unified, consistent root zone; in fact I suspect it really goes the other way around— people want a unified, consistent root available (even when it’s not the only thing they want, as in the case of split-horizon or mixed DNS/mDNS environments or "special use" names like .onion), so implementers and operators will continue to make it easy for them to have one.</blockquote><div><br></div><div><div class="gmail_default" style="font-size:small">I really wish that the IAB had focused some attention on issues such as split DNS and NAT rather than allowing anyone who tried to discuss them seriously to be attacked as spawn of the devil.</div><div class="gmail_default" style="font-size:small"><br></div><div class="gmail_default" style="font-size:small">We really need a new DNS client protocol to replace the recursive resolver protocol, one that has authentication and authorization built in from the start.</div></div></div></div></div>
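The linked notary log mentioned above needs nothing more than a hash chain: each entry commits to the hash of the previous entry, so the log is append-only and tamper-evident without any proof of work. A minimal sketch in Python, assuming a toy record format (the field names and values here are illustrative, not any real registry's schema):

```python
import hashlib
import json


def entry_hash(entry: dict) -> str:
    # Canonical JSON (sorted keys) so the hash is deterministic.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


def append(log: list, record: dict) -> dict:
    # Each new entry commits to the hash of the previous one.
    prev = entry_hash(log[-1]) if log else "0" * 64
    entry = {"seq": len(log), "prev": prev, "record": record}
    log.append(entry)
    return entry


def verify(log: list) -> bool:
    # Recompute the chain; altering any entry breaks every later link.
    for i in range(1, len(log)):
        if log[i]["prev"] != entry_hash(log[i - 1]):
            return False
    return True


log: list = []
append(log, {"name": "example", "ds": "abcd1234"})
append(log, {"name": "another", "ds": "5678ef90"})
assert verify(log)

# Tampering with an earlier registration is detectable:
log[0]["record"]["ds"] = "evil"
assert not verify(log)
```

A production log would add Merkle tree audit paths (as in Certificate Transparency) so a client can verify one name without downloading the whole log, but the append-only property itself costs nothing beyond a hash per entry.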