[Collisions] "controlled interruption" - 127/8 versus RFC1918 space

David Conrad drc at virtualized.org
Fri Jan 10 21:41:10 UTC 2014


Joe,

On Jan 10, 2014, at 9:50 AM, Joe Abley <jabley at hopcount.ca> wrote:
> If I was to illustrate in more alarmist fashion, we hope that no embedded IP stacks within nuclear power stations and hospitals react badly to this approach (but we don't know).

Do you believe those stacks would react better and/or impose less risk if the IP address of a web page run by some third party were returned instead, resulting in traffic from the nuclear power station and/or hospital traversing the Internet in the clear?

> My point really is that there is no safe address we can use, if we are looking for 100% safety.

If we are looking for 100% safety, we should terminate the new gTLD program, freeze the content of all existing zones (not just the root), and disband the IETF since making any sort of protocol modifications implies some risk, no?

> If we're not concerned about the 100%, then that's useful to say out loud.

I have never believed we are concerned about 100% safety.  I do not even believe 100% safety is possible.  What we're talking about is risk minimization and mitigation across an array of different risks associated with the delegation of top-level domains.

> For example, if we're only talking about names that are observed to leak significantly (and we're ignoring the enormous tail) then we're only really talking about HOME and CORP, and we can stop talking in general terms about all new gTLDs.

As has been stated in a number of places, quantity of queries to the root does not necessarily correlate with risk.  I don't believe we're only talking about HOME and CORP.

>> Yes, with the assumption that said attempts will fail quickly, resulting in action being taken by the folks behind the connection attempt to address the issue.
> I'm not sure that assumption is a good one. Here's an anecdote.
> [...]
> So a connectivity problem in Bill Manning's basement in El Segundo caused all DSL authentication in New Zealand to fail.

Did the ISPs that were reliant on Bill Manning's basement alter their configuration as a result of that failure?

> In this case you're talking about what you hope will be a rapid failure for a local destination that (you hope) is not configured. However, what studies exist to confirm that in most or all cases that failure will be rapid?

None.

> How many devices have local firewalls installed that didn't consider the possibility of 127.0.0.53?

Again, if such devices exist and are put at risk by a 127/8 answer to a spurious query to the root, they are already vulnerable to a directed attack. Given the Internet as it exists today, the fact that such attacks do not appear to have occurred suggests to me that the risk here is quite low. YMMV.
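
For what it's worth, the "fail quickly" assumption is easy to spot-check on any given stack.  A quick Python sketch (the port is arbitrary, and note that some stacks only configure 127.0.0.1 on loopback, which is exactly the sort of corner case at issue):

    import socket
    import time

    # Spot-check: does a TCP connect to an unusual loopback address
    # fail fast (kernel RST, "connection refused") or hang until a
    # timeout?  On Linux, all of 127/8 routes to loopback; other
    # stacks and local firewalls may behave differently.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(10)  # cap the worst case so the probe terminates
    start = time.monotonic()
    try:
        s.connect(("127.0.53.53", 80))
    except ConnectionRefusedError:
        print("fast local failure after %.3fs" % (time.monotonic() - start))
    except socket.timeout:
        print("hung until timeout; the fail-fast assumption does not hold here")
    finally:
        s.close()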

> A timeout is a timeout, regardless of whether it's due to unavailable nameservers or a problem connecting to an unusual loopback-ish address.

A timeout is different from an immediate NXDOMAIN (immediate, that is, up to the round-trip latency to a root name server).
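
To make that distinction concrete, a rough sketch (the .corp label is hypothetical, and 192.0.2.1 is just a documentation address standing in for a blackholed destination):

    import socket

    # NXDOMAIN: getaddrinfo() fails as soon as the negative answer
    # arrives -- roughly one round trip to the resolver.
    try:
        socket.getaddrinfo("nonexistent-label.corp", 80)
    except socket.gaierror as e:
        print("immediate resolution failure:", e)

    # Timeout: a destination that silently drops packets ties up the
    # caller for the full connect timer, often tens of seconds at OS
    # defaults (capped to 5s here).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(5)
    try:
        s.connect(("192.0.2.1", 80))
    except OSError as e:
        print("failure only after the timer expires:", e)
    finally:
        s.close()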

> I'm not saying this can't work. I'm just saying there's potentially more fallout here than might appear, and that experimental data is better than assumptions. I can't think of convincing experiments, however, which just leaves us with avoiding assumptions.

It's not clear to me how you can avoid assumptions when you start positing broken network stacks. People can break things in truly fascinating ways, and the Internet is a big place.  However, precisely because the Internet is a big place, the lack of any evidence of the particular breakage you're expressing concern about might suggest that the risk is low.

> The suggestion that a mechanism like this one could be used to mitigate search-list problems with overloaded TLDs like MAIL, CORP or HOME by provisioning records in the root zone

I've not seen such a suggestion.

> We seem to be talking about changing the registry state machine such that there's an extra stage involved whereby DNS records other than those required for a simple delegation will be published for some period of time before they are replaced by the expected delegation.

Err, no. We're talking about what happens during the period between when a registry is delegated and General Availability (i.e. during the CA Revocation Period, sunrise, etc.).  In the comments section of the DomainIncite post where this proposal was introduced, Chris Cowherd suggested:

"This could be accomplished during the period between the normal IANA delegation process and GA for those TLDs not already in GA. And for those already in GA, for some reasonable finite period of time, such as 30 days. IANA could ensure that the wildcards are properly in place during their “tech-check” of the registry DNS servers."

Similarly Wayne MacLaurin suggested:

"If this approach is adopted, it should be implemented at delegation and run for at least 60 days or the end of Sunrise. Up until that point, there no SLDs (other than nic.) in the zone and if something was seriously broken, its less difficult to address than having to yank newly
active domains."
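
As a sketch of how the "tech-check" Chris describes might verify the wildcard is actually in place during that window -- assuming the proposed 127.0.53.53 answer, using dnspython (>= 2.0 for resolve()), and with the TLD name as a stand-in:

    import uuid

    import dns.resolver  # dnspython >= 2.0

    TLD = "example"  # stand-in for a newly delegated gTLD
    # A random, never-registered label: any A answer must come from
    # the wildcard rather than from a real registration.
    probe = "collision-test-%s.%s." % (uuid.uuid4().hex, TLD)

    try:
        answer = dns.resolver.resolve(probe, "A")
        addrs = {rr.address for rr in answer}
        if addrs == {"127.0.53.53"}:
            print("controlled-interruption wildcard is in place")
        else:
            print("unexpected answer(s):", addrs)
    except dns.resolver.NXDOMAIN:
        print("NXDOMAIN: no wildcard being served")

Run against a TLD actually in that window, a single 127.0.53.53 answer is the expected result.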

>> If there is a risk, it exists today and can be triggered remotely by miscreants in a variety of ways. The chain of events necessary for something bad to happen because 127.0.53.53 is returned in an A query seems to me to be particularly remote.
> Right. As I keep failing to explain clearly, I'm not concerned that this can't work. I'm concerned about the corner cases.

We're in a maze of corner cases, trying to establish which corners have the smaller land mines. As far as I can tell, none of the options are without potential corner cases, so we have to pick and choose which are the least risky.  I personally believe, with little empirical data, that the risk of some network stack blowing chunks because it gets back a 127/8 response is lower than the other risks related to new gTLD delegation.

>> Out of curiosity, what do you believe would be a solution that has a better cost/benefit ratio?
> 
> I have two.
> 
> 1. I think just delegating new domain names under new gTLDs in the conventional manner is easier to troubleshoot from the point of view of an enterprise that experiences a name collision than any intermediate attempt to try and be more helpful. It also has the advantage of the registry goalposts not lurching from side to side.

As mentioned above, I think you misunderstand where/when the proposal would be applied.

> 2. CORP and HOME are the problem children. Full-page ads in a set of international newspapers saying things like "If you use CORP and HOME inside your business network, you need to be aware of this, URL", repeat in all UN languages, are likely to be more useful in reaching the target audience than relying on people noticing and understanding what 127.0.53.53 means.

These two aren't mutually exclusive.

> I don't think the fact that we are struggling to establish communications with people responsible for enterprise networks is a technical problem.

Neither do I; however, our options for establishing that communication appear to be somewhat limited.

Regards,
-drc



More information about the Collisions mailing list