[Collisions] "controlled interruption" - 127/8 versus RFC1918 space

Joe Abley jabley at hopcount.ca
Fri Jan 10 17:50:09 UTC 2014

On 2014-01-10, at 11:30, David Conrad <drc at virtualized.org> wrote:

> On Jan 10, 2014, at 6:36 AM, Joe Abley <jabley at hopcount.ca> wrote:
>> what hosts, when presented with a destination address of, might send traffic to the network, despite the requirements of RFC 1700? I would hope the answer is "none". But I have seen strange things in the world.
> Assuming there are hosts that are sufficiently broken to try send out a packet to (none of the system to which I have access do), what risks do you see? If there is a risk, what would stop miscreants from triggering that badness today?

The basic risk that this proposal seeks to mitigate is that sensitive traffic will leak towards the Internet as a result of a previously non-existent name turning into a real name. We are in slightly uncharted waters with the assumption that traffic aimed at will stay local. Quite possibly comfort can be gained that the number of hosts that might behave contrary to RFC1700 is small, but since there has been no measurement, this is just a guess.
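The "stays local" assumption can at least be stated mechanically. Here's a minimal Python sketch; 127.255.255.254 is a purely illustrative member of 127/8, not the address under discussion:

```python
import ipaddress

# The whole 127.0.0.0/8 block is reserved for loopback; a conforming
# stack should never emit packets for these addresses on the wire.
loopback_block = ipaddress.ip_network("127.0.0.0/8")

# Purely illustrative member of the block -- NOT the address the
# proposal actually uses.
candidate = ipaddress.ip_address("127.255.255.254")

print(candidate in loopback_block)   # membership in 127/8
print(candidate.is_loopback)         # the stdlib agrees it is loopback
```

Of course, this only tells us what a conforming stack should do; it says nothing about the embedded stacks that worry me below.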

If I were to illustrate this in more alarmist fashion: we hope that no embedded IP stacks within nuclear power stations and hospitals react badly to this approach (but we don't know).

> I suppose if is considered too risky, we could instead use (since we're all told that doesn't work, right? :)).

My point really is that there is no safe address we can use, if we are looking for 100% safety.

If we're not concerned about the 100%, then that's useful to say out loud. For example, if we're only talking about names that are observed to leak significantly (and we're ignoring the enormous tail) then we're only really talking about HOME and CORP, and we can stop talking in general terms about all new gTLDs.

>>> Now, with that assumption, what traffic are you talking about?  The connection attempt by the application to
>> Yeah. So, this proposal is identical to the normal state of things in many respects, but it attempts to trigger local traffic rather than (potentially) traffic across the Internet in the event that an internal name starts to exist in the public namespace.
> Yes, with the assumption that said attempts will fail quickly, resulting in action being taken by the folks behind the connection attempt to address the issue.

I'm not sure that assumption is a good one. Here's an anecdote.

Back in the late 1990s, pre-AS112, the authoritative servers for 10.IN-ADDR.ARPA and friends disappeared from Bill Manning's basement for a while. Clients that had previously received a swift NXDOMAIN were suddenly getting timeouts. RADIUS servers in New Zealand (specifically dealing with PPPoA/DSL authentication for Telecom's DSL product) started failing because the timeout for resolving a name was greater than the timeout for receiving a response.

So a connectivity problem in Bill Manning's basement in El Segundo caused all DSL authentication in New Zealand to fail.
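The failure mode in that anecdote is a plain timeout-budget mismatch, which this sketch spells out. All the numbers are hypothetical, none are taken from the actual incident:

```python
# Hypothetical timeout budgets (illustrative only, not measured values).
resolver_timeout = 30.0   # worst case: retries against dead 10.IN-ADDR.ARPA servers
radius_deadline = 10.0    # how long the client waits for a RADIUS response

# While NXDOMAIN answers were fast, resolution fit comfortably in the budget.
nxdomain_latency = 0.05
print(nxdomain_latency < radius_deadline)   # authentication works

# Once the servers vanished, each lookup consumed the full resolver timeout,
# blowing the RADIUS deadline before any response could arrive.
blocking_lookup_latency = resolver_timeout
print(blocking_lookup_latency > radius_deadline)   # authentication fails
```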

In this case you're talking about what you hope will be a rapid failure for a local destination that (you hope) is not configured. However, what studies exist to confirm that in most or all cases that failure will be rapid? How many devices have local firewalls installed that didn't consider the possibility of

A timeout is a timeout, regardless of whether it's due to unavailable nameservers or a problem connecting to an unusual loopback-ish address.
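Whether the local failure is fast depends entirely on how the stack rejects the connection. A small sketch of the distinction, assuming nothing is listening on the probed loopback port and no local firewall silently drops the packets:

```python
import socket
import time

# Find a loopback port that is almost certainly closed: bind an ephemeral
# port to learn its number, then release it before connecting.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
closed_port = probe.getsockname()[1]
probe.close()

start = time.monotonic()
try:
    # A closed port on loopback normally answers with an immediate RST,
    # so the caller sees ConnectionRefusedError in well under a second.
    socket.create_connection(("127.0.0.1", closed_port), timeout=5.0)
    outcome = "connected"
except ConnectionRefusedError:
    outcome = "refused"
except socket.timeout:
    # A local firewall that silently drops the packets lands here instead:
    # the hoped-for rapid failure becomes a full connection timeout.
    outcome = "timeout"
elapsed = time.monotonic() - start
print(outcome, round(elapsed, 3))
```

On a stock host the connection is refused almost instantly; a firewall that drops rather than rejects converts that into the multi-second timeout case, which is exactly the corner I'm worried about.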

I'm not saying this can't work. I'm just saying there's potentially more fallout here than might appear, and that experimental data is better than assumptions. I can't think of convincing experiments, however, which just leaves us with avoiding assumptions.

>>> Err, what legal-economic difficulties?
>> The idea of changing the requirements for the root zone partners or new gTLD registry operators in how they perform a delegation,
> As far as I am aware, no one is proposing changing requirements for the root zone partners.

Yet :-)

The suggestion that a mechanism like this one could be used to mitigate search-list problems with overloaded TLDs like MAIL, CORP or HOME by provisioning records in the root zone should be stamped on for the good of all humanity (or else we should re-focus our thinking on how to deal with the eventual heat death of the universe, because surely that will happen first.)

> WRT the new gTLD registry operators, I believe the goal here is to come up with a way to address name collision concerns so they can move towards regular operations and not have to worry about block lists, etc.

I can appreciate the desire to avoid block lists.

>> e.g. for registry operators who have already passed pre-delegation testing and are perhaps already running some new gTLD registries in production.
> I'm not positive (or authoritative in any way) but I don't believe folks who are already running in production will be forced to do anything they don't want to.  As I said, I believe for them, the idea is to allow them to move more quickly to normal operation.

So what happens to a registry that is currently publishing new gTLD zones for delegated strings, but is also expected to accommodate approved but as-yet-undelegated strings?

>>> Could you be explicit in the headaches you see?
>> It seems like a lot of work for lawyers and developers,
> Not being a lawyer, I'll not comment on that bit. What development do you see being necessary?

It's the changes to the registry state machine that worry me.

We seem to be talking about changing the registry state machine such that there's an extra stage involved whereby DNS records other than those required for a simple delegation will be published for some period of time before they are replaced by the expected delegation. I suspect that in at least some cases this is going to require code changes, helpdesk/NOC changes, monitoring and measurement changes, documentation changes, re-education for sales staff, etc. None of this is impossible, but it seems to me it has a distinctly non-zero cost.
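To make the state-machine change concrete, here is a hypothetical model of the kind of extra stage being discussed. The state names are invented for this sketch; real registry systems differ:

```python
from enum import Enum, auto

class TLDState(Enum):
    APPROVED = auto()
    CONTROLLED_INTERRUPTION = auto()  # new stage: publish non-delegation records
    DELEGATED = auto()

# Today's transitions versus the proposal's (hypothetical modelling).
current_transitions = {TLDState.APPROVED: TLDState.DELEGATED}
proposed_transitions = {
    TLDState.APPROVED: TLDState.CONTROLLED_INTERRUPTION,
    TLDState.CONTROLLED_INTERRUPTION: TLDState.DELEGATED,  # after the wait period
}

def path(transitions, state):
    """Walk the machine from `state` to its terminal state."""
    steps = [state]
    while state in transitions:
        state = transitions[state]
        steps.append(state)
    return steps

print([s.name for s in path(current_transitions, TLDState.APPROVED)])
print([s.name for s in path(proposed_transitions, TLDState.APPROVED)])
```

Every piece of tooling that assumes the two-state path (provisioning code, monitoring, documentation, helpdesk scripts) has to learn about the intermediate state, which is where the non-zero cost comes from.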

>> it has the small chance of causing unexpected side-effects due to the way the slightly unusual address chosen,
> If there is a risk, it exists today and can be triggered remotely by miscreants in a variety of ways. The chain of events necessary for something bad to happen because is returned in an A query seems to me to be particularly remote.

Right. As I keep failing to explain clearly, I'm not concerned that this can't work. I'm concerned about the corner cases.

>> I realise I'm coming across as very negative about the whole idea. This is not a religious crusade against loopback abuse, and as I mentioned earlier I like the general line of thinking. But I don't think the benefits of this approach outweigh the costs.
> Out of curiosity, what do you believe would be a solution that has a better cost/benefit ratio?

I have two.

1. I think just delegating new domain names under new gTLDs in the conventional manner is easier to troubleshoot, from the point of view of an enterprise that experiences a name collision, than any intermediate attempt to try to be more helpful. It also has the advantage of the registry goalposts not lurching from side to side.

2. CORP and HOME are the problem children. Full-page ads in a set of international newspapers saying things like "If you use CORP and HOME inside your business network, you need to be aware of this, URL", repeated in all UN languages, are likely to be more useful in reaching the target audience than relying on people noticing and understanding what means.

I don't think the fact that we are struggling to establish communications with people responsible for enterprise networks is a technical problem.
