[dns-operations] Quick anycast primer
Steve Gibbard
scg at gibbard.org
Fri Jul 14 19:18:01 UTC 2006
On Fri, 14 Jul 2006, Jim Reid wrote:
> On Jul 14, 2006, at 08:41, Steve Gibbard wrote:
>
>> In a typical anycast cloud, there are several servers in several locations
>> sharing a service address. When all servers in the cloud are up, and
>> routing to all of them is working properly, queries sent to that service
>> address are responded to by the topologically closest server.
>
> ....modulo peering agreements/policies.
>
> Once upon a time Steve, some of my zones were served on Nominum's anycast
> service, GNS. [You might have had a hand in setting that up with Bill
> Woodcock.] When I queried the anycast address from home, the packets went to
> LINX where they waved at the Nominum servers on their way to the anycast
> instance in Palo Alto. And back again. This was because the ISP I used in
> those days didn't peer with Nominum in LINX. Their upstream provider did that
> at PAIX.
>
> I suppose it all depends on what's meant by "topologically closest".
I was not involved in the Nominum anycast service. It was before my time.
John Payne has done a very nice job of addressing the topology issues in
this thread, but as a follow-up:
Packets going a very long distance when they could go a much shorter one
is a topology issue that needs to be taken into account whether you're
building a network of anycast servers or a wide area network to host
unicast servers. The problem is often even worse with unicast. Traffic to
a unicast server in a network with the topology you describe might have to
go from London to Palo Alto and then back to London to find its
destination. In the anycast example, it found an acceptable destination
once it got to Palo Alto.
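One way to see which instance of an anycast service is actually answering
from a given vantage point is to ask the server to identify itself. Many
(though not all) name server implementations answer a CHAOS-class TXT
query for hostname.bind with an identifier for the instance that handled
the query; "dig CH TXT hostname.bind @<service address>" does the same
thing. The sketch below is just an illustration using the dnspython
library and a placeholder service address, neither of which comes from
this thread:

    import dns.message
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    # Placeholder anycast service address; substitute the address you
    # actually want to test from your own vantage point.
    SERVICE_ADDR = "192.0.2.1"

    # CHAOS-class TXT query for hostname.bind.  Servers that support it
    # answer with an identifier for the instance that handled the query.
    query = dns.message.make_query("hostname.bind",
                                   dns.rdatatype.TXT,
                                   dns.rdataclass.CH)
    response = dns.query.udp(query, SERVICE_ADDR, timeout=2)

    for rrset in response.answer:
        print(rrset)

Run from a few different networks, the answers show which instance each
vantage point's routing actually selects.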
Fortunately, this is a solvable problem. It's therefore a problem that's
seen much less often, at least in North America and Europe, than it was
ten years ago. It's largely a matter of making sure you have local
interconnections where you need them and being consistent about who you
connect to across your coverage area.
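To make that concrete, here is a toy model -- my own illustration, not
anything from this thread or from any particular router's configuration --
of the first BGP decision steps that matter here: prefer routes with
higher local preference (peers are usually set above transit), then
shorter AS paths. The AS numbers and node names are hypothetical:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Route:
        node: str           # anycast instance that originated this path
        local_pref: int     # higher wins; peers usually set above transit
        as_path: List[int]  # AS numbers the announcement traversed

    def best_route(routes):
        # Simplified BGP selection: highest local preference, then
        # shortest AS path.  Real routers apply many more tie-breakers.
        return min(routes, key=lambda r: (-r.local_pref, len(r.as_path)))

    # Hypothetical announcements of the same anycast prefix as seen from
    # a London ISP: one learned from a local peer, two via transit.
    candidates = [
        Route("London node",    200, [64500]),
        Route("Palo Alto node", 100, [64510, 64500]),
        Route("Hong Kong node", 100, [64511, 64512, 64500]),
    ]

    print(best_route(candidates).node)  # -> London node

Remove the locally peered route from the list and the transit path to
Palo Alto is selected instead, which is exactly the situation a local
interconnection fixes.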
I've been trying to avoid getting into implementation-specific details
here since they're easy to get bogged down in, so the following is just an
example: What we've done with the PCH anycast network has been to attempt
to peer with our peers at all overlapping locations. We spread our
"global nodes" (nodes with transit, to reach networks that won't peer)
around the world, using the same two global networks as our transit
providers in all four places. This allows best-exit routing to work as
well as it can, given the topologies and peering policies of the rest of
the networks in the world. This is the same interconnection strategy that
big international networks carrying mostly unicast traffic use. It's not
perfect, but it works pretty well.
-Steve