Mike Hoskins (michoski)
michoski at cisco.com
Mon Dec 15 18:42:09 UTC 2014
I hesitate to get involved in a holy war...but really huge +1 to this.
You have diversity, in a managed way. It's not a silver bullet -- but
just as there have been times it made things worse, there have been
times when it made things better (buffer overflows that worked on one
architecture but not another are a common example where I've had
personal experience).
You can acknowledge things aren't a panacea, while still deriving some
benefits from them. Monitoring/analytics (intelligence) is key, so the
operator can intelligently control flows across their services based on
risks and observed threats.
From: Warren Kumari <warren at kumari.net>
Date: Monday, December 15, 2014 at 12:22 PM
To: David Conrad <drc at virtualized.org>
Cc: "dns-operations at lists.dns-oarc.net" <dns-operations at dns-oarc.net>
Subject: Re: [dns-operations] knot-dns
>.... or run a different load-balancing algorithm. There are multiple
>ways to do this, but having the load-balancer hash only on src IP and
>dst IP means that you should have the same client hitting the same
>server.
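A minimal sketch of the src/dst hashing idea described above -- hypothetical code, not taken from any real load-balancer; the backend addresses are made up:

```python
import ipaddress

def pick_backend(src_ip: str, dst_ip: str, backends: list) -> str:
    """Hash only on source and destination IP, so a given client
    always lands on the same backend (illustrative sketch only)."""
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return backends[key % len(backends)]

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# The same (src, dst) pair always maps to the same backend,
# regardless of port or how many short UDP sessions the client opens.
first = pick_backend("192.0.2.7", "198.51.100.53", backends)
assert first == pick_backend("192.0.2.7", "198.51.100.53", backends)
```

Because ports are excluded from the key, retransmits and new queries from one resolver stay pinned to one backend.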
>Then again, it all depends on why you are running a load-balancer --
>many of which become very sad with lots of short UDP sessions, and
>fall over way before a raw server would. If you are "load-balancing"
>to optimize uptime, a nice option is:
>Each of ns[1-4].example.com is a different IP and is "load-balanced"
>behind a router. Each of the instances contains multiple machines,
>and each machine announces the fact that it is alive and answering
>queries by announcing itself into OSPF (with e.g. ospfd and a tiny
>health-check script).
>ns1 runs BIND and NSD. The BIND boxes announce themselves into OSPF
>with a cost of 10, the NSD boxes announce themselves with a cost of
>100.
>ns2 runs NSD and Knot. The NSD boxes have a cost of 10, the Knot a
>cost of 100.
>ns3 runs YADIFA and BIND. The YADIFA have a cost of 10, the BIND 100.
>ns4 runs NSD and BIND. The NSD cost 10, the BIND 100.
>OSPF doesn't do unequal-cost load-balancing, and so the "primary"
>name server software at each location gets all the traffic, unless it
>fails, at which time the backup takes over. There can be multiple
>machines of the same type at each location; routers are more than
>happy to ECMP down 16 paths with no issue.
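The routing behavior described above can be modeled in a few lines of Python -- a sketch of the selection logic only, not of OSPF itself; the server names are invented:

```python
def route_targets(announcements):
    """Given (server, cost, alive) announcements, return the servers
    that receive traffic: all live servers sharing the lowest cost.
    OSPF only ECMPs across equal-cost paths, so the cost-10 boxes
    shadow the cost-100 backups until they stop announcing."""
    live = [(cost, server) for server, cost, alive in announcements if alive]
    if not live:
        return []
    best = min(cost for cost, _ in live)
    return [server for cost, server in live if cost == best]

# Primary (cost 10) gets all the traffic while it is alive...
assert route_targets([("bind1", 10, True), ("nsd1", 100, True)]) == ["bind1"]
# ...and the cost-100 backup takes over the moment it withdraws.
assert route_targets([("bind1", 10, False), ("nsd1", 100, True)]) == ["nsd1"]
# Multiple equal-cost primaries are ECMP'd together.
assert set(route_targets([("bind1", 10, True), ("bind2", 10, True)])) == {"bind1", "bind2"}
```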
>There is a small Python script that takes zone-serving information,
>outputs the correct stanzas for each nameserver type, and uses
>[puppet|chef|ansible|salt|rsync|nfs|ssh] to propagate them to all the
>servers.
>I've built basically this at multiple places. You need a few extra
>boxes, but they are a: cheap and b: used for other stuff when not
>primary.
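A hypothetical sketch of the kind of generator script mentioned above -- the zone list, paths, and function names here are illustrative, not the actual script; the stanza shapes follow standard named.conf and nsd.conf syntax:

```python
# Invented example input: zone name -> zone file path.
ZONES = {"example.com": "/var/zones/example.com.zone"}

def bind_stanza(zone: str, path: str) -> str:
    """Emit a BIND named.conf zone stanza."""
    return f'zone "{zone}" {{ type master; file "{path}"; }};'

def nsd_stanza(zone: str, path: str) -> str:
    """Emit an NSD nsd.conf zone stanza."""
    return f"zone:\n  name: {zone}\n  zonefile: {path}"

for zone, path in ZONES.items():
    print(bind_stanza(zone, path))
    print(nsd_stanza(zone, path))
```

One generator per nameserver type keeps the zone list as the single source of truth; propagation of the rendered files is then just a file-sync problem for whatever tool is at hand.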
>"A host is a host from coast to coast
>But no one will talk to a host that's close
>Unless of course the host that's close is busy, hung, or dead."
> - sung to the Mr Ed tune.