[dns-operations] Why non-repeating transaction IDs?
paul at vix.com
Fri Aug 3 18:15:15 UTC 2007
> Naturally, I recommend against using weak primitives such as random().
> Using RC4 with a random key (and applying the usual safety measures)
> is actually easier to code. So why do people who independently try to
> improve implementations come up with different code?
i can't think of anything easier to code than a call to random(), and,
as long as i reseed often enough, i can't think of anything better than
random() whose advantage would actually be visible in a 16-bit space.
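a minimal sketch of that approach (not anyone's actual code — the function name `next_qid`, the reseed interval, and the time()-based seed are all illustrative; a real resolver would mix in better entropy):

```c
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

static unsigned long queries_since_seed;

/* draw a 16-bit QID from random(), reseeding "often enough" --
 * here, arbitrarily, every 1000 draws. */
static uint16_t next_qid(void)
{
    if (queries_since_seed++ % 1000 == 0)
        srandom((unsigned)time(NULL) ^ (unsigned)queries_since_seed);
    return (uint16_t)(random() & 0xffff);
}
```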
> The only thing that might require convoluted code is the desire for
> non-repeating transaction IDs. With random IDs, you've got a collision
> every few hundred packets. ...
my higher level implementation is to have 16 pools of 64K QID's and to
use the my_random() function shown here earlier as a way to pick first a
4-bit pool and then within that pool a 16-bit candidate. if the candidate
is already in use, i throw it away and draw another one. "in use" means
there is an outbound query still in flight, which hasn't timed out or
been answered yet. although the full uniqueness tuple includes the remote
server, and i could reuse a <SADDR,SPORT,QID> when talking to a different
remote server, i don't. even so, in practice i've hardly ever measured a
QID collision, even under high-stress benchmarks.
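the draw-and-reject part of the scheme above can be sketched like this (a hedged illustration, not the real implementation — the `in_flight` bitmap stands in for the resolver's actual outstanding-query table, and random() stands in for my_random(); the pool-selection step is omitted for brevity):

```c
#include <stdint.h>
#include <stdlib.h>

#define QID_SPACE 65536

static uint8_t in_flight[QID_SPACE / 8];        /* one bit per QID */

static int  qid_busy(uint16_t q)  { return in_flight[q >> 3] &  (1 << (q & 7)); }
static void qid_mark(uint16_t q)  { in_flight[q >> 3] |= (uint8_t)(1 << (q & 7)); }
static void qid_clear(uint16_t q) { in_flight[q >> 3] &= (uint8_t)~(1 << (q & 7)); }

/* draw random candidates, throwing away any QID that is still
 * in flight; mark the winner as in use until its query times out
 * or is answered (qid_clear). */
static uint16_t qid_alloc(void)
{
    uint16_t qid;
    do {
        qid = (uint16_t)(random() & 0xffff);
    } while (qid_busy(qid));
    qid_mark(qid);
    return qid;
}
```

with only a few hundred queries in flight out of 64K possible QIDs, the rejection loop almost never iterates more than once, which matches the observation that collisions are rarely measured even under stress.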
> > but for DNS purposes, it's a 16 bit field, so all values are
> > "predictable".
> It tends to make a difference if you need 3 instead of 30,000.
i'm not sure what you mean here. the number of upstream queries in flight
equates roughly to the number of state blobs i'm carrying in the full
resolver, and while i've seen 30000 (actually i've seen 100000) in stress
benchmarks, i hardly ever see more than a few hundred per million client
stub resolvers.