[dns-operations] Why non-repeating transaction IDs?
fw at deneb.enyo.de
Fri Aug 3 17:07:45 UTC 2007
* Paul Vixie:
>> I see that people use lots of home-grown algorithms to get random, but
>> mostly non-repeating transaction IDs in their resolvers. I can't find the
>> rationale for this; a straightforward PRNG or stream cipher seems sufficient
>> for this task. Any pointers to RFCs or papers are appreciated.
> when i asked vernon schryver about this topic, he gave me the following. note
> that true randomness experts would say that anything in libc is predictable,
Yeah, once you've got srandomdev(), the remaining parts are easy.
Naturally, I recommend against using weak primitives such as random().
Using RC4 with a random key (and applying the usual safety measures)
is actually easier to code. So why do people who independently try to
improve implementations come up with different code?
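As a sketch of the RC4 approach (Python here for brevity; the class and method names are mine, and keying comes from os.urandom — the "usual safety measure" shown is dropping the initial, biased keystream bytes, so-called RC4-drop):

```python
import os

class RC4:
    """Plain RC4 keystream generator (illustrative sketch, not a vetted
    implementation)."""
    def __init__(self, key, drop=768):
        # Key-scheduling algorithm (KSA).
        self.S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + self.S[i] + key[i % len(key)]) % 256
            self.S[i], self.S[j] = self.S[j], self.S[i]
        self.i = self.j = 0
        # RC4-drop: discard early output, which is statistically biased.
        for _ in range(drop):
            self.next_byte()

    def next_byte(self):
        self.i = (self.i + 1) % 256
        self.j = (self.j + self.S[self.i]) % 256
        self.S[self.i], self.S[self.j] = self.S[self.j], self.S[self.i]
        return self.S[(self.S[self.i] + self.S[self.j]) % 256]

    def next_txid(self):
        # Two keystream bytes make one 16-bit DNS transaction ID.
        return (self.next_byte() << 8) | self.next_byte()

gen = RC4(os.urandom(16))   # key once at resolver startup
ids = [gen.next_txid() for _ in range(5)]
assert all(0 <= t <= 0xFFFF for t in ids)
```

Keyed once at startup, this gives a fresh unpredictable 16-bit ID per call with no per-query system calls, which is the ease-of-coding point above.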
The only thing that might require convoluted code is the desire for
non-repeating transaction IDs. With random IDs, you get a collision
every few hundred packets. Why is this a problem? This is not TCP;
queries are idempotent.
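For scale, the "every few hundred packets" figure is just the birthday bound on a 16-bit space: roughly sqrt(2 ln 2 * 2^16), about 300 IDs in flight for a 50% chance of a repeat. A quick check (Python sketch; the function name is mine):

```python
def collision_probability(n, space=2 ** 16):
    """Probability that n uniform draws from a `space`-sized set
    contain at least one repeat (exact birthday computation)."""
    p_unique = 1.0
    for k in range(n):
        p_unique *= (space - k) / space
    return 1.0 - p_unique

# Around 302 draws, the collision probability crosses 50%.
p = collision_probability(302)
assert 0.45 < p < 0.55
```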
> but for DNS purposes, it's a 16 bit field, so all values are "predictable".
It tends to make a difference if you need 3 instead of 30,000.
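To put numbers on "3 instead of 30,000": an off-path spoofer who must guess a uniformly random 16-bit ID needs on average about 2^15 ≈ 32,768 forged packets, while a predictable generator whose next output can be narrowed to a handful of candidates cuts that to a few. In a toy model (Python; the function name is mine):

```python
def expected_guesses(candidates):
    """Mean number of attempts to hit a uniform target among `candidates`
    equally likely values, trying each candidate once (toy model)."""
    return (candidates + 1) / 2

print(expected_guesses(2 ** 16))  # truly random 16-bit ID: ~32768 forgeries
print(expected_guesses(5))        # PRNG state narrowed to 5 candidates: ~3
```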