[dns-operations] Why non-repeating transaction IDs?

Florian Weimer fw at deneb.enyo.de
Sun Aug 5 08:15:44 UTC 2007

* Paul Vixie:

>> Naturally, I recommend against using weak primitives such as random().
>> Using RC4 with a random key (and applying the usual safety measures)
>> is actually easier to code.  So why do people who independently try to
>> improve implementations come up with different code?
> i can't think of anything easier to code than a call to random(), and,
> as long as i reseed often enough, i can't think of anything better than
> random() whose betterness would be apparent in a 16-bit number system.
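For readers following along, here is a minimal sketch of the RC4-keystream approach mentioned above: key RC4 once from the OS RNG and draw two keystream bytes per 16-bit QID. The class and method names are illustrative, not from any real resolver; "the usual safety measures" is taken here to mean discarding the initial keystream bytes.

```python
import os

class Rc4QidGenerator:
    """16-bit DNS transaction IDs drawn from an RC4 keystream keyed
    once from the OS RNG. Illustrative sketch only; modern code would
    simply use a CSPRNG directly (e.g. secrets.randbits(16))."""

    def __init__(self):
        key = os.urandom(16)
        # RC4 key-scheduling algorithm (KSA)
        self.S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + self.S[i] + key[i % len(key)]) % 256
            self.S[i], self.S[j] = self.S[j], self.S[i]
        self.i = self.j = 0
        # One "usual safety measure": drop the early keystream bytes,
        # which are known to be biased.
        for _ in range(1024):
            self._byte()

    def _byte(self):
        # RC4 pseudo-random generation algorithm (PRGA)
        self.i = (self.i + 1) % 256
        self.j = (self.j + self.S[self.i]) % 256
        S = self.S
        S[self.i], S[self.j] = S[self.j], S[self.i]
        return S[(S[self.i] + S[self.j]) % 256]

    def next_qid(self):
        # Two keystream bytes make one 16-bit transaction ID.
        return (self._byte() << 8) | self._byte()
```

Note that, unlike repeated calls to random() with periodic reseeding, the keystream is uniform and unpredictable for the whole session from a single keying step.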

*sigh* Experience has taught me not to argue about crypto, so I'll stop.

> "in use" means there is an outward bound query still in flight,
> which hasn't timed out or been answered yet.  although the full
> uniqueness tuple includes the remote server and i could reuse a
> <SADDR,SPORT,QID> when talking to a different remote server, i
> don't.  but in practice i've hardly ever measured a QID collision
> even under high stress benchmarks.
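The "in use" rule described above can be sketched as follows: draw random QIDs but refuse to hand out one that still has an outstanding query, releasing it on answer or timeout. Names are hypothetical; this ignores the remote-server component of the tuple, as the text says the implementation does.

```python
import secrets

class QidAllocator:
    """Sketch of the 'in use' rule: a QID is never reused while a
    query carrying it is still in flight."""

    def __init__(self):
        self.in_flight = set()

    def allocate(self):
        # Retry until we draw a QID with no outstanding query.
        # With few queries in flight, a collision in a 16-bit space
        # is rare, so this loop almost always runs once.
        while True:
            qid = secrets.randbits(16)
            if qid not in self.in_flight:
                self.in_flight.add(qid)
                return qid

    def release(self, qid):
        # Called when the query is answered or times out.
        self.in_flight.discard(qid)
```

With this scheme a QID collision cannot occur between concurrent queries at all, which is what makes the question below worth asking.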

What badness happens when there is a collision?  Why do you need to
avoid it?

>> > but for DNS purposes, it's a 16 bit field, so all values are
>> > "predictable".
>> It tends to make a difference if you need 3 instead of 30,000.

"3 attempts instead of 30,000 for poisoning the cache", sorry.  But
perhaps I should run a few experiments first, to see if this really
makes a difference.
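As a back-of-envelope version of that experiment: if the attacker can forge k distinct guesses per query window and the QID is uniform over 2^16 values, the expected number of windows until a hit is 2^16/k, whereas a predictable QID needs roughly one. The function below is an illustrative calculation, not a measurement.

```python
def expected_windows(qid_bits=16, guesses_per_window=1):
    """Expected number of query windows an off-path attacker needs
    when the resolver picks a uniformly random QID of `qid_bits` bits
    and the attacker sends `guesses_per_window` distinct forged
    responses per window. Back-of-envelope only: geometric
    distribution with success probability k / 2^bits."""
    space = 2 ** qid_bits
    p = guesses_per_window / space
    return 1 / p
```

So a fully random 16-bit QID with one guess per window costs the attacker about 65,536 windows on average; a QID predictable to within a handful of values collapses that to single digits, which is the "3 instead of 30,000" gap.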
