[dns-operations] DNS Traffic Archive Protocol
bedrich.kosata at nic.cz
Thu Dec 2 16:11:00 UTC 2010
On 12/02/2010 02:55 PM, Tony Finch wrote:
> On Thu, 2 Dec 2010, Bedrich Kosata wrote:
> How does this compare to a general purpose compression algorithm?
In general, whenever I introduced a new method for making the result
smaller, I compared the sizes of the old and new versions of the format,
both uncompressed and compressed (gzip, bzip2). When the difference in
the compressed form was too small, I usually did not accept the new
method. By doing this, I introduced only those changes that work *in
addition* to a general-purpose compression, rather than merely
duplicating what such compression already does.
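The measurement loop can be sketched roughly as follows (a toy illustration; the function names and the 2% acceptance threshold are made up for the example, not part of the format):

```python
import bz2
import gzip

def compressed_sizes(data: bytes) -> dict:
    """Size of `data` raw and under gzip/bzip2."""
    return {
        "raw": len(data),
        "gzip": len(gzip.compress(data)),
        "bzip2": len(bz2.compress(data)),
    }

def worth_keeping(old: bytes, new: bytes, min_ratio: float = 0.02) -> bool:
    """Accept the new format version only if it still saves at least
    `min_ratio` of the *compressed* size -- i.e. the change adds
    something on top of general-purpose compression."""
    old_gz = len(gzip.compress(old))
    new_gz = len(gzip.compress(new))
    return (old_gz - new_gz) / old_gz >= min_ratio
```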
>> - save only relevant data, remove details of lower network layers (store
>> all data in the same way regardless of IP version or transport protocol).
This is something general purpose compression cannot do - unless it's a
lossy compression :)
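The idea is that after capture, every message is reduced to the same transport-independent record, whatever the lower layers looked like. A minimal sketch (field names are illustrative, not the actual archive format):

```python
from dataclasses import dataclass

@dataclass
class DnsRecord:
    """Transport-independent view of one DNS message: Ethernet, IP and
    UDP/TCP header details are dropped (lossy by design), only fields
    relevant to DNS analysis remain."""
    ip_version: int      # 4 or 6 -- a single bit in the wire format
    transport: str       # "udp" or "tcp"
    client: bytes        # client address, 4 or 16 bytes
    server: bytes        # server address
    client_port: int
    dns_payload: bytes   # the raw DNS message itself

def normalize(ip_version, transport, client, server, client_port, payload):
    """Strip a captured packet down to one uniform record shape,
    regardless of IP version or transport protocol."""
    return DnsRecord(ip_version, transport, client, server,
                     client_port, payload)
```

An IPv4/UDP packet and an IPv6/TCP packet end up as the same record type, so later processing never has to branch on the lower layers.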
>> - store data as efficiently as possible (use single bits or portions of
>> a byte where possible, such as for IP version, transport protocol, etc.).
This works in some ways similarly to generic compression.
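For example, three one-bit fields fit into a single byte (the bit layout below is made up for illustration, not the actual format):

```python
def pack_flags(ip_version: int, transport: str, qr: int) -> int:
    """Pack IP version (bit 0: 0=IPv4, 1=IPv6), transport
    (bit 1: 0=UDP, 1=TCP) and query/response (bit 2) into one byte."""
    b = 0
    b |= (0 if ip_version == 4 else 1) << 0
    b |= (0 if transport == "udp" else 1) << 1
    b |= (qr & 1) << 2
    return b

def unpack_flags(b: int):
    """Recover the three fields from the packed byte."""
    return (
        4 if (b >> 0) & 1 == 0 else 6,
        "udp" if (b >> 1) & 1 == 0 else "tcp",
        (b >> 2) & 1,
    )
```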
>> - combine DNS queries and responses together to remove redundancy.
This is something that generic compression would be able to do, but only
on the data level, not on the semantic level. Thus doing this not only
saves space, but also simplifies later processing.
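Pairing could look something like this (a toy sketch; real matching would also have to handle timeouts, retries and unanswered queries):

```python
def merge_transactions(messages):
    """Pair DNS queries with their responses by (message id, client
    address, client port) and emit one combined record per transaction,
    so the shared fields are stored only once."""
    pending = {}
    merged = []
    for m in messages:  # m: dict with id, client, port, is_response, data
        key = (m["id"], m["client"], m["port"])
        if not m["is_response"]:
            pending[key] = m
        elif key in pending:
            q = pending.pop(key)
            merged.append({"id": m["id"], "client": m["client"],
                           "port": m["port"],
                           "query": q["data"], "response": m["data"]})
    return merged
```

A consumer then sees one record per transaction instead of having to re-associate two packets itself.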
>> - store repetitive data only when they change (server IP address, server
>> port and RR class are good candidates for this)
This might be possible with general-purpose compression, but probably
not as efficiently.
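The change-only encoding can be sketched like this (illustrative only; here a record simply omits a field when it equals the previous record's value, where the real format would use a presence bitmap):

```python
def delta_encode(records, fields=("server_ip", "server_port", "rr_class")):
    """Emit each record with the slowly-changing fields included only
    when they differ from the previous record."""
    prev = {}
    out = []
    for rec in records:
        changed = {f: rec[f] for f in fields if prev.get(f) != rec[f]}
        prev.update({f: rec[f] for f in fields})
        out.append({**{k: v for k, v in rec.items() if k not in fields},
                    **changed})
    return out
```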
>> - store only data important for the task at hand, i.e., make certain
>> data only an optional part of the output. This way each user could
>> fine-tune the format to suit his needs. A good candidate for this is the
>> content of responses, which makes for the largest part of the data
>> (especially with the introduction of DNSSEC).
Again, this is something generic compression would not be able to do.
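In spirit, the writer just skips the optional sections the user did not ask for (a minimal sketch with made-up field names, not the actual format):

```python
def encode_record(rec, include_response_content=False):
    """Write only the fields the user asked for: the response content,
    the bulkiest part of the data (especially with DNSSEC), is optional,
    so someone who only studies query patterns can drop it entirely."""
    out = {"qname": rec["qname"], "qtype": rec["qtype"]}
    if include_response_content:
        out["response"] = rec["response"]
    return out
```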