[dns-operations] DNS server benchmarking sanity check
Robert Edmonds
edmonds at mycre.ws
Mon Aug 15 20:12:16 UTC 2016
Jared Mauch wrote:
> 50 Gb/s for TCP vs ~6 Gb/s for UDP. I’ve been frustrated by these defaults
> in Linux that result in such a performance difference as the outcome.
>
> eg:
>
> puck:~$ iperf -c localhost
> ------------------------------------------------------------
> Client connecting to localhost, TCP port 5001
> TCP window size: 2.50 MByte (default)
> ------------------------------------------------------------
> [ 3] local 127.0.0.1 port 60088 connected with 127.0.0.1 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 58.2 GBytes 50.0 Gbits/sec
> puck:~$ iperf -c localhost -u -b 50000m
> ------------------------------------------------------------
> Client connecting to localhost, UDP port 5001
> Sending 1470 byte datagrams, IPG target: 0.24 us (kalman adjust)
> UDP buffer size: 208 KByte (default)
> ------------------------------------------------------------
> [ 3] local 127.0.0.1 port 43066 connected with 127.0.0.1 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 6.78 GBytes 5.82 Gbits/sec
> [ 3] Sent 4950578 datagrams
> [ 3] Server Report:
> [ 3] 0.0-10.0 sec 6.58 GBytes 5.66 Gbits/sec 0.000 ms 141142/4950578 (2.9%)
What's the MTU on your lo interface, though? I would guess either ~16 KB
or 64 KB [0], if you haven't altered the defaults. What happens if you
force the TCP test to use a ~1.5 KB MTU, or force the UDP test to use a
~64 KB datagram size?
[0] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0cf833aefaa85bbfce3ff70485e5534e09254773
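A quick way to run that comparison yourself — a sketch assuming iperf2 and a Linux box; the MSS value (1460) and datagram length (63K) are illustrative choices, not anything from the original test:

```shell
# Read the loopback MTU (65536 by default on kernels after the commit
# referenced above; 16436 before it):
cat /sys/class/net/lo/mtu

# To normalize the comparison, one could pin the TCP test to a ~1.5 KB
# segment (-M sets the MSS) or raise the UDP datagram toward the lo MTU
# (-l sets the payload length), e.g. with a server already listening:
#   iperf -s &
#   iperf -c localhost -M 1460
#   iperf -c localhost -u -b 50000m -l 63K
```

If TCP throughput collapses with the small MSS, or UDP improves with the large datagrams, the gap is mostly per-packet overhead rather than a protocol difference.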
--
Robert Edmonds