nanog mailing list archives

Re: 10gig coast to coast


From: Ben Aitchison <ben () meh net nz>
Date: Wed, 19 Jun 2013 13:08:30 +1200

On Tue, Jun 18, 2013 at 08:47:41PM -0400, Valdis.Kletnieks () vt edu wrote:
> On Wed, 19 Jun 2013 00:24:15 -0000, James Braunegg said:
> 
> > Thanks for your comments, whilst I know you can optimize servers for TCP
> > windowing I was more talking about network backhaul where you don't have
> > control over the server sending the traffic.
> 
> If you don't have control over the server, why are you allowing your
> customer to make their misconfiguration your problem?  (Mostly a rhetorical
> question, as I know damned well how this sort of thing ends up happening)

Maybe his customers are connecting to normal Internet servers.  There are a lot of
servers out there with strangely low limits on window size.

For example, on speedtest.net under Palo Alto there's "Fiber Internet Center", which seems
to have a window size of 128k.

It requests files from 66.201.42.23, and if you do something like:

curl -O  http://66.201.42.23/speedtest/random4000x4000.jpg 

then ping 66.201.42.23 and divide 1000 by the latency in milliseconds (for example 1000 / 160),
multiply by 128, and that number, in KB/s, is about what curl will show on a fast connection.
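
The same back-of-the-envelope arithmetic (window size divided by round-trip time) in shell,
assuming the 128k window and 160 ms RTT from the example above:

# throughput is roughly window / RTT: 128 KB over 0.160 s is about 800 KB/s
echo $((128 * 1000 / 160))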

speedtest.net seems to use 2 parallel connections, which raises the speed slightly, but it
seems reasonably common to come across sites with sub-optimal TCP/IP configurations.  A while
back I noticed www.godaddy.com seems to use an initial window of 2 packets, and if you go
through a proxy server that adds a Via header they seem to disable compression, so the page
loads very slowly from a remote location behind a proxy.
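
If you want to poke at that sort of thing yourself, something along these lines works with
curl (www.godaddy.com here is just the example above; whether it still behaves that way today
is another question):

# does the server gzip a normal request?
curl -s --compressed -o /dev/null -D - http://www.godaddy.com/ | grep -i content-encoding
# and does it still gzip when a Via header, like a proxy would add, is present?
curl -s --compressed -o /dev/null -D - -H 'Via: 1.1 example-proxy' http://www.godaddy.com/ | grep -i content-encoding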

With recent default Linux kernels, network speeds can be very good over high-latency connections
for both small and large files, assuming minimal packet loss.  The combination of an initial
window of 10 packets and CUBIC congestion control helps both small and large transfers, and
Linux has been improving its TCP/IP stack a lot.  But there are still quite a few less-than-ideal
TCP/IP peers around.
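
For what it's worth, you can check and tweak some of this on a Linux box (eth0 and 192.0.2.1
below are just placeholders for your interface and gateway):

# which congestion control algorithm is in use (cubic is the default on recent kernels)
sysctl net.ipv4.tcp_congestion_control
# the initial congestion window can also be pinned per route, e.g. initcwnd 10 on the default route
ip route change default via 192.0.2.1 dev eth0 initcwnd 10
ip route show default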

Also, big buffers really help with microbursts of traffic on fast connections, and small buffers
can really exaggerate TCP's sawtooth behaviour.  For all the talk of bufferbloat, in my
experience sfq works better than codel for long-distance throughput.
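
If you want to try the same comparison, swapping the root qdisc is a one-liner with tc
(eth0 is again a placeholder for your interface):

# plain sfq, which is what has worked better for me over long paths
tc qdisc replace dev eth0 root sfq perturb 10
# or codel, for comparison
tc qdisc replace dev eth0 root codel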

Ben.

