nanog mailing list archives

Carriage Burst Speeds [was Re: Proving Gig Speed]


From: Reuben Farrelly via NANOG <nanog () nanog org>
Date: Tue, 17 Jul 2018 19:20:13 +1000

On 17/07/2018 5:49 pm, James Bensley wrote:
> Also there are SO many variables when testing TCP you MUST test using
> UDP if you want to just test the network path. Every OS will behave
> differently with TCP, also with UDP but the variance is a lot lower.

One of the issues I have repeatedly run into is an incorrect burst size set on shaped carriage circuits. In the specific case I have in mind, I don't recall the exact figures, but the carrier-side configuration was something like a 64k burst on a 20M L3 MPLS circuit. End-to-end speed testing results, both with browsers and with iperf, depended enormously on whether the test was done with TCP or UDP.
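To see why a tiny burst hurts TCP so badly, here's a rough token-bucket sketch (the bucket sizes, packet size, access rate, and window size are my illustrative assumptions, not the actual carrier config): a 20 Mbit/s shaper with an 8,000-byte (64 kbit) bucket versus a 312,500-byte (2.5 Mbit) bucket, each fed one TCP window's worth of back-to-back 1500-byte packets arriving at 1 Gbit/s.

```python
def drops_in_burst(bucket_bytes, cir_bps=20e6, access_bps=1e9,
                   pkt_bytes=1500, n_pkts=44):
    """Count how many packets of one back-to-back burst a token-bucket
    shaper would drop. Figures are illustrative assumptions."""
    tokens = float(bucket_bytes)                  # bucket starts full
    # bytes of credit earned while one packet arrives at access rate
    refill = (cir_bps / 8) * (pkt_bytes * 8 / access_bps)
    dropped = 0
    for _ in range(n_pkts):
        tokens = min(bucket_bytes, tokens + refill)
        if tokens >= pkt_bytes:
            tokens -= pkt_bytes                   # conforms, forwarded
        else:
            dropped += 1                          # bucket empty, dropped
    return dropped

print(drops_in_burst(8000))    # 64 kbit bucket: most of the window is lost
print(drops_in_burst(312500))  # 2.5 Mbit bucket: the whole window conforms
```

With the 64 kbit bucket the shaper drains after a handful of packets and drops most of the window, forcing TCP into repeated loss recovery; the larger bucket absorbs the entire window. Smooth UDP at just under line rate never outruns the refill, which is why the UDP tests looked clean.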

Consequently the end customer was unable to get more than 2-3 Mbit/sec of single-stream TCP traffic through the link. The carrier insisted that because we could still get 19+ Mbit/sec of UDP, there was no issue. This was the same for all operating systems. The end customer certainly didn't feel they were getting the 20 Mbit circuit they were sold.
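The 2-3 Mbit/sec ceiling is about what loss-based congestion control predicts once the shaper is dropping packets. A back-of-the-envelope check using the well-known Mathis et al. bound, rate < MSS / (RTT * sqrt(p)) — the RTT and loss figures below are assumptions for illustration, not measurements from this circuit:

```python
from math import sqrt

def mathis_bound_bps(mss_bytes=1460, rtt_s=0.030, loss=0.02):
    """Approximate single-stream TCP ceiling: MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss))

# Assumed 30 ms RTT; 2% loss (shaper dropping) vs 0.01% loss (healthy path)
print(mathis_bound_bps(loss=0.02) / 1e6)    # a few Mbit/s: matches the symptom
print(mathis_bound_bps(loss=0.0001) / 1e6)  # tens of Mbit/s: link-limited instead
```

A couple of percent loss from the shaper caps a single stream at a few Mbit/s regardless of the CIR, which is also why 20 parallel streams could still fill the link.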

The carrier's view was that, since we were able to run 20 concurrent TCP streams and max out the link, they were providing the service as ordered. After many months of testing and negotiating we were able to get the shaper burst temporarily increased, and the issue completely went away. The customer was then able to get over 18 Mbit/sec of continuous single-stream TCP throughput. I was told that despite this finding, and the admission that the burst was indeed far too small, the carrier would continue to provision circuits with almost no burst, because that was their "standard configuration".

The common belief seemed to be that a burst was a free upgrade for the customer. My alternative view was that this parameter has to be set correctly for TCP to function properly and actually reach the quoted CIR.

I'd be very interested in others' thoughts on testing for this. It seems to me that measuring performance with UDP alone means that this very critical real-world aspect of a circuit (the burst size on a shaper) goes untested, and it appears to be a very common misconfiguration. In my case it has shown up across multiple carriers over many years, with many dozens of hours spent on "faults" related to it.

[NB: I've always used the rule of thumb that the L3 burst size should be about 1/8th of the contracted line rate, but there seems to be no consensus whatsoever about that, and certainly no agreement within the carrier world.]
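For what the 1/8th rule of thumb works out to in practice (this is just my rule applied arithmetically, not a carrier standard — most shaper configs want the burst in bytes):

```python
def burst_bytes(cir_bps, fraction=1/8):
    """Rule-of-thumb L3 shaper burst: a fraction of the CIR, in bytes."""
    return int(cir_bps * fraction / 8)

# 20 Mbit/s CIR under the 1/8th rule, vs the ~64 kbit burst as provisioned
print(burst_bytes(20e6))   # 312500 bytes (2.5 Mbit)
print(64_000 // 8)         # 8000 bytes (64 kbit)
```

On the circuit described above, the rule would have called for a burst roughly forty times larger than what the carrier provisioned.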

Reuben

