nanog mailing list archives

Re: Industry standard bandwidth guarantee?


From: Rafael Possamai <rafael () gav ufsc br>
Date: Thu, 30 Oct 2014 13:21:41 -0500

You can't just ignore protocol overhead (or any other system's overhead). If an
application requires X bits per second of actual payload, then the system
should be designed properly, taking into account overhead as well as failure
rates, peak utilization hours, and so on. The same holds for networking,
automobile production, or any other engineered system.
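
As a rough illustration (purely assumed figures, not any standard), here is the
kind of back-of-the-envelope provisioning math that implies, assuming TCP/IPv4
over Ethernet with a 1500-byte MTU and a made-up peak-hour headroom factor:

    # Provisioning sketch: raw link capacity needed to guarantee a given
    # payload (goodput) rate once per-packet overhead and a peak-hour
    # headroom factor are included. All figures here are illustrative
    # assumptions, not standards.

    MSS = 1460            # TCP payload bytes per packet (1500-byte MTU)
    WIRE = 1538           # bytes on the wire per packet: MSS + TCP/IPv4
                          # headers (40) + Ethernet framing, preamble,
                          # FCS and inter-frame gap (38)
    PEAK_HEADROOM = 1.25  # assumed margin for peak utilization hours

    def required_link_rate(payload_bps):
        return payload_bps * (WIRE / MSS) * PEAK_HEADROOM

    # An application that needs 100 Mbit/s of actual payload:
    print(round(required_link_rate(100e6) / 1e6, 1), "Mbit/s of raw capacity")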

On Thu, Oct 30, 2014 at 7:23 AM, Jimmy Hess <mysidia () gmail com> wrote:

On Wed, Oct 29, 2014 at 7:04 PM, Ben Sjoberg <bensjoberg () gmail com> wrote:

> That 3Mb difference is probably just packet overhead + congestion

Yes... however, that is effectively an industry standard of implying higher
performance than reality. End users don't care about the datagram overhead
their applications never see; they just want X megabits of real-world
performance. This industry would perhaps be better off if we called a link
that can reliably deliver at best 17 megabits of goodput a "15 Megabit
goodput + 5" service instead of calling it a "20 Megabit service".

Or at least appended a disclaimer: "Real-world best-case download
performance: approximately 1.8 megabytes per second."
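
For the curious, a minimal sketch of where numbers like these come from,
assuming TCP/IPv4 over Ethernet with a 1500-byte MTU and counting header
overhead only (congestion control, loss, and competing traffic push real
goodput lower still):

    # Best-case TCP payload rate on a "20 Megabit" link, header overhead only.
    LINK_MBIT = 20.0
    MSS, WIRE = 1460, 1538   # payload bytes vs. total bytes on the wire

    print(f"{LINK_MBIT * MSS / WIRE:.1f} Mbit/s of goodput at best")  # ~19.0

    # The conservative 15 Mbit goodput figure, expressed as a download rate:
    print(f"{15 / 8:.2f} megabytes per second")                       # ~1.88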


That would mean subtracting overhead and quoting the result instead of raw
link speeds. But that's not the industry standard. I believe the industry
standard is to quote the numerically highest performance figure achievable
through best-case theoretical testing, then let the end user experience the
disappointment and explain the misunderstanding later.

End users are also more concerned about their individual download rate on
actual file transfers than about the total averaged aggregate throughput of a
network with 10 users or 10 streams downloading data simultaneously, or about
the characteristics transport protocols worry about, such as fairness.
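
To make the aggregate-versus-individual distinction concrete, a toy example
(assuming perfectly fair sharing, which real TCP only approximates):

    # A link can be "fully utilized" in aggregate while each user's own
    # transfer sees only a fraction of the advertised rate.
    LINK_MBIT = 20.0
    flows = 10

    print(f"aggregate throughput: {LINK_MBIT:.0f} Mbit/s")
    print(f"per-user rate with {flows} simultaneous downloads: "
          f"{LINK_MBIT / flows:.1f} Mbit/s")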


> control. Goodput on a single TCP flow is always less than link
> bandwidth, regardless of the link.

---
-JH


