nanog mailing list archives

Re: New minimum speed for US broadband connections


From: Raymond Burkholder <ray () oneunified net>
Date: Mon, 31 May 2021 23:28:16 -0600

On 5/31/21 7:14 PM, Mike Hammett wrote:

Yes, WFH (or e-learning) is much more likely to have simultaneous uses.

Yes, I agree that 3 megs is getting thin for three video streams. Not impossible, but definitely a lot more hairy. So then what about moving the upload definition to 5 megs? 10 megs? 20 megs? Why does it need to be 100 megs?

From a packet-delivery standpoint, it comes down to things like (rough numbers are sketched after the list):

* serialization delay
* buffering
* micro-bursting
* high-priority traffic delays
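
To put rough numbers on the first of those, and on what a microburst does to a high-priority packet, here is a back-of-the-envelope sketch. The packet sizes, burst length, and link speeds are illustrative assumptions, not measurements:

  # Back-of-the-envelope: serialization delay and head-of-line queuing delay
  # on a home upload link. All sizes and speeds are illustrative assumptions.

  MTU_BYTES = 1500          # typical full-size data packet
  VOIP_BYTES = 200          # small high-priority packet (voice payload)
  BURST_PACKETS = 20        # a modest microburst already sitting in the buffer

  def serialization_ms(nbytes, mbps):
      """Time to clock one packet onto the wire at a given link rate."""
      return (nbytes * 8) / (mbps * 1_000_000) * 1000

  for mbps in (3, 10, 25, 100):
      per_pkt = serialization_ms(MTU_BYTES, mbps)
      # Without QoS, a voice packet waits for the whole burst ahead of it.
      hol_wait = BURST_PACKETS * per_pkt + serialization_ms(VOIP_BYTES, mbps)
      print(f"{mbps:>4} Mbps: {per_pkt:5.2f} ms per 1500B packet, "
            f"~{hol_wait:6.2f} ms stuck behind a {BURST_PACKETS}-packet burst")

At 3 Mbps, a voice packet stuck behind even that modest burst has already burned most of a sane jitter budget; at 100 Mbps the same burst barely registers.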

Back in the good ol' days, we would QoS the heck out of traffic types to fit the traffic into limited-size pipes.
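
Reduced to its bare essence, that boils down to something like a strict-priority dequeue. The class names and the Python below are just an illustration of the idea, not anyone's actual router configuration:

  from collections import deque

  # Toy strict-priority scheduler: the sort of thing we used to configure in
  # router/switch QoS (not Python) to squeeze voice past bulk traffic on a
  # small pipe. Class names and packets are made up for illustration.

  queues = {
      "voice": deque(),   # highest priority, serviced first
      "video": deque(),
      "bulk":  deque(),   # everything else gets whatever is left
  }

  def enqueue(klass, pkt):
      queues[klass].append(pkt)

  def dequeue():
      # Always drain the highest-priority non-empty queue first.
      for klass in ("voice", "video", "bulk"):
          if queues[klass]:
              return klass, queues[klass].popleft()
      return None

  # Bulk data arrives first, then a voice packet; the voice packet still
  # goes out next because priority, not arrival order, decides.
  for i in range(5):
      enqueue("bulk", f"data-{i}")
  enqueue("voice", "rtp-0")
  print(dequeue())   # -> ('voice', 'rtp-0') despite arriving last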

On the home front, the skills aren't necessarily available for that kind of finesse. The alternative is to throw bandwidth at the problem. With 100 Mbps, it is less about filling the pipe and more about serialization delay for the microbursts and reducing buffering.

Hence, the upstream ISP can over-subscribe the link quite nicely and typically not worry about full utilization (except for those who torrent their pipe full all the time). But for the family with kids who need the e-learning, and the parents WFH on video, trying to mix those various streams upstream can be problematic.
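
As a rough illustration of why that over-subscription usually works: if each subscriber is only bursting a small fraction of the time, the chance that enough of them burst at once to saturate the shared uplink is tiny. The subscriber count, activity factor, and ratio below are illustrative assumptions, not anyone's real engineering numbers:

  from math import comb

  # Toy statistical-multiplexing sketch. All parameters are assumptions.
  SUBSCRIBERS = 200          # homes sharing one upstream aggregation link
  P_ACTIVE = 0.05            # chance a given home is bursting at full rate
  CAPACITY = 30              # uplink absorbs this many simultaneous bursts
                             # (roughly 200/30, i.e. about 7:1 oversubscription)

  def p_overload(n, p, cap):
      """P(more than `cap` of `n` subscribers burst at the same instant)."""
      return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(cap + 1, n + 1))

  print(f"P(shared uplink saturated) ~ {p_overload(SUBSCRIBERS, P_ACTIVE, CAPACITY):.2e}")

The always-on torrenters are exactly the subscribers who break the independence assumption behind that arithmetic.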

And then head office decides to push out Microsoft Windows updates, or the kids' computers download the latest 100 GB games, so the downstream gets 'fuller', and the returning ACKs take up upstream bandwidth.
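
That ACK traffic is easy to underestimate on an asymmetric link. A rough estimate, assuming full-size segments, delayed ACKs covering every other segment, and ~40-byte pure ACKs before layer-2 framing (all textbook assumptions, not measurements):

  # Rough upstream cost of ACKing a big download. MSS, ACK size, and the
  # delayed-ACK ratio are typical-textbook assumptions, not measurements.

  DOWNLOAD_MBPS = 100        # updates / game download saturating downstream
  MSS_BYTES = 1460           # payload per full-size TCP segment
  ACK_BYTES = 40             # pure ACK (IPv4 + TCP headers, no options/L2)
  SEGMENTS_PER_ACK = 2       # delayed ACKs: one ACK per two segments

  segments_per_sec = DOWNLOAD_MBPS * 1_000_000 / 8 / MSS_BYTES
  acks_per_sec = segments_per_sec / SEGMENTS_PER_ACK
  ack_mbps = acks_per_sec * ACK_BYTES * 8 / 1_000_000

  print(f"{segments_per_sec:,.0f} segments/s -> {acks_per_sec:,.0f} ACKs/s "
        f"~ {ack_mbps:.2f} Mbps of upstream just for ACKs")

That works out to well over 1 Mbps of pure ACKs; on a 3 Mbps uplink that is getting on for half the pipe consumed before a single video frame goes out, and per-packet layer-2 overhead only makes it worse.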

So the trick with high-bandwidth pipes is that you reduce the amount of time the pipe is full. High serialization rates. High mixing rates. Low packet buffering. High-priority packets find it easier to get in/out without having to apply QoS.

In other words, when you talk to network engineers, you'll hear them talk about elephant flows, ACKs, high-priority traffic, micro-bursts, ... each requiring slightly different traffic management on low- to mid-bandwidth connections.

When Cisco was first getting their IP telephony system working on their own campus, they found that they were still losing packets and having packet delivery problems, even with their high-end 4500 and 6500 switches in the floor wiring closets. Microbursts and buffers.
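
A crude sketch of that failure mode (the port speeds, burst length, and buffer depth are made-up illustrative numbers; real switch forwarding is more complicated, but the arithmetic is the same): traffic arrives in a burst faster than the egress port can drain it, the shallow per-port buffer fills, and the tail of the burst gets dropped even though average utilization looks low.

  # Toy microburst model: a burst arrives 10x faster than the egress port
  # can drain it, into a shallow per-port buffer. All numbers illustrative.

  INGRESS_MBPS = 10_000      # burst arrives at an aggregate 10 Gbps
  EGRESS_MBPS = 1_000        # access port drains at 1 Gbps
  BUFFER_PKTS = 64           # shallow per-port packet buffer
  BURST_PKTS = 1_000         # back-to-back full-size packets in the burst

  drain_per_arrival = EGRESS_MBPS / INGRESS_MBPS   # packets drained between arrivals

  queued, dropped = 0.0, 0
  for _ in range(BURST_PKTS):
      queued = max(0.0, queued - drain_per_arrival)  # egress keeps draining
      if queued < BUFFER_PKTS:
          queued += 1                                # packet fits in the buffer
      else:
          dropped += 1                               # buffer full: tail drop

  print(f"{dropped}/{BURST_PKTS} packets tail-dropped ({dropped/BURST_PKTS:.0%})")

Averaged over a second the port can look nearly idle and still drop most of a millisecond-scale burst; the cures are deeper buffers, a faster egress, or both.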

Many of these problems fade away (they don't disappear, they just fade away) on high-speed links.

So, having repeatedly repeated myself multiple times: it isn't about high-speed links purely to saturate them all the time; it comes down to serialization delay, low buffering, hot-potatoing the packets in/out as quickly as possible, and letting the upstream ISP over-subscribe the whole system. And hoping they don't choke the packets in their network.

Statistical multiplexing, yeah man!

