nanog mailing list archives

Re: 10g residential CPE


From: Mel Beckman <mel () beckman org>
Date: Sat, 26 Dec 2020 19:49:52 +0000

The thing is that the pandemic has changed the game on the ground: there is an actual feature differentiator to be had. 
But having dealt with the Linksys folks in the past, I don't hold out much hope that they'll take advantage of it. The 
software development side was a vast black hole where time stood still. It seems the entire industry is like that.

Michael,

Even 100 Mbps Internet is fine for Zoom, as long as the uplink speed is at least 10 Mbps. The average Zoom session 
requires 2 Mbps up and down, and even for the lavish six-screen executive sessions, 6 Mbps is plenty. So arguing 
that 10 GbE is necessary because “the pandemic has changed the game on the ground” is silly.

https://support.zoom.us/hc/en-us/articles/204003179-System-requirements-for-Zoom-Rooms#h_b48c2bfd-7da0-4290-aae8-784270d3ab3f
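
(Back-of-envelope, using Zoom's published numbers above: a six-screen session at 6 Mbps takes 6% of a 100 Mbps downlink, 
so a dozen simultaneous sessions still fit with headroom. The tighter constraint is the 10 Mbps uplink, which covers 
roughly three 3 Mbps outgoing HD video streams.)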

So, sorry, 10 GbE is a hobbyist's fantasy, not a marketable product. If hobbyists want 10 GbE, let them pay for it like 
the rest of us, and let them play CoD from inside a freezing data center :)

 -mel

On Dec 26, 2020, at 11:12 AM, Filip Hruska <fhr () fhrnet eu> wrote:

I wouldn't rely on these numbers too much; your testing methodology is flawed.
People don't expect RING nodes to be used as speedtest servers, so they are usually not connected to high-speed 
networks.

Using a classic speedtest.net application (web or CLI) would make much more sense, given those servers are actually 
connected to high-speed Internet and are tuned to achieve such speeds - which is much more akin to how the most 
bandwidth-demanding traffic (streaming, game downloads, system updates from CDNs) behaves.
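
For example, a quick check with the speedtest-cli tool (a sketch, assuming a Linux box with pip available; the exact 
output varies by version, but --simple prints just the ping, download and upload figures):

$ pip install speedtest-cli
$ speedtest-cli --simple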

It's certainly possible to get 1G+ single-stream over >10 ms RTT connections - the buffers are certainly not THAT small 
for it to be a problem - not to mention that game distribution platforms usually open multiple connections to maximise 
bandwidth utilisation.
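
Back-of-envelope: the window needed to fill a pipe is bandwidth × RTT, so 1 Gbit/s × 10 ms = 1.25 MB - comfortably 
within what window scaling allows. And to mimic the multi-connection behaviour with the same tool as in the test below, 
iperf's -P flag opens parallel streams (server name taken from Baldur's test):

gigabit@gigabit01:~$ iperf -c netnod01.ring.nlnog.net -P 4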

Re 85KB: that's just the initial window size; it will grow, given that TCP window scaling is enabled (the default on 
modern Linux).
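
A quick way to confirm (a sketch for a Linux host; the second sysctl shows the min/default/max receive buffer in bytes, 
and the exact values vary by kernel):

$ sysctl net.ipv4.tcp_window_scaling
net.ipv4.tcp_window_scaling = 1
$ sysctl net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096 131072 6291456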

Filip


On 26 December 2020 19:14:13 CET, Baldur Norddahl <baldur.norddahl () gmail com> wrote:


On Sat, 26 Dec 2020 at 18:55, Mikael Abrahamsson <swmike () swm pp se> wrote:
On Sat, 26 Dec 2020, Baldur Norddahl wrote:

It is true there have been TCP improvements but you can very easily verify
for yourself that it is very hard to get anywhere near 1 Gbps of actual
transfer speed to destinations just 10 ms away. Try the nlnog ring network
like this:

gigabit@gigabit01:~$ iperf -c netnod01.ring.nlnog.net
------------------------------------------------------------
Client connecting to netnod01.ring.nlnog.net, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 185.24.168.23 port 50632 connected with 185.42.136.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   452 MBytes   379 Mbits/sec

Why would you use just 85KB of TCP window size?

That's not a problem of buffering (or lack thereof) along the path; that's just not enough TCP window for long-RTT, 
high-speed transfers.
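
(For what it's worth, iperf's -w flag requests a larger socket buffer, e.g. iperf -c netnod01.ring.nlnog.net -w 4M, 
though the kernel caps the buffer at net.core.rmem_max/wmem_max, so those sysctls may need raising first.)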

That is just the starting window size. It is also the default, and I am not going to tune the connection, because no such 
tuning will occur when you do your next faraway download and wonder why it is so slow.

If you do the math you will realise that 379 Mbps at 10 ms is impossible with an 85 KB window.
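
(Quick check: a fixed 85 KB window over a 10 ms RTT caps throughput at roughly 87380 bytes × 8 / 0.010 s ≈ 70 Mbit/s, 
far below the 379 Mbit/s measured - so the window clearly grew during the transfer.)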

I demonstrated that it is about buffers by showing that the same download, from a server that paces its traffic, indeed 
gets the full 930 Mbps with exactly the same settings, including the starting window size, and over the same path 
(Copenhagen to Stockholm).
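
For anyone who wants to try sender-side pacing themselves, a minimal sketch on a Linux sender (assumes root and kernel 
4.9+; "eth0" is a placeholder interface name; fq is the pacing-capable qdisc, and the BBR congestion control paces by 
design):

# tc qdisc replace dev eth0 root fq
# sysctl -w net.ipv4.tcp_congestion_control=bbr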

Regards

Baldur




--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
