nanog mailing list archives

Re: 10g residential CPE


From: Baldur Norddahl <baldur.norddahl () gmail com>
Date: Sat, 26 Dec 2020 20:40:48 +0100

It was not meant to be a test as such, just a demonstration. Netnod to
Bahnhof is full speed and the third server is mine, so all three servers
can deliver at least 1G.

Finding a speedtest.net server at least 1000 km away that will show full
speed at 1G is hard, mainly because most such servers have at least a 10G
NIC, and that is not an advantage here: a 10G sender bursts at line rate
into whatever small buffers sit along the path.

It is possible to get 1G at 10 ms; I demonstrated that myself with the
test to Bahnhof. It is also possible to be limited to 30%, as the test to
Netnod shows.


On Sat, 26 Dec 2020 at 20:10, Filip Hruska <fhr () fhrnet eu> wrote:

I wouldn't rely on these numbers too much, your testing methodology is
flawed.
People don't expect RING nodes to be used as speedtest servers and so they
are usually not connected to high speed networks.

Using a classical speedtest.net (web or CLI) application would make much
more sense, given that those servers are actually connected to high-speed
Internet and are tuned to achieve such speeds - which is much more akin to
how the most bandwidth-demanding traffic (streaming, game downloads, system
updates from CDNs) behaves.

It's certainly possible to get 1G+ single stream over >10 ms RTT
connections - the buffers are certainly not THAT small for it to be a
problem - not to mention that game distribution platforms usually open
multiple connections to maximise bandwidth utilisation.

Re 85KB: that's just the initial window size, which will grow as long as
TCP window scaling is enabled (the default on modern Linux).
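For anyone who wants to check this on their own box, the relevant Linux knobs can be inspected as below (the sysctl names are standard; the values in the comments are typical defaults, not figures from this thread):

```shell
# Window scaling (RFC 7323) must be on for the window to grow past 64 KB;
# it has been the Linux default for many years:
sysctl net.ipv4.tcp_window_scaling   # net.ipv4.tcp_window_scaling = 1

# The third field of tcp_rmem is the ceiling (in bytes) the receive
# window can auto-tune up to, e.g. "4096 131072 6291456":
sysctl net.ipv4.tcp_rmem
```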

Filip


On 26 December 2020 19:14:13 CET, Baldur Norddahl <
baldur.norddahl () gmail com> wrote:



On Sat, 26 Dec 2020 at 18:55, Mikael Abrahamsson <swmike () swm pp se> wrote:

On Sat, 26 Dec 2020, Baldur Norddahl wrote:

It is true there have been TCP improvements, but you can very easily
verify for yourself that it is very hard to get anywhere near 1 Gbps of
actual transfer speed to destinations just 10 ms away. Try the nlnog ring
network like this:

gigabit@gigabit01:~$ iperf -c netnod01.ring.nlnog.net
------------------------------------------------------------
Client connecting to netnod01.ring.nlnog.net, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 185.24.168.23 port 50632 connected with 185.42.136.5 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   452 MBytes   379 Mbits/sec

Why would you use just 85 KB of TCP window size?

That's not a problem of buffering (or lack thereof) along the path, that's
just not enough TCP window for a long-RTT high-speed transfer.


That is just the starting window size. It is also the default, and I am
not going to tune the connection, because no such tuning will occur when you
do your next far-away download and wonder why it is so slow.

If you do the math you will realise that 379 Mbps at 10 ms is impossible
with an 85 KB window.
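The arithmetic behind that claim: a single TCP flow can have at most one window in flight per round trip, so the window caps the throughput. A quick sketch of that bound, using the window size and RTT from the iperf run above:

```shell
# Max single-stream TCP throughput = window / RTT.
# 85.0 KByte (87040 bytes) in flight per 10 ms round trip:
awk 'BEGIN { printf "max throughput: %.0f Mbit/s\n", 87040 * 8 / 0.010 / 1e6 }'
# prints "max throughput: 70 Mbit/s" -- far below the 379 Mbit/s measured,
# so the window must have scaled up during the transfer.
```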

I demonstrated that it is about buffers by showing that the same download
from a server that paces the traffic does get the full 930 Mbps with
exactly the same settings, including the starting window size, and over the
same path (Copenhagen to Stockholm).
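For reference, the kind of sender-side pacing described here can be approximated on a Linux sender with the fq qdisc (the interface name eth0 and the 950mbit rate are placeholders for illustration, not details from the thread):

```shell
# fq paces each flow smoothly instead of emitting line-rate bursts that
# can overflow small buffers along the path ("eth0" is a placeholder
# for the actual egress interface):
tc qdisc replace dev eth0 root fq

# Optionally cap the per-flow pacing rate just below the bottleneck:
tc qdisc replace dev eth0 root fq maxrate 950mbit
```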

Regards

Baldur





