nanog mailing list archives

Re: Proving Gig Speed


From: Mike Hammett <nanog () ics-il net>
Date: Wed, 18 Jul 2018 08:24:55 -0500 (CDT)

More and more speed test and quality-reporting sites/services (including those internal to the big content players) seem to be about blaming the ISP 
rather than giving the ISP usable information to fix the problem. 




----- 
Mike Hammett 
Intelligent Computing Solutions 
http://www.ics-il.com 

Midwest-IX 
http://www.midwest-ix.com 

----- Original Message -----

From: "K. Scott Helms" <kscott.helms () gmail com> 
To: "mark tinka" <mark.tinka () seacom mu> 
Cc: "NANOG list" <nanog () nanog org> 
Sent: Wednesday, July 18, 2018 7:40:31 AM 
Subject: Re: Proving Gig Speed 

Agreed, and it's one of the fundamental problems: a speed test can (and 
can only) measure the speed from point A to point B (often both inside the 
service provider's network), when the customer is concerned with traffic to 
and from point C, off in someone else's network altogether. It's one of the 
reasons I think we have to get more comfortable and more collaborative 
with the CDN providers as well as the large sources of traffic. Netflix, 
YouTube, and I'm sure others have their own consumer-facing performance 
testing that is _much_ more applicable to most consumers than the 
"normal" technician test-and-measurement approach, or even the service 
assurance you get from normal performance monitoring. What I'd really 
like to see is a way to measure network performance from the CO/head 
end/PoP and also get consumer-level reporting from these kinds of 
services. If Google/Netflix/Amazon Video/$others would get on board with 
this idea, it would make all our lives simpler. 

Providing individual users' stats is nice, but if these guys really want to 
improve service, it would be great to get aggregate reporting by ASN. You 
can get a rough idea by looking at your overall graph from Google, but it's 
lacking a lot of detail, and there's no simple way to compare it to a head 
end/CO test versus specific end users. 

https://www.google.com/get/videoqualityreport/ 
https://fast.com/# 
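The per-ASN roll-up wished for above could be as simple as grouping per-test records by origin ASN and reporting sample counts and medians. A minimal sketch, with invented sample data (the ASNs and Mbps figures are illustrative, not from any real feed):

```python
# Hypothetical aggregate-by-ASN report: per-test (ASN, measured Mbps)
# records rolled up into per-ASN sample counts and medians.
from collections import defaultdict
from statistics import median

tests = [  # fabricated sample records for illustration
    (64512, 87.0), (64512, 92.5), (64512, 45.1),
    (64513, 910.0), (64513, 640.0),
]

by_asn = defaultdict(list)
for asn, mbps in tests:
    by_asn[asn].append(mbps)

for asn, samples in sorted(by_asn.items()):
    print(f"AS{asn}: n={len(samples)} median={median(samples):.1f} Mbps")
```

Medians resist the skew from a few users on congested Wi-Fi, which is why they'd be more useful than raw averages for comparing an ASN against a head end/CO baseline.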



On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka <mark.tinka () seacom mu> wrote: 



On 18/Jul/18 14:00, K. Scott Helms wrote: 


That's absolutely a concern, Mark, but most of the CPE vendors that support 
doing this provide enough juice to keep up with their maximum 
forwarding/routing data rates. I don't see 10 Gbps residential Internet 
service being normal for quite a long time yet, even if the port itself is 
capable of 10 Gbps. We have this issue today with commercial customers, but 
it's generally not as much of a problem, because commercial CPE get 
their usage graphed and have more capabilities for testing. 


I suppose the point I was trying to make is when does it stop being 
feasible to test each and every piece of bandwidth you deliver to a 
customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 3.2Gbps, 
or 5.1Gbps... basically, the rabbit hole. 

Like Saku, I am more interested in the other fundamental metrics that can 
impact throughput, such as latency, packet loss and jitter. Bandwidth 
itself is easy to measure with your choice of SNMP poller + 5 minutes. But 
when you're trying to explain to a simple customer buying 100Mbps that a 
break in their Skype video cannot be diagnosed with a throughput speed test, 
they don't/won't get it. 
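The "SNMP poller + 5 minutes" arithmetic above is just a delta between two reads of an interface octet counter. A minimal sketch (counter values and interval are illustrative; `ifHCInOctets` is the usual 64-bit counter):

```python
# Average bandwidth from two SNMP reads of an interface octet counter
# (e.g. ifHCInOctets) taken five minutes apart. Sample values invented.

COUNTER_MAX = 2**64  # ifHCInOctets is a 64-bit counter


def average_bps(octets_t0, octets_t1, interval_seconds):
    """Average bits per second between two counter samples,
    tolerating a single counter wrap."""
    delta_octets = (octets_t1 - octets_t0) % COUNTER_MAX
    return delta_octets * 8 / interval_seconds


# Two samples 300 s (5 minutes) apart:
rate = average_bps(1_000_000_000, 4_750_000_000, 300)
print(f"{rate / 1e6:.0f} Mbps")  # 100 Mbps
```

Which is also why this number says nothing about loss or jitter: a 5-minute average happily hides the microbursts that break a Skype call.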

In Africa, for example, customers in only one of our markets are 
obsessed with speed tests. But not against speed test servers that are 
in-country... they want to test against servers that sit in Europe, North 
America, South America and Asia-Pac. With the latency averaging between 
140ms - 400ms across all of those regions from the source, the amount of 
energy spent explaining to customers that there is no way they can saturate 
their delivered capacity beyond a couple of Mbps using Ookla and friends is 
energy I could spend drinking wine and having a medium-rare steak instead. 
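The "couple of Mbps" ceiling above falls straight out of the TCP window math: a single flow can carry at most one window of data per round trip. A sketch, assuming a classic 64 KB window with no window scaling (the RTTs are the ones quoted above):

```python
# Why a distant speed test server can't saturate a fast link: a single
# TCP flow is bounded by window / RTT. Window size is an assumption
# (64 KB, i.e. no window scaling in play).

def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on single-flow TCP throughput in Mbps."""
    return window_bytes * 8 / rtt_seconds / 1e6


WINDOW = 64 * 1024  # bytes

for rtt_ms in (10, 140, 400):
    mbps = max_tcp_throughput_mbps(WINDOW, rtt_ms / 1000)
    print(f"RTT {rtt_ms:>3} ms -> at most {mbps:6.2f} Mbps per flow")
```

At 140 ms that bound is under 4 Mbps, and at 400 ms it is well under 2 Mbps, regardless of how much capacity was delivered. Window scaling and multi-connection testers raise the ceiling, but the RTT dependence never goes away.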

For us, at least, aside from going on a mass education drive in this 
particular market, the ultimate solution is just getting all that content 
localized in-country or in-region. Once the latency comes down and the 
resources are available locally, the whole speed test debacle will easily 
fall away, because what drives these speed test results is simply how 
physically far away the content is. Is this an easy task - hell no; but 
slamming your head against a wall over and over is no fun either. 

Mark. 


