Interesting People mailing list archives

Why I'm Skeptical of the FCC's Call for User Broadband Testing


From: Dave Farber <dave () farber net>
Date: Thu, 11 Mar 2010 18:15:29 -0500


Begin forwarded message:

From: Jason Livingood <jason_livingood () cable comcast com>
Date: March 11, 2010 5:43:18 PM EST
To: Dave Farber <dave () farber net>, ip <ip () v2 listbox com>, lauren () vortex com
Subject: Re: [IP] Why I'm Skeptical of the FCC's Call for User Broadband Testing


Dave: Lauren raises some fair points below. My additional comments are inline (I have cut some of his text so this isn't too long a message).

- Jason Livingood

From: Lauren Weinstein <lauren () vortex com>
<snip>
After inspecting the associated site and testing tools, I must admit
that I am extremely skeptical about the overall value of the data
being collected by their project, except in the sense of the most
gross of statistics.

[JL] I recommend the Commission add to their form a question about what OS is being used on the customer’s PC, and whether their LAN connection is wired or wireless. In many cases today, I observe broadband users testing over WiFi, where things such as distance and interference come into play, in addition to which flavor of WiFi is being used and whether any WiFi security is configured. There are countless other LAN and PC-related things that dramatically influence speed results (web browser, memory, other apps running, HD space, other computers in use, etc.).
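
To make the point concrete, here is a minimal Python sketch of the kind of client-environment metadata a test could record alongside each result (the field names and the self-reported connection type are assumptions, not anything the FCC test actually collects):

    # Minimal sketch (field names and the self-reported connection type
    # are assumptions): client-environment metadata a speed test could
    # record alongside each result.
    import platform

    def client_environment(connection_type):
        # connection_type is self-reported, e.g. "wired" or "wifi"
        return {
            "os": platform.system(),           # e.g. "Windows", "Linux", "Darwin"
            "os_version": platform.release(),
            "machine": platform.machine(),     # CPU architecture
            "connection_type": connection_type,
        }

    print(client_environment("wifi"))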

In random tests against my own reasonably well-calibrated tools, the
FCC tools showed consistent disparities of 50% to 85%!  Why isn't this
surprising?

[JL] I tend to agree with you, and I think this at least partially explains why the comScore results cited by the Commission show a difference similar to what you observe (there are other reasons as well).

<snip>
The FCC testing regime ( http://bit.ly/9IuQeC [FCC] ) provides for no
control related to other activity on users' connections.  How many
people will (knowingly or not) run the tests while someone else in the
home or business is watching video, downloading files, or otherwise
significantly affecting the overall bandwidth behavior?

[JL] Very true! Those things can obviously greatly impact speed measurements.
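
As a rough illustration, a test client could at least warn about this. The sketch below samples interface byte counters for a few seconds before measuring; it assumes the third-party psutil library, and the 1 Mbps warning threshold is arbitrary:

    # Minimal sketch, assuming the third-party psutil library: sample the
    # interface byte counters for a few seconds before testing; if traffic
    # is already flowing, the measurement will understate capacity.
    import time
    import psutil

    def background_traffic_bps(sample_secs=3.0):
        before = psutil.net_io_counters()
        time.sleep(sample_secs)
        after = psutil.net_io_counters()
        octets = ((after.bytes_recv - before.bytes_recv)
                  + (after.bytes_sent - before.bytes_sent))
        return octets * 8 / sample_secs        # bits per second

    if background_traffic_bps() > 1_000_000:   # arbitrary 1 Mbps threshold
        print("Warning: other traffic detected; results may be skewed.")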

No obvious clues are provided to users regarding the underlying server
testing infrastructure.  As anyone who uses speed tests is aware, the
location of servers used for these tests will dramatically affect
results.  The ability of the server infrastructure to control for
these disparities can be quite limited depending on ISPs' own network
topologies.

[JL] It seems essential to understand how the test selects between Ookla and M-Labs, how many servers are behind each test, how those servers are configured, whether they are doing other tasks, and how the tests are configured (number of connections, file sizes used, etc.). Even if some of those things may be disclosed on Ookla’s or M-Labs’ websites, it seems like something worth specifying in FAQs on the same site as the test itself. Other than the initial selection decision-making, the other factors mentioned are major influencers on the accuracy of any speed measurement system.
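
For illustration, the sketch below shows how strongly one of those configuration choices, the number of parallel connections, shapes a naive throughput measurement (TEST_URL is a placeholder, not any real test server, and this is not how Ookla or M-Labs actually measure):

    # Minimal sketch (TEST_URL is a placeholder, not a real test server):
    # the same naive download measurement reports different speeds
    # depending on how many parallel TCP connections it opens.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TEST_URL = "http://example.com/testfile"   # hypothetical test object

    def fetch(url):
        with urllib.request.urlopen(url) as resp:
            return len(resp.read())            # bytes transferred

    def throughput_mbps(connections):
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=connections) as pool:
            total = sum(pool.map(fetch, [TEST_URL] * connections))
        elapsed = time.monotonic() - start
        return total * 8 / elapsed / 1e6

    # A single connection often reports less than several in parallel.
    for n in (1, 4):
        print(n, "connection(s):", round(throughput_mbps(n), 1), "Mbps")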

And of course, on-demand, manually-run tests cannot provide any sort
of reasonable window into the wide variations in performance that
users commonly experience on different days of the week, times of day,
and so on.

[JL] Indeed, and such tests have a self-selection bias. In addition, the tests have no ability to determine whether the speed you are shown is close to your provisioned (marketed) speed. So there is some question as to what the resulting data will lead you to conclude. If everyone in a certain ZIP code shows an average of X speed, are we to conclude that is good or bad? Is it because they all subscribe to a service at Y speed (where Y>X), or is there a difference between what they think they should be getting and what they are getting (and your questions above dig into whether that is due to factors within the user’s control or within the ISP’s control)? And how can you control for the fact that many tests are likely to be run at peak hours?
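
A minimal sketch of the alternative: scheduled sampling across the day, normalized against the provisioned tier (PROVISIONED_MBPS and run_speed_test are placeholders standing in for a real subscription and a real measurement):

    # Minimal sketch (PROVISIONED_MBPS and run_speed_test are placeholders):
    # scheduled sampling across a day, normalized by the provisioned tier,
    # gives a picture a one-off manual test cannot.
    import time
    from datetime import datetime

    PROVISIONED_MBPS = 16.0      # hypothetical subscribed ("marketed") tier

    def run_speed_test():
        # Stand-in for any real measurement; returns Mbps.
        return 0.0

    for _ in range(24):          # one sample per hour for a day
        measured = run_speed_test()
        print("%s  %5.1f Mbps  (%3.0f%% of provisioned)" % (
            datetime.now().strftime("%H:%M"),
            measured,
            100 * measured / PROVISIONED_MBPS))
        time.sleep(3600)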

<snip>
ISPs may be justifiably concerned that the data collected from these
tests by this FCC effort may be unrepresentative in significant ways.

[JL] Indeed. I suspect we will all learn more next week about what direction this is all heading in.



-------------------------------------------
Archives: https://www.listbox.com/member/archive/247/=now
RSS Feed: https://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com
