Interesting People mailing list archives

Re: Why I'm Skeptical of the FCC's Call for User Broadband Testing


From: David Farber <dave () farber net>
Date: Thu, 11 Mar 2010 20:34:57 -0500



Begin forwarded message:

From: "John S. Quarterman" <jsq () quarterman org>
Date: March 11, 2010 4:59:16 PM EST
To: dave () farber net
Cc: "John S. Quarterman" <jsq () quarterman org>, "ip" <ip () v2 listbox com>, Lauren Weinstein <lauren () vortex com>
Subject: Re: [IP] Why I'm Skeptical of the FCC's Call for User Broadband Testing 

Dave: for IP.

From: Lauren Weinstein <lauren () vortex com>
Date: March 11, 2010 3:56:32 PM EST
To: dave () farber net
Subject: Why I'm Skeptical of the FCC's Call for User Broadband Testing

...

After inspecting the associated site and testing tools, I must admit
that I am extremely skeptical about the overall value of the data
being collected by their project, except in the sense of the grossest
of statistics.

In random tests against my own reasonably well-calibrated tools, the
FCC tools showed consistent disparities of 50% to 85%!  Why isn't this
surprising?

Because it's not relevant.

The differences between the relevant speeds, such as dialup, iPhone,
or MiFi speeds, 1.5 Mbps, 3 Mbps, 6 Mbps, 10 Mbps, and 100 Mbps,
are so large that a 50% to 85% disparity on a single test out of many
thousands is nothing.
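
To put a number on that, here is a minimal Python sketch, with made-up
tiers and error rates purely for illustration, of how a few wildly-off
tests wash out once thousands of them are aggregated:

  import random

  # Nominal service tiers in Mbps: dialup, cellular, common broadband rungs.
  TIERS = [0.056, 1.5, 3.0, 6.0, 10.0, 100.0]

  def nearest_tier(mbps):
      # Classify one measurement into the closest nominal tier.
      return min(TIERS, key=lambda t: abs(t - mbps))

  random.seed(1)
  true_speed = 6.0
  tests = []
  for _ in range(5000):
      if random.random() < 0.05:
          factor = random.uniform(0.15, 0.50)   # a badly skewed test, 50-85% low
      else:
          factor = random.uniform(0.80, 1.00)   # a typical last-mile shortfall
      tests.append(true_speed * factor)

  tests.sort()
  median = tests[len(tests) // 2]
  print(nearest_tier(median))   # 6.0: the broad class survives the outliers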

Even more to the point, tests by multiple subscribers to the same
service will give a pretty good idea of what that service is really
providing.  Even if some users test while somebody else is using
the same connection, others will not, so you can still get a good
sense of the maximum speed being provided.
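
A quick sketch of that idea, again with hypothetical numbers: since
concurrent use only pushes measurements down, never up, a high
percentile of many users' tests approximates the speed actually
being provided.

  def provisioned_estimate(measurements_mbps, percentile=0.95):
      # Contention only pushes results down, never up, so a high percentile
      # of the sample approximates the maximum speed the service delivers.
      ordered = sorted(measurements_mbps)
      idx = min(int(len(ordered) * percentile), len(ordered) - 1)
      return ordered[idx]

  # Tests from subscribers to the same nominal 6 Mbps service; some ran
  # while someone else in the household was using the connection.
  samples = [5.8, 2.1, 5.9, 4.4, 6.0, 3.2, 5.7, 5.9, 1.8, 5.8]
  print(provisioned_estimate(samples))   # 6.0: close to what is really provided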

No obvious clues are provided to users regarding the underlying server
testing infrastructure.  As anyone who uses speed tests is aware, the
location of servers used for these tests will dramatically affect
results.  The ability of the server infrastructure to control for
these disparities can be quite limited depending on ISPs' own network
topologies.

Without the drama: most bottlenecks are in the last mile to the user,
and the few-percent difference contributed by the long-haul
infrastructure is irrelevant for this purpose.

And of course, on-demand, manually-run tests cannot provide any sort
of reasonable window into the wide variations in performance that
users commonly experience on different days of the week, times of day,
and so on.

Given enough such tests across a range of users, yes, they can.
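
A rough sketch of that aggregation (the record format and the numbers
are invented):

  from collections import defaultdict

  # Each record: (weekday 0-6, hour 0-23, measured Mbps).
  records = [(0, 9, 5.8), (0, 20, 3.1), (0, 20, 3.3), (5, 9, 5.7), (5, 20, 2.9)]

  buckets = defaultdict(list)
  for weekday, hour, mbps in records:
      buckets[(weekday, hour)].append(mbps)

  # With enough users testing at different times, every slot fills in, and
  # the evening slowdown no single manual test could show becomes visible.
  for slot in sorted(buckets):
      vals = buckets[slot]
      print(slot, round(sum(vals) / len(vals), 2))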

Users are required to provide their street address information with
the tests, but there's nothing stopping anyone from entering any
address that they might wish, suggesting that such data could often be
untrustworthy compared with (much coarser) already available IP
address-based location info.

One would assume the FCC knows this and will do some cross-checks.
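
One plausible form such a cross-check could take, sketched below;
geolocate_state() here stands in for any coarse IP-to-region lookup,
not a real API:

  def geolocate_state(ip):
      # Stand-in for a coarse IP-based location lookup; toy table only.
      lookup = {"192.0.2.10": "GA", "198.51.100.7": "CA"}
      return lookup.get(ip)

  def plausible(reported_state, ip):
      located = geolocate_state(ip)
      # An unknown location can't refute the claim; a mismatch gets flagged.
      return located is None or located == reported_state

  print(plausible("GA", "192.0.2.10"))    # True: address consistent with IP
  print(plausible("NY", "198.51.100.7"))  # False: self-reported address suspect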

Lauren's objections illustrate the problem with most Internet metrics:
they're all about detailed precision.  That's great if you're trying
to, for example, tune individual routers.

For policy, what is needed is a large scale view that will show
much broader information.

As Lauren says:

 While these tests under this methodology may serve to help categorize
 users into very broad classes of Internet service tiers,

And that's the point, isn't it?

Especially compared to what the providers claim they're delivering.

-jsq





