Interesting People mailing list archives

Re: a wise word from a long time network person -- Mercury News report on Stanford hearing


From: David Farber <dave () farber net>
Date: Mon, 21 Apr 2008 16:42:39 -0700


________________________________________
From: Tony Lauck [tlauck () madriver com]
Sent: Saturday, April 19, 2008 1:48 PM
To: David Farber
Subject: Re: [IP] a wise word from a long time network person -- Mercury News report on Stanford hearing

There will always be the potential for congestion in *any* shared system
that is not grossly over-provisioned. This means there will always be the
possibility of congestion in any ISP's network if that ISP has the
slightest chance of running a viable business.  Therefore, and this is
the part where I'm sure Brett and I agree, there will *always* be the
necessity to manage congestion in an ISP's network.

I have no objection to Comcast's managing its network performance. My
objection has been to the *form* of Comcast's management, namely the
forging of RST packets. I have also objected to Comcast and others
demonizing particular application protocols or network users. I
particularly object to those who criticize the Internet Architecture or
IETF without a thorough understanding of the technical issues. I first
began working in the area of network congestion management in 1977 when
I became chief network architect at Digital Equipment Corporation. In
the course of my career at DEC I was instrumental in steering a number
of researchers into this area, including Raj Jain and K.K. Ramakrishnan,
as well as developing several patents of my own. At the time I told
these researchers that this could be a career field if they wanted and
not just a project.

While many aspects of network performance have become engineering
issues, there are still others that are more properly research issues.
Because of the complexity of this area, in my opinion the FCC would be
ill advised to promulgate regulations that affect congestion management.
On the other hand, I would have no problem with the FTC enforcing
transparent customer agreements.

With dedicated links such as DSL, congestion can be, should be, and is
managed at the access multiplexer or router. With dedicated links, congestion
appears in the form of a queue inside an intelligent device. At this
point, IETF congestion management mechanisms come into play, and
performance can be managed by queue discipline and discard policy.
However the actual policies are not specified by the IETF, because they
are what determine "fair" access. The entire concept of "fair" access
depends on what constitutes a "user" and what constitutes "fair" service
for that user. This is something that is determined jointly by the ISP
and the customer when a customer signs up for network service. All I ask
is that these policies be something that ordinary customers as well as
network experts can understand. This precludes policies that allow only
"reasonable" usage or that disconnect customers for "excessive" usage,
without defining these terms. In addition, if usage is limited, then I
would expect the ISP to provide customers with simple tools to
monitor their usage.  These can be similar to the control panel usage
monitors provided by shared web hosting companies.
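The queue discipline and discard policy described above can be sketched in code. This is a toy illustration, not any ISP's or vendor's actual mechanism; the class name, the round-robin service order, and the per-user backlog limit are all illustrative assumptions:

```python
from collections import deque

class FairAccessQueue:
    """Toy per-customer queue discipline with a tail-drop discard policy.

    Each customer gets a bounded FIFO, and the scheduler serves customers
    round-robin, so one heavy user cannot starve the others. The per-user
    limit (max_backlog) is exactly the kind of explicit, published number
    that ordinary customers as well as network experts can understand.
    """

    def __init__(self, max_backlog=100):
        self.max_backlog = max_backlog
        self.queues = {}          # customer id -> deque of packets
        self.order = deque()      # round-robin service order
        self.dropped = 0

    def enqueue(self, customer, packet):
        q = self.queues.setdefault(customer, deque())
        if customer not in self.order:
            self.order.append(customer)
        if len(q) >= self.max_backlog:   # discard policy: per-user tail drop
            self.dropped += 1
            return False
        q.append(packet)
        return True

    def dequeue(self):
        """Serve the next non-empty customer queue in round-robin order."""
        for _ in range(len(self.order)):
            customer = self.order.popleft()
            q = self.queues[customer]
            if q:
                self.order.append(customer)   # stay in the rotation
                return customer, q.popleft()
            del self.queues[customer]         # idle: leave the rotation
        return None
```

The point of the sketch is that "fairness" here is entirely determined by two visible choices, the service order and the drop threshold, which is why the actual policy, not the IETF mechanism, is what the customer agreement needs to disclose.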

As Brett correctly points out, there is at least one other potential
bottleneck or cost accumulation point, namely the ISP backbone access
link(s). (Depending on geographic considerations, the cost of backbone
bandwidth may be more or less significant than last mile costs.) Routers
attached to backbone access links can use queue management disciplines
to enforce per-customer fairness, or this can be done at the access
router or access multiplexer. Alternatively, backbone access can be
monitored and users can be discouraged from excessive usage through
usage-based tariffs. All I ask is that these charges be open and that the
users have a simple way to monitor their usage.
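A usage monitor of the kind asked for above is a very small amount of code. This sketch assumes a monthly byte cap (the 250 GB figure is an illustration, not any real ISP's tariff) and mirrors the control-panel meters shared web hosts already provide:

```python
class UsageMonitor:
    """Toy transfer-usage meter an ISP could expose to its customers.

    The cap is an explicit, published number, and the customer can query
    their own consumption at any time -- the opposite of being cut off
    for undefined "excessive" use.
    """
    GB = 10**9

    def __init__(self, monthly_cap_gb=250):
        self.cap_bytes = monthly_cap_gb * self.GB
        self.used_bytes = 0

    def record(self, nbytes):
        """Account for nbytes of transfer (e.g. fed from flow records)."""
        self.used_bytes += nbytes

    def summary(self):
        """Return what a customer-facing control panel would display."""
        return {
            "used_gb": self.used_bytes / self.GB,
            "cap_gb": self.cap_bytes / self.GB,
            "percent": round(100.0 * self.used_bytes / self.cap_bytes, 1),
            "over_cap": self.used_bytes > self.cap_bytes,
        }
```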

Brett has raised a third issue, which is that distributed uploading by
P2P networks is inefficient and uneconomic compared with more
centralized approaches. This may be true in some instances, particularly
with rural networks. However, when looking at the relative costs of
multiple approaches it is important to consider *all* the costs
involved. These include more than the uplink costs associated with P2P
networks. They include the costs associated with uploading data to
traditional web and ftp servers, the costs of running these servers and
the costs of bandwidth these servers use in sending files. In some cases
P2P mechanisms will be more efficient than centralized servers. Two
examples come immediately to mind:  (1) A home user "publishing" a file
that is never accessed. If a centralized server is used there will be a
totally unnecessary network transfer uploading the file.  (2) A home
user sharing an extremely popular file with many other ISP customers.
Here the P2P network may reduce the number of copies downloaded over the
ISP's backbone access links.
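The two examples above can be made concrete with a deliberately simplified cost model. This counts only file copies crossing the ISP's backbone access link, ignores swarm overhead, and assumes locality-aware P2P where peers inside the ISP can serve each other; the function and its parameters are illustrative assumptions, not measurements:

```python
def backbone_transfers(downloads_in_isp, seeds_in_isp, scheme):
    """Toy count of file copies crossing an ISP's backbone access link.

    centralized: the publisher uploads the file once to an external
    server, and every local download then crosses the backbone.
    p2p-local:   once enough local copies exist to seed the swarm,
    later local downloads stay inside the ISP's network.
    """
    if scheme == "centralized":
        upload = 1                       # publisher pushes file to the server
        return upload + downloads_in_isp
    if scheme == "p2p-local":
        return min(downloads_in_isp, seeds_in_isp)
    raise ValueError("unknown scheme: %s" % scheme)
```

For the never-accessed file (zero downloads) the centralized scheme still costs one wasted upload while P2P costs nothing; for a popular file with a thousand local downloads and one seed, the centralized scheme crosses the backbone about a thousand times while P2P crosses it once. Real deployments sit somewhere between these extremes.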

I am encouraged by Comcast's newly stated intention to cooperate with
BitTorrent. There are significant economies to be realized if all the
players cooperate. Unfortunately, other factors may come into play,
for example copyright issues that may prevent ISPs from running their
own P2P caching clients.

Tony Lauck
www.aglauck.com



David Farber wrote:
________________________________________
From: Brett Glass [brett () lariat net]
Sent: Friday, April 18, 2008 10:06 PM
To: David Farber; ip
Subject: Re: [IP] a wise word from a long time network person -- Mercury News report on Stanford hearing

At 11:24 AM 4/18/2008, Tony Lauck wrote:

> Comcast's technical problems are not with the Internet, they are with
> their DOCSIS 2.0 cable modems, which have limited shared upstream
> bandwidth and an ineffective multiple access protocol. Presumably these
> problems will go away when Comcast finally upgrades to DOCSIS 3.0. Other
> last mile network technologies such as DSL and fiber do not have these
> problems.

This is incorrect. The congestion problems that affect DOCSIS over
the neighborhood shared cable affect DSL at the DSLAM. And
experience in Japan has demonstrated that adding more bandwidth --
at least up to 100 Mbps per user -- does nothing to satisfy P2P's
appetite for bandwidth. In fact, if there is a limit to that
appetite, no one can say what it is... because it has never been observed.

Also, the problem of cost shifting via P2P is not dependent upon
the last mile technology at all. It affects all ISPs in proportion
to their upstream bandwidth costs, as I demonstrated during the hearing.

I wish I'd had more time to speak. Professor Lessig's talk was
disappointing in that it was short on facts and very long on
rhetoric. Many of the assertions were unsupported, and there were
some ad hominem arguments against Internet providers. (At one
point, he likened them to bloodthirsty tigers.) His slides had no
graphs, charts, figures, or data from credible sources -- just a
few quotes (Gerald Faulhaber was quoted) and words from his talk.
In short, I would rate it as a good sermon... but a poor argument.
You've heard the old lawyers' saying: "When the law is against you,
pound on the facts. When the facts are against you, pound on the
law. When both are against you, pound on the table."

With all due respect to Larry, whom I admire and who is a truly
brilliant lawyer, this particular talk pounded just about entirely
on the table.

I was potentially a viable opponent, and could have provided a
counterargument to every one of Dr. Lessig's points. But the
structure of the forum prevented this. Larry rambled for 50
minutes, putting the meeting behind schedule. I spoke as fast as I
could for eight very rushed ones. Given the cost of flying from
Wyoming, I estimate that I paid at least $100 per minute to speak
before the Commissioners -- not counting the two and a half days of
work I lost by coming to speak. Larry, on the other hand, was being
paid. (As a Stanford professor, he makes more than I do as a rural
wireless broadband provider.)

Was it worth it? I'm not sure, but when I received the last minute
call I realized that I had no choice. I drove to the Denver airport
that afternoon through a whiteout snowstorm and flew to California
to speak. The text of my prepared remarks, which I wrote on the
plane and of which I was able to deliver about half, is at

http://www.brettglass.com/FCC

on my Web site.

As Commissioner Robert McDowell pointed out during the hearing, I
was the sole representative of my entire industry who came to speak
at the hearing. No one would, or could, speak for me. And my
livelihood -- and my 15 years' mission to bring competitive
broadband where it never would be available otherwise -- was on the line.

If Larry had yielded me merely 5 minutes of the time consumed by
his speech -- which was full of long, dramatic pauses I couldn't
afford to make -- it would have been sufficient for me to make more
than a dozen additional points that I did not have the chance to
address at the hearing.

I would welcome the opportunity to engage Larry in a real,
substantive, unhurried debate on this issue.

--Brett Glass, Founder and Owner, LARIAT


-------------------------------------------
Archives: http://www.listbox.com/member/archive/247/=now
RSS Feed: http://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com



