Interesting People mailing list archives

Re: a wise word from a long time network person -- Mercury News report on Stanford hearing


From: David Farber <dave () farber net>
Date: Tue, 22 Apr 2008 08:38:33 -0700


________________________________________
From: Brett Glass [brett () lariat net]
Sent: Monday, April 21, 2008 9:43 PM
To: David Farber; ip
Subject: Re: [IP] Re: a wise word from a long time network person -- Mercury News report on Stanford hearing

At 05:42 PM 4/21/2008, Tony Lauck wrote:

There will always be the potential for congestion in *any* shared system
that is not grossly over configured. This means there will always be the
possibility for congestion in any ISP's network if that ISP has the
slightest chance of running a viable business.  Therefore, and this is
the part where I'm sure Brett and I agree, there will *always* be the
necessity to manage congestion in an ISP's network.

Yes, I do agree.

I have no objection to Comcast's managing its network performance. My
objection has been to the *form* of Comcast's management, namely the
forging of RST packets.

My objection has been to the use of the pejorative term "forging" or
"forgery." An RST packet is a perfectly good and legitimate way of
informing both ends of a TCP connection that it is being terminated.

To understand why, think about what would happen if the socket were
merely blocked by firewalling. The two sides would retry... and retry...
and retry before giving up. And by doing so, they'd congest the
network -- defeating the very purpose of terminating the socket. RST
packets, on the other hand, inform the two sides that the socket has
been terminated and there is no point in continuing to retry. Fast,
efficient, and actually better for the ends (in terms of resource
consumption) than the alternative.
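
The distinction matters at the packet level: an RST is just an ordinary TCP
segment with one flag bit set. A minimal sketch, using only the Python
standard library (the checksum is left zero, so this is purely illustrative
of the header layout, not an injectable packet):

```python
import struct

# TCP flag bit positions (per the TCP specification, RFC 793)
FIN, SYN, RST, PSH, ACK = 0x01, 0x02, 0x04, 0x08, 0x10

def build_tcp_header(src_port, dst_port, seq, ack_seq, flags):
    """Pack a minimal 20-byte TCP header (no options, zero checksum).

    A device that terminates a connection sends a segment with the RST
    bit set; the endpoints then stop retransmitting immediately instead
    of retrying until their timers expire.
    """
    offset_flags = (5 << 12) | flags  # data offset = 5 words, plus flag bits
    return struct.pack("!HHIIHHHH",
                       src_port, dst_port, seq, ack_seq,
                       offset_flags,
                       65535,  # advertised window
                       0,      # checksum (left zero in this sketch)
                       0)      # urgent pointer

hdr = build_tcp_header(80, 54321, 1000, 2000, RST | ACK)
flags_field = struct.unpack("!H", hdr[12:14])[0]
print(bool(flags_field & RST))  # True -- the RST bit is set
```

A receiver seeing that bit tears the connection down at once, which is why
the RST approach avoids the retry storm a silent firewall drop would cause.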

I have also objected to Comcast and others
demonizing particular application protocols or network users.

Again, the pejorative term "demonizing."

While it is possible to block rogue applications without knowing what
they are, it only makes sense to apply knowledge of those applications'
characteristics and behavior if one has that knowledge. Just as a
virus checker uses "patterns" to identify and remove an undesirable
application from the user's computer, a bandwidth management appliance
can and should be able to identify an application that is hogging
bandwidth. Such knowledge is in fact essential if the goal is merely to
throttle the application back rather than stop it cold. Knowledge always
helps. Had the bandwidth limiting appliance used by Comcast possessed
greater knowledge of protocols and done more careful identification of
applications, there would not have been a problem with Lotus Notes on its
networks -- a problem for which Comcast was harshly criticized.
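
The pattern-matching idea can be sketched in a few lines. The signature
table below is illustrative -- real appliances use far larger pattern
sets -- though the BitTorrent handshake really does begin with the byte
0x13 followed by the string "BitTorrent protocol":

```python
# Illustrative signature table: byte-pattern prefixes mapped to protocols.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent",  # BitTorrent handshake prefix
    b"GET ":                    "http",
    b"SSH-":                    "ssh",
}

def classify(payload: bytes) -> str:
    """Return a protocol label for a packet payload, or 'unknown'."""
    for prefix, proto in SIGNATURES.items():
        if payload.startswith(prefix):
            return proto
    return "unknown"

# A policy can then throttle rather than block: e.g. hand "bittorrent"
# flows to a low-priority queue instead of resetting them outright.
print(classify(b"\x13BitTorrent protocol extensions..."))  # bittorrent
print(classify(b"GET /index.html HTTP/1.1"))               # http
```

Classifying a flow this way, rather than guessing from port numbers or
volume alone, is what lets a device throttle one protocol while leaving
lookalike traffic (such as Lotus Notes) untouched.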

I particularly object to those who criticize the Internet Architecture

I note the capital letters here, as if there were some edict from on
high that was infallible or perfect.

or IETF without a thorough understanding of the technical issues.

In what way do those critics fail to understand the technical issues?

While many aspects of network performance have become engineering
issues, there are still others that are more properly research issues.
Because of the complexity of this area, in my opinion the FCC would be
ill advised to promulgate regulations that affect congestion management.
On the other hand, I would have no problem with the FTC enforcing
transparent customer agreements.

On this, we agree.

With dedicated links such as DSL, congestion can be, should be, and is
managed at the access multiplexer or router.

This may not be sufficient. Congestion may occur elsewhere in the network.

With dedicated links, congestion
appears in the form of a queue inside an intelligent device. At this
point, IETF congestion management mechanisms come into play,

There is only one widely implemented "IETF" congestion management
mechanism, alas. And it is one that operates at the ends.
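
The end-to-end mechanism in question is presumably TCP's congestion
avoidance, whose core is additive-increase/multiplicative-decrease
(AIMD). A toy model, with constants simplified for illustration:

```python
def aimd(cwnd, loss, increase=1.0, decrease=0.5):
    """One round-trip of TCP-style AIMD congestion control.

    Each endpoint grows its congestion window by a fixed increment per
    round trip and halves it when it infers loss -- so congestion is
    managed at the ends, not inside the network.
    """
    return cwnd * decrease if loss else cwnd + increase

cwnd = 10.0
for loss in [False, False, False, True, False]:
    cwnd = aimd(cwnd, loss)
print(cwnd)  # 10 -> 11 -> 12 -> 13 -> 6.5 -> 7.5
```

Because the mechanism lives in the endpoints, a sender that ignores it
(or opens many parallel connections) is not restrained by anything inside
the network -- which is exactly the gap that in-network management tries
to fill.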

The entire concept of "fair" access
depends on what constitutes a "user" and what constitutes "fair" service
for that user. This is something that is determined jointly by the ISP
and the customer when a customer signs up for network service.

On this we also agree. We tell users that their terms of service on a
residential connection include a prohibition against P2P or the operation
of servers.

All I ask
is that these policies be something that ordinary customers as well as
network experts can understand.

Unfortunately, it is often the application providers who prevent them from
understanding it. When a user installs the "downloading" software that lets
him or her access content, he or she may not be properly informed that the
software turns the machine into a server -- consuming its resources and
violating the user's contract with the ISP.

As Brett correctly points out, there is at least one other potential
bottleneck or cost accumulation point, namely the ISP backbone access
link(s). (Depending on geographic considerations, the cost of backbone
bandwidth may be more or less significant than last mile costs.) Routers
attached to backbone access links can use queue management disciplines
to enforce per customer fairness or this can be done at the access
router or access multiplexer. Alternatively, backbone access can be
monitored and users can be discouraged from excessive usage by usage
based tariffs.
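
One such queue management discipline is deficit round robin. A simplified
per-customer sketch (packet sizes in bytes; the quantum and traffic mix
are made up for illustration):

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """Deficit round robin over per-customer queues of packet sizes.

    Each customer accrues `quantum` bytes of sending credit per round
    and may transmit packets while credit lasts, so a heavy customer
    cannot starve light ones no matter how much it queues.
    """
    deficits = {c: 0 for c in queues}
    sent = {c: 0 for c in queues}
    for _ in range(rounds):
        for c, q in queues.items():
            if not q:
                deficits[c] = 0  # idle customers do not bank credit
                continue
            deficits[c] += quantum
            while q and q[0] <= deficits[c]:
                pkt = q.popleft()
                deficits[c] -= pkt
                sent[c] += pkt
    return sent

queues = {"heavy": deque([1500] * 8), "light": deque([500, 500])}
print(drr_schedule(queues, quantum=1500, rounds=3))
# {'heavy': 4500, 'light': 1000}
```

The heavy customer gets only its per-round quantum while the light one
drains completely in the first round -- per-customer fairness without
inspecting what the traffic is.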

As I stated in my remarks to the FCC:

Some parties claim that we should meter all connections by the bit. But this would be bad for consumers for several 
reasons. Firstly, users tell us overwhelmingly that they want charges to be predictable. They don't want to worry about 
the meter running or about overage charges -- one of the biggest causes of consumer complaints against cell phone 
companies. Secondly, users aren't always in control of the number of bits they download. Should a user pay more because 
Microsoft decides to release a 2 gigabyte service pack for Windows Vista? Or because Intuit updates Quicken or 
Quickbooks? Or because a big virus checker update comes in automatically overnight? We don't think so. And we don't 
need to charge them more, so long as they are using their bandwidth just for themselves. It's when third parties get 
hold of their machines, and turn them into resource-consuming servers on our network without compensating us for those 
resources, that there's a problem. Thirdly, charging by the bit doesn't say anything about the quality of the service. 
You can offer a very low cost per bit on a connection that's very unsteady and is therefore unsuitable for many things 
users want to do -- such as voice over IP. And finally, a 
requirement to charge by the bit could spark a price war. You can just imagine the ads from the telephone company: $1 
per gigabyte. And then the ads from the cable company: 90 cents per gigabyte. And then one or the other will start 
quoting in "gigabits" to make its price look lower, and so on and so forth. All Internet providers will compete on the 
basis of one number, even though there's much more to Internet service than that.
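
The gigabyte/gigabit sleight of hand is pure unit arithmetic: a gigabyte
is eight gigabits, so the same tariff can be quoted at an eighth of the
number:

```python
# $1.00 per gigabyte restated per gigabit: a gigabyte is 8 gigabits,
# so the identical price looks eight times smaller per quoted unit.
price_per_gigabyte = 1.00
price_per_gigabit = price_per_gigabyte / 8
print(price_per_gigabit)  # 0.125 -- "12.5 cents per gigabit"
```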

The problem is, small ISPs cannot win or even compete in this price war, especially when -- as is true in most places 
-- the monopolies backhaul their connections to the Internet and thus control their prices. Again, we wind up with 
duopoly.

All I ask is that these charges be open and that the users have a simple way to monitor their usage.

Interestingly, when Rogers Cable attempted to do just this -- to warn users of impending overage charges by placing 
messages in their browser windows -- the "Network Neutrality Squad" jumped on them for "tampering" with Web pages.

Brett has raised a third issue, which is that distributed uploading by
P2P networks is inefficient and uneconomic compared with more
centralized approaches. This may be true in some instances, particularly
with rural networks.

It is true in general. The network overhead is always greater, and bandwidth at any "end" is always more expensive 
than it is at a co-location site on the backbone.

However, when looking at the relative costs of
multiple approaches it is important to consider *all* the costs
involved. These include more than the uplink costs associated with P2P
networks. They include the costs associated with uploading data to
traditional web and ftp servers, the costs of running these servers and
the costs of bandwidth these servers use in sending files.

All of these costs are lower than for P2P.

In some cases
P2P mechanisms will be more efficient than centralized servers. Two
examples come immediately to mind:  (1) A home user "publishing" a file
that is never accessed.

This is a waste no matter what. But it is likely to be rare, and it is a tiny waste compared to the huge amounts of 
waste caused by P2P.

--Brett Glass


-------------------------------------------
Archives: http://www.listbox.com/member/archive/247/=now
RSS Feed: http://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com

