nanog mailing list archives

RE: Can P2P applications learn to play fair on networks?


From: "Frank Bulk" <frnkblk () iname com>
Date: Tue, 23 Oct 2007 22:37:37 -0500


My apologies if I wasn't clear -- my point was that caching toward the
client base changes installed architectures, an expensive proposition.  If
caching is to find any success it needs to be at the lowest possible price
point, which means colocating where access and transport meet, not in the
field.

I have little reason to believe that providers are going to cache for the
internet to solve their last-mile upstream challenges.

Frank 

-----Original Message-----
From: Rich Groves [mailto:rich () richgroves com] 
Sent: Monday, October 22, 2007 11:49 PM
To: frnkblk () iname com; nanog () merit edu
Subject: Re: Can P2P applications learn to play fair on networks?

Frank,

The problem caching solves in this situation is much less complex than what
you are speaking of. Caching toward your client base brings down your
transit costs (if you have any), or lowers congestion in congested areas if
the solution is installed in the proper place. Caching toward the rest of
the world gives you a way to relieve stress on the upstream for sure.
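
To put rough numbers on the transit point (every figure below is made up
for illustration, not measured), a quick Python sketch:

    # Back-of-envelope transit savings from caching toward the client base.
    # All inputs are illustrative assumptions, not measurements.
    peak_p2p_mbps = 2000          # assumed peak P2P demand toward subscribers
    cache_hit_rate = 0.35         # assumed fraction served from the local cache
    transit_usd_per_mbps = 10     # assumed 95th-percentile transit price

    offloaded = peak_p2p_mbps * cache_hit_rate
    savings = offloaded * transit_usd_per_mbps
    print(f"offloaded ~{offloaded:.0f} Mbps, saving ~${savings:,.0f}/month")
    # offloaded ~700 Mbps, saving ~$7,000/month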

Now of course it is a bit outside of the box to think that providers would
want to cache not only for their internal customers but also for users of
the open internet. But realistically that is what they are doing now with
any of these peer-to-peer overlay networks; they just aren't managing the
boxes that house the data. Getting it under control and off of problem
areas of the network should be the first (and not just a future) solution.

There are both negative and positive methods of controlling this traffic.
We've seen the negative, of course; perhaps the positive is to give users
what they want, just on the provider's terms.

my 2 cents

Rich
--------------------------------------------------
From: "Frank Bulk" <frnkblk () iname com>
Sent: Monday, October 22, 2007 7:42 PM
To: "'Rich Groves'" <rich () richgroves com>; <nanog () merit edu>
Subject: RE: Can P2P applications learn to play fair on networks?


I don't see how this Oversi caching solution will work with today's HFC
deployments -- the demodulation happens in the CMTS, not in the field. And
if we're talking about decoupling the RF from the CMTS, which is what is
happening with M-CMTSes
(http://broadband.motorola.com/ips/modular_CMTS.html), you're really
changing an MSO's architecture.  Not that I'm dissing it, as that may be
what's necessary to deal with the upstream bandwidth constraint, but that's
a future vision, not a current reality.

Frank

-----Original Message-----
From: owner-nanog () merit edu [mailto:owner-nanog () merit edu] On Behalf Of
Rich Groves
Sent: Monday, October 22, 2007 3:06 PM
To: nanog () merit edu
Subject: Re: Can P2P applications learn to play fair on networks?


I'm a bit late to this conversation but I wanted to throw out a few bits of
info not covered.

A company called Oversi makes a very interesting solution for caching
BitTorrent and some Kad-based overlay networks, all done through some cool
strategically placed taps and prefetching. This way you could "cache out"
at whatever rates you want and mark traffic however you wish as well. This
does move a statistically significant amount of traffic off of the upstream
and onto a gigabit Ethernet (or similar) attached cache server, solving a
large part of the HFC problem. I am a fan of this method as it does not
require a large footprint of inline devices, but rather a smaller footprint
of statistics-gathering sniffers and caches distributed in places that make
sense.
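
The sniffer side of this is conceptually simple: a passive tap can spot a
BitTorrent peer handshake and extract the info_hash that identifies the
swarm, which is what a cache would key its prefetching on. A minimal Python
sketch of just the handshake parsing (the capture plumbing is omitted):

    # BitTorrent peer-wire handshake layout:
    # <1 byte: 19><"BitTorrent protocol"><8 reserved><20-byte info_hash><20-byte peer_id>
    PSTR = b"BitTorrent protocol"

    def parse_handshake(payload: bytes):
        """Return the swarm's info_hash as hex, or None if this payload
        is not a BitTorrent handshake."""
        if len(payload) < 68 or payload[0] != 19 or payload[1:20] != PSTR:
            return None
        return payload[28:48].hex()   # bytes 28..47 are the info_hash

    sample = bytes([19]) + PSTR + b"\x00" * 8 + b"\xab" * 20 + b"\xcd" * 20
    print(parse_handshake(sample))    # 40 hex chars identifying the swarm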

Also, the people at BitTorrent Inc have a cache discovery protocol so that
their clients have the ability to find cache servers with their hashes on
them.

I am told these methods are in fact covered by the DMCA, but remember, I am
no lawyer.

Feel free to reply directly if you want contacts.


Rich


--------------------------------------------------
From: "Sean Donelan" <sean () donelan com>
Sent: Sunday, October 21, 2007 12:24 AM
To: <nanog () merit edu>
Subject: Can P2P applications learn to play fair on networks?


Much of the same content is available through NNTP, HTTP and P2P. The
content part gets a lot of attention and outrage, but network engineers
seem to be responding to something else.

If it's not the content, why are network engineers at many university
networks, enterprise networks, and public networks concerned about the
impact particular P2P protocols have on network operations?  If it were
just a single network, maybe they are evil.  But when many different
networks all start responding, then maybe something else is the problem.

The traditional assumption is that all end hosts and applications
cooperate and fairly share network resources.  NNTP is usually considered
a very well-behaved network protocol.  Big bandwidth, but sharing network
resources.  HTTP is a little less well behaved, but still roughly seems to
share network resources equally with other users. P2P applications seem
to be extremely disruptive to other users of shared networks, and cause
problems for other "polite" network applications.
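
A big part of the reason is that TCP shares a bottleneck roughly per flow,
not per user, and P2P clients open many flows at once. An illustrative
Python calculation (the flow counts are invented):

    # Per-user share of a bottleneck when capacity splits per TCP flow.
    users = {"nntp_reader": 1, "web_browser": 4, "p2p_client": 40}
    total = sum(users.values())

    for name, flows in users.items():
        print(f"{name:12s} {flows:3d} flows -> {flows / total:5.1%} of the link")
    # The p2p_client ends up with ~89% of the link despite being 1 of 3 users.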

While some of these things may seem trivial from an academic perspective,
the tools available to network engineers are much more limited.

User/programmer/etc. education doesn't seem to work well. Unless the
network enforces a behavior, the rules are often ignored. End users
generally can't change how their applications work today even if they
wanted to.
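
When the network does enforce a behavior, it is typically something like a
per-subscriber policer. A minimal token-bucket sketch in Python (the rate
and burst values are arbitrary):

    import time

    class TokenBucket:
        """Per-subscriber policer: traffic within (rate, burst) passes;
        the rest is dropped or remarked."""
        def __init__(self, rate_bps, burst_bits):
            self.rate, self.burst = rate_bps, burst_bits
            self.tokens, self.last = burst_bits, time.monotonic()

        def allow(self, packet_bits):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bits:
                self.tokens -= packet_bits
                return True
            return False  # out of profile: drop or remark

    bucket = TokenBucket(rate_bps=1_000_000, burst_bits=100_000)  # 1 Mbps
    print(bucket.allow(12_000))  # True while the subscriber is within profile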

Putting something in-line across a national/international backbone is
extremely difficult.  Besides, network engineers don't like additional
in-line devices, no matter how much the salespeople claim they're fail-safe.

Sampling is easier than monitoring a full network feed.  Using NetFlow
sampling or even SPAN port sampling is good enough to detect major
issues.  For the same reason, asymmetric sampling is easier than requiring
symmetric (or synchronized) sampling.  But it also means there will be
a limit on the information available to make good and bad decisions.
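
The scale-up from sampled data is simple, and so is its limitation:
multiply observed counts by the sampling rate, and accept that rare flows
may never be sampled at all. A toy Python estimate (rate and counts are
invented):

    # Estimating per-protocol volume from 1-in-N sampled flow records.
    SAMPLING_RATE = 1000                      # assumed 1-in-1000 sampling
    sampled_kb = {"p2p": 4200, "http": 1100, "nntp": 300}   # observed KB

    for proto, kb in sampled_kb.items():
        est_gb = kb * SAMPLING_RATE / 1_000_000   # scale up, KB -> GB
        print(f"{proto:5s} ~{est_gb:4.1f} GB estimated")
    # Flows much smaller than 1/SAMPLING_RATE of traffic may be missed entirely.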

Out-of-band detection limits what controls network engineers can implement
on the traffic. USENET has a long history of generating third-party cancel
messages. IPS systems and even "passive" taps have long used third-party
packets to respond to traffic. DNS servers have been used to re-direct
subscribers to walled gardens. If applications responded to ICMP Source
Quench or other administrative network messages, that might be better; but
they don't.
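
Those third-party packets usually mean a forged TCP RST matching an
observed flow. A sketch using Scapy, a real packet-crafting library (the
addresses and sequence number are placeholders standing in for fields read
off the wire; sending requires root):

    from scapy.all import IP, TCP, send

    def reset_flow(src, dst, sport, dport, seq):
        # The RST must land inside the receiver's window, which is why
        # an out-of-band device needs the flow's live sequence number.
        rst = IP(src=src, dst=dst) / TCP(sport=sport, dport=dport,
                                         flags="R", seq=seq)
        send(rst, verbose=False)

    # e.g. reset_flow("192.0.2.10", "198.51.100.7", 51413, 6881, seq=123456789)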





