Interesting People mailing list archives

Re: Dave Farber Warns Against Net Neutrality (Washington Post)


From: David Farber <dave () farber net>
Date: Wed, 30 Sep 2009 17:34:29 -0400



Begin forwarded message:

From: Tony Aiuto <aiuto () google com>
Date: September 29, 2009 9:33:05 PM EDT
To: Richard Bennett <richard () bennett com>
Cc: Lauren Weinstein <lauren () vortex com>, Vint Cerf <vint () google com>, Bob Frankston <Bob19-0501 () bobf frankston com>, nnsquad () nnsquad org
Subject: [ NNSquad ] Re: Dave Farber Warns Against Net Neutrality (Washington Post)


On Tue, Sep 29, 2009 at 4:26 PM, Richard Bennett <richard () bennett com> wrote:

Nobody wants a best-efforts network. People want a network that delivers their bits within whatever bounds of latency and price are pertinent to the application that generates and consumes the bits. People don't care about bits, they care about the information that's encoded in the bits.

Richard:

You are almost correct. I want the application on my end of the connection to specify the bounds of latency and price. What I don't want is the network operator guessing what is more important to me. When the network can't deliver the latency I want, I expect it to do its best to deliver what it can, at an appropriate reduction in price.


Similarly, nobody cares about a content-indifferent network, they care about a network that allows them to perform the content transactions they care about, again within boundaries of latency and price. NN advocates tend to conflate the interactions people want with the network with the technical means by which these transactions are carried out. It's more useful to take a layered approach to these things than to mash unrelated issues together.

Again, almost right. People don't care about what we *call* the network, but they do care about the effect. A content-indifferent network IS one where the applications can determine the relative worth of packets. Content indifference is the tool that allows the user to perform the transactions they care about. If the network tries to undermine my valuation, it acts as an interfering market maker. Centuries ago, commerce realized that market brokers should not charge different commissions for different transactions of the same class. Networks should not change policy based on their assumption of relative valuations. All bits are just bits, until the user's machine assigns a value to them.

We would not tolerate a world where a broker could charge a lower commission for stocks that they underwrite. Why would we tolerate a world where basic infrastructure could do that?
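To make that concrete, here is a minimal sketch of an endpoint expressing its own valuation. It assumes a Linux host and the standard sockets API; the DSCP class, address, and port are illustrative placeholders, not anything discussed in this thread:

    # A minimal sketch, assuming a Linux host; the address and port below
    # are hypothetical placeholders for illustration only.
    import socket

    DSCP_EF = 46              # "Expedited Forwarding", a low-latency class (RFC 3246)
    TOS_BYTE = DSCP_EF << 2   # DSCP occupies the upper six bits of the old ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

    # Datagrams sent on this socket now carry the application's own marking.
    # A content-indifferent network can honor or ignore it, but it never has
    # to guess the packet's worth by inspecting the payload.
    sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))

The point of the sketch is only that the valuation originates at the edge; whether the network honors the mark is a separate question.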



Bob Frankston wrote:

How do you create the illusion of a best-efforts content-indifferent network other than by doing it?


From: Richard Bennett [mailto:richard () bennett com]
Sent: Monday, September 28, 2009 23:08
To: Bob Frankston
Cc: 'Vint Cerf'; 'Lauren Weinstein'; nnsquad () nnsquad org; Dave Farber
Subject: Re: [ NNSquad ] Re: Dave Farber Warns Against Net Neutrality (Washington Post)


Application developers want to be able to act as if the network were perfectly transparent, neutral, and reliable. The goal of a good network architecture is to create that illusion for them. That's not to say that the means by which this illusion is created demand a passive network or a passive network operator. You can have a network that looks to you like it's a fat dumb pipe or you can have one that really is, but you can't have both. Why does anyone care, anyhow?

RB

Bob Frankston wrote:

The only difference between internetworking and networking is whether we have a flat address space or a two-tier address space. Otherwise it’s a network. Sure, I’d prefer ambient connectivity (http://rmf.vc/?n=IAC ) but if we’re not willing to take the next step then it’s just a network, and whatever applied to the Inter-network applies to the network. And that includes being indifferent to the content of the packets.


So I’m confused. If the success of the Internet as a concept is due to being agnostic about traffic, then why would we want to declare the end of a very successful experiment and revert to the old days when the network operator shaped the network to favor certain services? The majority may do the same-old in lock-step, but the future lies with those who are ahead of, or maybe just away from, the crowd. Who knows which will be the new same-old?


Ideally there would be no need for regulation and we would instead apply antitrust to remove the conflict of interest inherent in funding the network by selling services. Service-funding is the telephony model. And as long as we’re stuck in telephony, then by insisting that bits are bits (as I explain in http://frankston.com/?N=IPM3 ) the FCC is honoring the experiment by trying to assure it continues. I do believe that it’s awkward to do so by adding more rules, but it’s better than giving up entirely on the successful experiment.



-----Original Message-----
From: nnsquad-bounces+nnsquad=bobf.frankston.com () nnsquad org [mailto:nnsquad-bounces+nnsquad=bobf.frankston.com () nnsquad org] On Behalf Of Richard Bennett
Sent: Monday, September 28, 2009 04:59
To: Vint Cerf
Cc: Lauren Weinstein; nnsquad () nnsquad org
Subject: [ NNSquad ] Re: Dave Farber Warns Against Net Neutrality (Washington Post)


The problem with turning the Internet over to the regulators at this point is two-fold, as I see it. In the first place, the Internet community hasn't done an adequate job of explaining the design rationale for the system as it exists today. You can see from my paper that basic concepts like end-to-end have been justified according to a multitude of different reasons. When we see this in tech specs, it's a clue that the principle in question is more a side-effect than a true principle. The "stupid network," end-point-heavy formulations are misleading. People treat the Internet like a network, because that's what they need. The architecture of an Internet is simply agnostic about questions of network reliability, traffic shaping, active queue management, and tiers of service simply because they're out of scope; they're network issues rather than Internetwork issues. An Internet isn't neutral or non-neutral; if anything, it's neutral about neutrality. The real rationale for the datagram network architecture was to create a space for experimentation; that's why everybody embraced it as soon as it was formulated. This internetting thing was actually a flop; we actually have one big network made of self-similar parts, not a bunch of different ones. Interconnection works best if everybody runs all the same protocols, so we do.

So when you ask the FCC and similar bodies in other countries to regulate the Internet, they will happily take the task, but they're simply going to fall back on their telephony models because lawyers are addicted to precedent and nobody has given them a better frame of reference. And once the regulators start making rules, you're going to lose the little bit of dynamism that's still in the Internet; how much technical progress has there been in the phone network since the Carterfone rules went down? Not a hell of a lot. I don't want the one big network frozen like a fly in amber just yet.


Vint Cerf wrote:

> Dave,
>
> I think some very serious effort is underway at FCC to be much more
> precise about what is meant and measurable about the notion of
> transparent and non-discriminatory service. I agree that clarity is
> important here. I think it is possible to achieve clarity and that it
> is important that we attempt this because to ignore the problem space
> is to leave the users very much at risk.
>
> vint
>
> On Sep 26, 2009, at 4:42 PM, David Farber wrote:
>
>> Vint, I believe you misinterpret what I said in writing and
>> interviews. I have never said that regulation is not good. What I
>> have said is that the hazy and ambiguous terms that have been used
>> are dangerous to innovation. Suppose you were about to build a new
>> building and the regulations said it should be "reasonable", "open",
>> "fair". An architect attempting to design such a building would face
>> a very confused task. You may have the building mostly built and
>> then find that your assumptions about what these terms mean were
>> wrong. You may face lawsuits by your neighbors over what these terms
>> mean as well as facing the need to sue the city etc.
>>
>> The bane of many such regulations is that all they do is slow down
>> innovation and create jobs for lawyers.
>>
>> I'd be happy to join a SMALL group which attempted to create a set of
>> principles and a framework for regulation which avoided these pitfalls.
>>
>> Dave
>>
>> I have said often that leaving the future of the Internet to the
>> Congress is even more dangerous. Witness the 96 act and what it did
>> to the CLECs.
>>
>> On Sep 26, 2009, at 7:51 AM, Vint Cerf wrote:
>>
>> I think Dave's position, which is largely unchanged, is that
>> regulation is never right. Plainly, I disagree here and believe that
>> it is entirely possible to establish a fair framework in which it is
>> not necessary for broadband service providers to do anything more
>> than manage congestion and allocation of capacity in a fashion
>> commensurate with the service level to which the users have subscribed.
>>
>> vint
>>
>> On Sep 25, 2009, at 10:43 PM, Lauren Weinstein wrote:
>>
>>> Dave Farber Warns Against Net Neutrality (Washington Post)
>>>
>>> http://bit.ly/uAC2i  (Washington Post)
>>>
>>> --Lauren--
>>> NNSquad Moderator



--
Richard Bennett
Research Fellow
Information Technology and Innovation Foundation
Washington, DC




--
http://go/bat




