Interesting People mailing list archives

Re: the undead urban myth of the LOC/EID split -- is science just politics by another name? [long response]


From: David Farber <dave () farber net>
Date: Thu, 6 Nov 2008 03:48:35 -0500



Begin forwarded message:

From: Steve Crocker <steve () shinkuro com>
Date: November 6, 2008 2:59:41 AM EST
To: dave () farber net
Cc: Steve Crocker <steve () shinkuro com>, "ip" <ip () v2 listbox com>
Subject: Re: [IP] WORTH READING the undead urban myth of the LOC/EID split -- is science just politics by another name? [long response]

The TIPs also sent single- or very-few-character messages. They were quite limited in the amount of memory.

However slow the Tenex code might have been, it was quite agile with respect to handling both long and short messages. If I used a TIP to telnet to Multics, the results were very poor. Multics allocated one message buffer at a time for a telnet session but permitted the message to be very long. The TIP could only send very short messages. The upper levels of the Multics system didn't start processing input until a full line, with an end-of-line indicator, had been received, and it was pretty slow in responding to the "annoying" wake-ups for each character. If one telnetted from a TIP to a Tenex system and then telnetted from the Tenex system to Multics, the response was *much* faster.

Steve

On Nov 6, 2008, at 9:42 AM, David Farber wrote:



Begin forwarded message:

From: John Day <day () std com>
Date: November 5, 2008 8:49:37 PM EST
To: "David P. Reed" <dpreed () reed com>, dave () farber net
Cc: ip <ip () v2 listbox com>, "'Richard Bennett'" <richard () bennett com>, Frankston Bob <bobf () frankston com>, John Day <day () std com>
Subject: Re: [IP] Re: the undead urban myth of the LOC/EID split -- is science just politics by another name? [long response]

At 10:27 -0500 2008/11/05, David P. Reed wrote:
In 1975, the purpose of the ARPANET and the Internet was *remote login*. Almost all traffic was from character- or line-at-a-time Telnet clients and their concentrator equivalents called TIPs. (And the X.25 spinout called Telenet that Larry Roberts started was optimized even further for small packets.) In fact, the raging *technical* argument was that the IP and TCP headers were incredibly wasteful for terminal traffic. That was one of the reasons we were forced into 32-bit IP addresses by conservative engineers who objected to the variable-length addresses proposed rather seriously by a number of us.

Errr, Dave, you are right that most of the traffic in 1975 was Telnet. But I know of no one who thought that was going to be the primary use, nor was anyone designing for it. In 1975-76, we had a production distributed database system running on the ARPANET serving tens of users in the Chicago area, and we were delving into other aspects of the problem. In late 1973-74, there were several proposals for a next generation of "resource sharing" protocols: the National Software Works was being done, Farber's Ring at Irvine, the Datacomputer, and several other things.

As for character-at-a-time traffic, this was unique to the Tenexes, and many of us thought it was pretty silly given the resource constraints at the time. (Not in the network; the Tenex code was so slow that one could type a line and a half ahead before the echoes started coming in. You might have been able to type further, but by then I usually waited for it to catch up to see what I had.) I would direct you to Padlipsky's RFC#1 (as in Ritual For Catharsis) for a delightful screed on the problems (some of which Mike would probably call political, but you can ask him) of dealing with the Big Bad Neighbor who insisted on delivering coal one lump at a time. ;-) Line-at-a-time would have been, and was, fine for using Multics and other systems. Character-at-a-time could wait until at least BBN figured out how to speed up their OS code! ;-) They tried all sorts of things to improve it. Remember the Retransmission and Echo Option for Telnet that never worked?

It is really unfortunate that ARPA backed off from pursuing the vision they had started. There is a good chance that if those new efforts had not been shut down in early 1974, we wouldn't be having the problems we are seeing today.

Take care,
John


I find it strange and kind of amazing that someone who claims historical knowledge would think that the projects were set up to "favor" file transfers. Huh???

Revisionism is something that politicians do to history when it doesn't fit their dogma. An ideological partisan might see the history of the Internet as somehow pushing the idea that ISPs should be forced to do something they don't like. That's what partisans who see politics everywhere tend to do. But in fact, there were NO ISPs on the horizon in 1978-1990. So what political agenda can we argue for, other than one that lives in some revision and delusion of the past?

David Farber wrote:


Begin forwarded message:

*From:* Richard Bennett <richard () bennett com>
*Date:* November 4, 2008 6:53:19 PM EST
*To:* Bob Frankston <Bob19-0501 () bobf frankston com>
*Cc:* "'John Day'" <day () std com>, dave () farber net, "'ip'" <ip () v2 listbox com>, "'David P. Reed'" <dpreed () reed com>, "'Lauren Weinstein'" <lauren () vortex com>
*Subject:* *Re: the undead urban myth of the LOC/EID split -- is science just politics by another name? [long response]*

Citing one's own blog posts as authority doesn't go very far toward establishing the legitimacy of one's point of view, Bob. The general argument against prioritization says that as long as capacity is added to the network faster than it can be consumed, we'll all be golden. But if capacity has to be added in such quantity that the least latency-sensitive application gets the same service as the most latency-sensitive application, the task becomes impossible. The simple reason is that the very same technology - high-speed interfaces of some particular type - that allows bandwidth to be made available also allows it to be consumed. So the hidden assumption behind this "solution" is a magic technology for bandwidth creation that is better than the available technologies for bandwidth consumption. There is no such technology today, nor will there ever be one in the future. Networks for computers are made out of computers, you see, so we give bandwidth and take it away at the same time.

I found an odd passage in Wikipedia today:

http://en.wikipedia.org/wiki/From_each_according_to_his_ability,_to_each_according_to_his_need

  From each according to his ability, to each according to his need
  (or needs) is a slogan popularized by Karl Marx in his 1875
  Critique of the Gotha Program. The phrase summarizes the
  principles that, under a communist system, every person should
  contribute to society to the best of their ability and consume
  from society in proportion to their needs, regardless of how much
  they have contributed. In the Marxist view, such an arrangement
  will be made possible by the abundance of goods and services that
  a developed communist society will produce; the idea is that there
  will be enough to satisfy everyone's needs.


Where have we heard that argument about "abundance" before? Ha, no political agenda indeed.

RB

Bob Frankston wrote:
I need to run off now to a political event - yes, I do deign to do politics. Probably not worth another round, but I do feel obliged to correct circular "reasoning". But I don't want to go round in circles on this - in http://www.frankston.com/?Name=IPTelecomCosts I pointed out that price (and capacity) problems were created by the telecom industry doubling down on its mythology. I've already written a lot about why QoS doesn't make sense. The idea that carriers have to troll the network to discover the true intent of the bits is just this kind of circular reasoning that adds the very complexity that causes these problems. As to Ethernet - you say it's about commercial interests. Sure - but only very late in the game. The technology itself came first. And now we have telcos who impose their will on those of us who just want to exchange packets, forcing us instead to contend with the speed-traps of billable events.

*From:* Richard Bennett [mailto:richard () bennett com]
*Sent:* Tuesday, November 04, 2008 18:10
*To:* Bob Frankston
*Cc:* 'John Day'; dave () farber net; 'ip'; 'David P. Reed'; 'Lauren Weinstein'
*Subject:* Re: the undead urban myth of the LOC/EID split -- is science just politics by another name? [long response]

Bob, let's not confuse the network we call Ethernet today with the old Aloha-on-a-wire system devised by David Boggs and the others at PARC. The Ethernet of today doesn't do CSMA/CD, doesn't move all frames at the same priority, and doesn't invest all intelligence at the end points. We kept the name "Ethernet" for marketing purposes as all of the original features were stripped away, but all that remains of the PARC creation is the address format.

Re: your odd remarks about voice, apparently you're not aware that the advent of P2P made VoIP quality suffer, and that the management actions deployed by Comcast and others had as their first objective the restoration of good quality to VoIP. The reason is fairly intuitive to network engineers: P2P uses hundreds of TCP streams per application instance, hence it subverts Jacobson's congestion control. The DoD suite lacks per-user rate-limiting, relying on the historical mode of FTP usage (one stream per user) to accomplish the same via Jacobson. Hence ISPs use DPI to identify streams and then adjust priority accordingly. This is perfectly justifiable from the standpoint of providing each application with the service it requires. As Marx said, "from each according to his abilities, to each according to his needs." This is Networking 101, but it may help to think of it as Affirmative Action if you need the control structures embedded in the network to mimic your vision of the Utopian society.
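A toy back-of-the-envelope illustrates the per-flow fairness point. This is an idealization, not a claim about real TCP dynamics: assume each flow converges to an equal share of the bottleneck, which only approximates what Jacobson's algorithm does, and all numbers are invented.

# Toy model of per-flow fair sharing at a bottleneck. Idealization:
# each TCP flow gets ~1/N of the link, which only approximates what
# Jacobson's congestion control actually does.

LINK_MBPS = 100.0

def per_user_share(flows_per_user):
    """Split the link per FLOW, then total each user's take."""
    total_flows = sum(flows_per_user.values())
    return {user: LINK_MBPS * n / total_flows
            for user, n in flows_per_user.items()}

# Four single-flow users vs one P2P client running 40 parallel flows.
users = {"voip-1": 1, "voip-2": 1, "web-1": 1, "web-2": 1, "p2p": 40}
for user, mbps in per_user_share(users).items():
    print(f"{user:8s} {mbps:6.2f} Mb/s")   # p2p gets ~91 of 100 Mb/s
# Per-USER rate limiting would instead give each of the five users 20 Mb/s.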
RB

Bob Frankston wrote:
Huh?
"An undifferentiated datagram network service favors applications with the greatest appetite for bandwidth and the greatest tolerance for latency. Hence, a "best effort" datagram network which treats all packets the same is by its very nature optimized for file transfer over all other applications." Where do you come up with such nonsense, when we've seen that these protocols have made the cost of voice drop to near zero while increasing the quality of each voice call and allowing far more creativity? My intro to Ethernet was sitting in class as Bob described his project in 1973. Let's not confuse late-stage just-so stories with the origins.

*From:* Richard Bennett [mailto:richard () bennett com]
*Sent:* Tuesday, November 04, 2008 16:50
*To:* Bob Frankston
*Cc:* 'John Day'; dave () farber net; 'ip'; 'David P. Reed'; 'Lauren Weinstein'
*Subject:* Re: the undead urban myth of the LOC/EID split -- is science just politics by another name? [long response]

My goodness, from a simple observation about the social process that draws up networking standards, I now find myself tarred and feathered with a political viewpoint in favor of some sort of carrier optimization, and all without saying a word. The Internet is a wonderful thing.

Let's try and correct a few misunderstandings. In the first place, IP and the Internet are not and never have been "application agnostic" or a "level playing field." An undifferentiated datagram network service favors applications with the greatest appetite for bandwidth and the greatest tolerance for latency. Hence, a "best effort" datagram network which treats all packets the same is by its very nature optimized for file transfer over all other applications. File transfer is a machine-machine interaction, and machines can handle more data than humans and have greater patience. It turns out that file transfer is a very general sort of application paradigm, insofar as a number of apparently different applications ride on it - e-mail, video streaming, web browsing, Usenet, and software distribution - but it is a single application. Turning the Internet into an actual "application agnostic network" that can support large numbers of real-time interactions as well as file transfers is a long-standing research interest of mine, and we certainly are far from having achieved it on the IPv4 Internet today.
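A toy queueing sketch (all numbers invented) shows the sense in which a FIFO "best effort" queue favors the patient bulk transfer over the latency-sensitive application:

# Toy FIFO bottleneck: a voice packet arrives behind a burst of
# file-transfer packets and must wait for the whole burst.

LINK_BPS = 1_000_000        # 1 Mb/s bottleneck link
PKT_BITS = 1500 * 8         # 1500-byte packets

per_pkt = PKT_BITS / LINK_BPS       # serialization time per packet (s)
burst = 50                          # bulk packets already queued

print(f"{per_pkt * 1e3:.0f} ms per packet on the wire")
print(f"voice packet queued {burst * per_pkt * 1e3:.0f} ms")
# 12 ms per packet, 600 ms of queueing delay: noise to the file
# transfer's throughput, fatal to a conversation. FIFO cannot tell
# the two apart; that is the sense in which it "optimizes" for bulk.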

My introduction to networking standards was on the IEEE 802.3 Working Group in 1983, where I was part of a group that had the crazy idea of taking Ethernet off of coax cable and putting it on twisted pair with a hub. Our group produced the 1BASE5 standard, and its members went on to create 10BASE-T and 100BASE-X. The general idea proved pretty popular. We worked a lot faster than subsequent committees, completing our work in two years, more or less. We were in the middle of several controversies of a political nature. Our official title was the "Low-Cost LAN Task Force," and IBM had a competing proposal of its own, based on the PC Network designed by Sytek for home networking. It was a 2 Mb/s system that ran on broadband coax cable. The general idea was that one of our two task forces would be allowed to create a standard, but not both. IBM stuffed the committee with its employees, and voting rules had to be changed to prevent them from stuffing the ballot box.

Another group of opponents disliked our choice of wiring, as they had been operating under the assumption that LANs needed a very low intrinsic error rate, which they felt we couldn't guarantee in all cases. The answer to this argument was to develop a realistic, statistical model of noise rather than sticking with the "sum of all worst cases" deterministic model that was in force thanks to certain hardware guys who wanted networks to be immune from arc welders, forklifts, and all manner of warehouse equipment. So we reformed the noise model.

Another group hated our approach because it replaced an "end-to-end" network with a dumb core - the coax - by a system that put intelligence in the center of the network. They maintained that the hub was a single point of failure that would enable organizations to have too much control over network traffic. So we devised a fault-tolerant hub and waved our hands at the purely political desire to distribute intelligence to an excessive degree. People were aware that DIX Ethernet was unreliable, as the failure of any transceiver actually brought down the network.

It was fairly apparent by the end of this process that some people were injecting values into the engineering process that didn't have much to do with engineering, and that they didn't map their values to the technology in a consistent way. The desire to purge the network of intelligence, for example, was apparently aligned with a general critique of authority that we see in such statements as "we reject kings and princes in favor of rough consensus and running code."

Taking a more comprehensive approach, we can easily see that the values of grass-roots decision-making, radical democracy, and peer communication are best advanced by networks that work well, not by networks that try to mimic the structure of a radical democracy. Packets aren't people, and all the network has to do is honor their requirements as far as it can.

So sure, politics in every sense of the word are involved in social systems with voting. That's just the way people roll.

RB

Bob Frankston wrote:
I agree strongly that today's Internet hasn't done enough to provide all players with opportunity. But then it is a prototype. At a purely technical level we need to go further to decouple the EIDs from the provider-provided identifiers so we are not beholden to what I call "Internet Inc" for something so vital as our identity. The bigger problem is that we can't take advantage of even today's Internet as long as we must justify our applications to a gatekeeper with an extreme conflict of interest. But we also need to challenge the political agenda that puts carriers in the untenable position of controlling both the content and the network itself.

The danger in Bennett's arguments (and the associated worldview) is the assumption that the network can, let alone should, be optimized for current applications. That assures that the network favors and protects today's players against the future. It gives carriers a reason to exist and, more important, limits opportunity to that which doesn't threaten the status quo - and that's not much opportunity at all. I argue that it is "opportunity" that is the essential measure of success, with economic value being a consequence, though one we should strive for. Why would we want to favor the incumbents over the economy, especially these days?

<<< Snip snip - the rest of this goes into far more detail so those who want the central point can stop here. Those who enjoy revisiting and reliving history can continue. >>>

I do find myself playing economist - but with an understanding of dynamic systems as demonstrated by the Internet. I see both accounting and economics as ways of understanding systems and information. I don't confuse economics with monetary policy any more than I'd confuse accounting with bookkeeping. But then I can accept the idea of politics as being part of selling your idea to others, though I don't subscribe to the idea that it's just politics with no other measures of reality.

"In our case, it is clear that some people are uncomfortable with the non-determinism of datagrams and others embrace it. I used to think it was an age thing, but it isn't." - Yes indeed. This is why I cite Lakoff. Not because his particular dichotomy is "correct" but because he recognizes that our world-view is driven by our basic conceptual models of the world. This is why I view Bennett's arguments as being driven by a world-view rather than mercenary interests. Unfortunately it means that they are not open to rational counter-arguments, so I don't expect to convince him. But I do see it as necessary to challenge them for those who are open to a discussion and can see the economic and societal damage done by 80 or more years of telecom.

I do want to respond to "Be careful here, Bob, your counter example (as is the carriers) is an argument that economic success is equivalent to technical success/rightness." There is no single metric for "success" so we do need to be explicit here. This is why I've thought a lot about the measures and, just as important, the timescale and how systems evolve. This is why I focus on opportunity as a metric rather than current dollars. But at a societal level I do argue that economic success is a basic societal measure - something Jared Diamond has argued. There are other measures, such as spiritual success, even though they may be problematic.

"I doubt that carriers care one whit about proving they are right. They more likely care about whether they are profitable or not. Business models are neither right nor wrong, only more or less profitable. They might be more interested if you gave them an argument that showed them they would be more profitable pursuing a different model. They certainly aren't going to flock to your door for a model that makes their profits less." Of course! While one can argue for a more enlightened self-interest, since the employees are members of society, the tendency is to focus on the short-term metrics. *This is why it is vital to counter those arguments* and not fall for the canard that the marketplace is magic and will right all wrongs. It is why we need to update our view of antitrust and recognize that coupling markets like transport and content is exactly what companies will try to do, and why it is so vital *to not tolerate carriers having the ability to impose their unenlightened self-interest on society*.
Back to technology and stuff …
I do remember the days of the MacAIMS project (1969 - just ran across some of the memos) - even before Date published. Later on there would be arguments about which models, but in the early days the big problem was about efficiency and whether you should embed CCWs (Channel Command Words) in the database itself. The very idea of dealing at a higher level seemed too inefficient. Just like packets did. Sure, there were commercial/political considerations and they did color people's thinking, but that too came later.

I do know that Ethernet and, I presume, Token Ring and other approaches were */initially/* about technical issues. It was only later, when the Internet met telecom, that the full commercial implications came to the fore. But even then, as with databases, many of those involved still argued purely about efficiency and other technical considerations. The confusion came because the challengers were indeed inefficient */by the measures of existing applications and the perceived needs/*. It was only over time, when the power of fungible bits came to the fore and new applications found value, that the metrics shifted. The big surprise was that the Internet model worked even better for existing telco applications once we had enough cheap bits. And we see this again in Bennett's defense of carrier policies that favor existing traffic, which just happens to match their business model and creates a need. We get another level of obfuscation because the apps we happen to use are those that match the existing networks - broadband is good for video because it was designed for video.

Unlike Guns, Germs and Steel we are dealing with an artificial ecology, but as in GGS we have a social order built upon the ecology. Perhaps Diamond's /Collapse/ is more apropos. While nothing is good for everything, we see in these examples that decoupling system elements works very well. Building intelligence into the system tends to work better in the short run, so it wins the arguments framed in terms of the status quo, but it loses if we ever provide opportunity to discover new value. Sometimes we do have to make choices, as in ASCII vs EBCDIC, and, at first, it was one community vs another as far as I could tell. But notice that we converged on Unicode because the value of merging communities was so large. But sometimes corporate dollars do allow you to jump ahead - this is why I like .Net, though I see Mono as a necessary counter-balance, and Microsoft should recognize they have an interest in that community lest their good ideas become isolated and stagnant.

One problem with today's IETF is that we've gotten ourselves dug into a hole in which we have given too much control to corporate interests by focusing too much on the complexity of today's Internet - a self-reinforcing complexity with too many having a stake in turning it into NGN. The IETF has become the incumbency and we need reinvention from the edge. I see DPI as a good example of what not to do - not simply because it's a return to the Ptolemaic carrier-centric model and creates perverse coupling. We should see it as suspect because it is driven by our worst fears and thus needs to be viewed with caution. Our technical decisions may be driven by agendas - but at least we can identify corporate interests. Things are very problematic when we have basic world views that coincide with corporate interests. That's what makes it so hard to have a dispassionate discussion.

*From:* John Day [mailto:day () std com]
*Sent:* Tuesday, November 04, 2008 11:51
*To:* Bob Frankston; dave () farber net; 'ip'
*Cc:* 'Richard Bennett'; 'David P. Reed'; 'John Day'; 'Lauren Weinstein'
*Subject:* Re: the undead urban myth of the LOC/EID split -- is science just politics by another name?
At 11:29 -0500 2008/11/04, Bob Frankston wrote:

  Let's be careful here. It sounds as if Bennett would argue that
  Copernicus was just another political hack who wanted to take
  power away from the church. The argument about connectionless and
  application agnostic protocols echoes the arguments about
  relational vs hierarchical databases. Would we call the triumph
  of relational databases political?

Actually, to be precise, the battle was between the Entity-Relationship model and the relational model. I was not close enough to that battle at its height to comment on its sources, but it was distinctly a religious war that destroyed at least one committee.

One must be careful here. I fear that people are looking for too much Newtonian determinism in the effects. I really doubt that Louis had a political or economic argument in mind when he came up with datagrams. However, he was working from a computer mindset, rather than a phone company mindset. That alone meant that computer companies, especially non-IBM ones, would find his solution aligning with their interests and at odds with the PTTs' interests. None of these ideas come out fully formed, and as progress is made the influence of the different agendas does have a subtle effect. In some cases, the people working on it (as Bennett points out) are thinking in terms of how it benefits their company. More often, the people involved had inclinations toward one or the other. A major revelation in my standards experience was that what the big players did was not always overtly Machiavellian, i.e., they knew the "right" answer but developed arguments for the answer that favored their agenda (although there were exceptions). More often, they simply had people who fundamentally *believed* that the answer that favored the company's agenda was the "right" answer. I observed this first hand for many years, and was even party to discussions in some companies where such considerations were weighed in deciding whom to send to meetings.

In our case, it is clear that some people are uncomfortable with the non-determinism of datagrams and others embrace it. I used to think it was an age thing, but it isn't. On the other hand, when the industry settled on ASCII and IBM did EBCDIC, wasn't that economic? When the industry did Algol and IBM did PL/1? What about UNIX vs POSIX in the 80s, Microsoft vs the industry on several occasions, the recent DVD wars; the list goes on and on. Sometimes the politics and economics are more pronounced than at others, but they are always there.

But in this case, by 1976 we knew our "computer" approach to networking had put us in the middle of a big political and economic battle. And as it intensified into the 80s, it distinctly had an effect on the outcome. If nothing else, it was impossible to clearly investigate the synthesis of connection and connectionless and look for something truly new. Nothing is good for everything. There had to be a more comprehensive view. No, this is not the simplistic "either-or" that Bob and Dave suggest, but much more probabilistic. Which is of course what makes it so interesting to watch, and why I refer to this as the Guns, Germs and Steel of networking.



  I sympathize with David's frustration with Bennett calling the
  basic Internet design decisions political. The arguments between
  the Bell-heads and the Net-heads were (and still are) technical
  not political but some of the deeply held assumptions reflect
  people's worldviews as Lakoff has observed.



These arguments are couched in technical terms but they are very much economic. I have been in far too many of them and have seen it first hand. To assume otherwise impairs your ability to counter the arguments.

  Alas, "application agnostic" does have commercial and political
  implications in both shifting the balance of power and endorsing
  the still controversial assumption we can't prejudge good vs bad
  applications. I chose the loaded words "good and bad" because
  they represent the confusion between technical measures and moral
  measures and what I see as Bennett's tendency to elide the two.



  *Still it is disingenuous to point to discussions that did
  involve commercial agendas and use them to "prove" that all
  decisions were primarily political.*

Again, none of this is that cut and dried. Not everyone was taking their position for the same reason; this is more probabilistic than deterministic. And even if they believed that their positions were technically correct, that was probably why they had been put in the meeting.



  This tendency to "prove by example" reflects the larger
  difficulty we have in coming to terms with science as simply a
  method in which we test (rather than just defend) ideas without
  regard to their higher morality. Carriers "prove" they are right
  because that's the way they have always done it - despite the
  obvious counter-example of the connectionless Internet itself.

Be careful here, Bob: your counter-example (as is the carriers') is an argument that economic success is equivalent to technical success/rightness. This is the same argument that DOS is the greatest OS ever built. It has never bothered me that the outside world saw the Internet as a rousing success. That gave the rest of us time to fix the problems we knew we hadn't had time to address. The real danger is to drink our own kool-aid. Imagine the Internet without Moore's Law; where would we be then?

I doubt that carriers care one whit about proving they are right. They more likely care about whether they are profitable or not. Business models are neither right nor wrong, only more or less profitable. They might be more interested if you gave them an argument that showed them they would be more profitable pursuing a different model. They certainly aren't going to flock to your door for a model that makes their profits less.

If there is a failure of the Internet, it is in not creating a competitive environment for all of the players, not just those at the top. Datagrams were the answer, of that I am pretty sure, but they were only part of it. Remember, the idea got very little exploration before we became entrenched in the pure form. In the late 70s there was still much talk about the nature of datagram networks within the research community. However, by the early 80s such topics were no longer being looked at and the bunker mentality had settled in.
Take care,
John



  *From:* David Farber [mailto:dave () farber net]
  *Sent:* Tuesday, November 04, 2008 07:07
  *To:* ip
  *Subject:* [IP] Re: the undead urban myth of the LOC/EID split
  NOT AN EASY READ







  Begin forwarded message:



  *From:* Richard Bennett <richard () bennett com>

  *Date:* November 4, 2008 6:26:28 AM EST

  *To:* dave () farber net, ip <ip () v2 listbox com>

  *Subject: Re: [IP] the undead urban myth of the LOC/EID split NOT
  AN EASY READ*



  David Reed, that's an incredibly narrow reading of my earlier
  comments, and quite bizarre. My point was that network
  standards-making - a process that I've been intimately involved
  in for some 25 years, right up to the present - is a political
  exercise in the larger sense of building consensus and exercising
  persuasion, not in the narrow sense of party affiliation or
  loyalty. Day's book documents several important instances of
  sub-standard protocols and design elements being chosen over
  superior ones because it was impossible to move the consensus
  along toward the better conclusion in the time at hand. I've
  certainly seen plenty of battles between one faction and another
  that were motivated by hidden corporate or anti-corporate
  agendas, most recently in a battle between a Motorola-sponsored
  faction with a UWB proposal to push against another faction
  sponsored by Intel and TI with a different proposal. The result
  in that case was gridlock. In your days in networking, back in
  the late 70s and early 80s, there were legendary battles between
  so-called "Bellheads" and "Netheads", and many solutions were
  adopted because of their position on a spectrum that ran from one
  pole to the other. And we certainly see the same dynamic today in
  the net neutrality drama where management techniques are decried
  as monopolistic or improper simply because they're favored or
  used by ISPs.


  Engineers are no more immune than anyone else to the forces that
  drive politics (loosely defined) in every sphere of human
  activity: university politics and corporate politics are
  instances of a human desire to maintain membership in a group and
  to achieve power and status by articulating positions popular
  with the group. Network protocols only have value if they're
  adopted by a large group of users and vendors, so the process of
  generating and standardizing them is inherently political in this
  large sense of the word. And just as we often see a dark side of
  human nature expressed in pandering to base emotions in the
  electoral process, we see pandering in the policy debates around
  network protocols and operation. It's become fashionable to decry
  all uses of DPI, for example, because the very name invokes fear
  and uncertainty. But all technologies have both legitimate and
  illegitimate uses, so this broad-brush condemnation is nothing
  more or less than pandering, the lowest form of politics.

  David Farber wrote:




      Begin forwarded message:



      From: "David P. Reed" <dpreed () reed com>

      Date: November 3, 2008 8:18:05 PM EST

      To: dave () farber net

      Cc: ip <ip () v2 listbox com>

      Subject: Re: [IP] Re:   the undead urban myth of the LOC/EID
      split NOT AN EASY READ



      Wait just one minute.   Bennett below says that it is a "fact
      that network architecture is as much a political exercise as
      a technical one, and always has been."

      Oh - I guess all it takes to make something a "fact" is the
      loud assertion by a partisan of "facthood".   So if I were to
      say that it is a "fact that God created the earth in 7 days",
      could I get away with that?



      John Day's book, it should be noted, is not a study in
      history with footnotes to primary sources, nor is it a
      French-style deconstruction of the texts a la Foucault.  John
      makes no claims as to "facts" about network architecture
      always being a political exercise.



      Those of us who actually had a role in designing the Internet
      protocols have widely varying political views.  It was NOT a
      political exercise, any more than the design of the AT&T Bell
      System architecture was a political exercise.



      The design of a system might have political *effects*, but I
      can guarantee all IP readers that there was and remains
      little that is "political" in maintaining the Internet to be
      highly *flexible* and *evolvable* over time, which were the
      original design points.



      (I personally have been called both a communist and a
      libertarian based on the same set of facts - my contributions
      to part of the Internet design.  I am neither - my politics
      are probably closest to those of Ralph Waldo Emerson, if
      anyone cares).



      Making the design of the Internet political is an agenda that
      only lunatics like Mr. Bennett see, from their paranoid world
      view.



      David Farber wrote:




          Begin forwarded message:



          From: Richard Bennett <richard () bennett com>

          Date: November 3, 2008 6:33:21 PM EST

          To: dave () farber net

          Subject: Re: [IP] the undead urban myth of the LOC/EID
          split NOT AN EASY READ



          Dave -



          Feel free to share this with IP if you wish.



          I read John's book this weekend, in electronic form from
          the Santa Clara County Library in Silicon Valley. Having
          read most of the books ever written on the Internet, both
          of the technical variety and the public policy primers,
          and having been involved in protocol standards from the
          1980s to the present, I feel I can say with reasonable
          confidence that "Patterns in Network Architecture" is the
          most important book on network protocols in general and
          the Internet in particular ever written. As the passage
          below indicates, it's not easy going for the
          non-technical crowd, who will certainly find much of the
          discussion excessively detailed. But John places the
          protocols in their proper socio-historical context for
          the first time. Readers, even the uninitiated, should
          take away from the book an appreciation for the fact that
          network architecture is as much a political exercise as a
          technical one, and always has been.





          At a time when public policy makers are literally
          inundated with opinion about the Internet's design and
          social implications, it's important to peel away the
          metaphors and  analogies and take a look at how it really
          works, what it does, what it doesn't do, what it could do
          a lot better, and how it got the way it is. John Day
          blazes a trail to that kind of understanding. It's an
          excellent book, even though I may disagree with some of
          his analysis of the Early Wittgenstein and a few other
          things.



          Regarding the discussion below, it may be easier to
          follow if we take the example of multi-homing or mobility
          and trace it through IP address assignment, path
          discovery, and transit, contrasting what we'd like to see
          with what we do see. In the present incarnation, we see
          the problem begins with IP address assignment to a MAC
          address, continues with DNS pointing to a location,
          continues with BGP advertising a route to a location, and
          ends with some sort of re-direction. That's IP. In XNS,
          the process is a bit different, and that difference
          highlights the problem with IPv4 that is only exacerbated
          in IPv6.



          RB



          David Farber wrote:




              Begin forwarded message:



              From: John Day <jeanjour () comcast net>

              Date: November 3, 2008 10:13:04 AM EST

              To: David Farber <dave () farber net>, Jonathan Smith
              <jms () cis upenn edu>

              Cc: day () bu edu, David Meyer <dmm () 1-4-5 net>

              Subject: Re: [IP] Re: the undead urban myth of the
              LOC/EID split



              Possibly for the IP list.  O'Dell thinks this is too
              much an "inside" account for the list.  I will let
              you be the judge.  It is not an easy topic.  There is
              no simple explanation, especially between loc/id
              split and POA/node.



              Would appreciate your thoughts.



              John



              Let me try to explain the addressing problem.  I
              thought all of this was common knowledge at least
              among the old timers.



              We first realized we had a problem with naming and
              addressing in the ARPANET in 1972, when Tinker AFB
              joined the Net. They wanted redundant IMP
              connections.  I remember Grossman coming in one
              morning and telling me this.  My first thought was,
              "Right, good idea!", and 2 seconds later, "O, *&@##,
              that isn't going to work!"



              Host addresses were IMP port numbers, so with 2
              interfaces on 2 different IMPs, Tinker would look
              like 2 hosts to the network, not one.  Tinker's host
              would know it had two connections, but the network
              would think it was one connection to two different
              hosts. This is, of course, the multihoming problem.
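
              (Here is the bug in data-structure terms - a toy
              sketch; the types, IMP numbers, and port numbers are
              invented for illustration:)

# The ARPANET bug in miniature: a "host address" is really an IMP
# port, so a dual-homed host is indistinguishable from two hosts.
from dataclasses import dataclass

@dataclass(frozen=True)
class HostAddress:
    imp: int     # which IMP
    port: int    # which port on that IMP

tinker_a = HostAddress(imp=7, port=2)    # first interface
tinker_b = HostAddress(imp=12, port=0)   # redundant second interface

print(tinker_a == tinker_b)   # False: the net sees two hosts, not one
# There is no name for "the Tinker host" itself, so traffic bound to
# tinker_a cannot fail over to tinker_b.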



              Had we blown it?  No, there were a lot of things we
              didn't do in that first attempt!  We had a lot more
              important problems on our plate.  In those days, just
              moving data between very different computers was a
              major accomplishment.  We knew the naming stuff was
              hard and this was an experiment.  We could deal with
              that later.  Yea, yea, I know.  Famous last words! ;-)



              But the answer was obvious.  We were all OS guys.  We
              had seen this problem before.  We needed a logical
              address space over the physical address space.  And we
              also knew that we needed application names as well.
              Just as OSs require three levels of names, networks
              would too.  This well-known-socket business we had
              done was just a kludge so we could demonstrate the
              first 3 applications we had up and running.
              Multihoming was a symptom of a much more fundamental
              missing piece of the overall design.  But we would get
              to it sooner or later.  (Right, more famous last
              words.)



              It didn't seem like a big deal.  Certainly not enough
              to bother writing a paper on it.  For some reason, it
              took 10 years before Jerry Saltzer wrote it up and
              published it; it was later circulated as RFC 1498.
              Jerry got it right except for one little piece, which
              hadn't happened yet.  He describes three levels of
              names for different things at different layers in a
              network architecture:



              Application names, which are location independent.

              Node addresses, which are location dependent.

              Point of attachment (POA) addresses, which may or may
              not be location dependent.

              And mappings between them.



              In general, the scope of the layers increases as you
              go up.



              (Draw a picture or see the figures in my book.  It
              will be easier to visualize what is coming.  Don't
              label the layers; we don't care what they are called.)



              We have called the function that maps between
              application names and node addresses a directory
              function.  (Not to be confused with X.500.  The
              terminology was in use a decade or more before that.)



              The mapping of node to POA is generally part of
              routing.  In this scheme routes are sequences of node
              addresses.  This we had understood since 1972.  I say
              "we" meaning people I worked around. Clearly not
              everyone did.  This is what you get for assuming it
              is obvious.  ;-)  (BTW, for the curmudgeons I am not
              claiming I came up with this before Saltzer.  Quite
              the opposite, I am claiming that several of us saw
              the broad outlines of what was needed.  It took
              Saltzer to make it concrete.  Although I wish he had
              been a little more concrete about what a POA and node
              address were.)
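
              (The scheme reduces to two mapping tables plus routes
              over node addresses.  A toy sketch, with all names
              invented for illustration:)

# Saltzer's three levels as two tables plus routes over node addresses.

directory = {                 # application name -> node address
    "mail-service": "node-17",
}
attachments = {               # node address -> points of attachment
    "node-17": ["imp7:2", "imp12:0"],    # multihomed: two POAs
    "node-3":  ["imp2:1"],
}
routes = {                    # routes are sequences of NODE addresses
    ("node-3", "node-17"): ["node-3", "node-9", "node-17"],
}

def resolve(app_name, source_node):
    node = directory[app_name]            # location-independent name
    path = routes[(source_node, node)]    # routing over node addresses
    return path, attachments[node]        # POA resolved last

print(resolve("mail-service", "node-3"))
# (['node-3', 'node-9', 'node-17'], ['imp7:2', 'imp12:0'])
# Either POA reaches node-17: multihoming falls out of the mappings.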



              So the problem with the ARPANET/Internet is that we
              name the point of attachment (twice), but nothing
              else.  Why twice?  The MAC address does the same
              thing.  They both name the interface between the wire
              and the system.  Until CIDR it was no harder to route
              on MAC addresses than on IP addresses, since they
              weren't really addresses anyway, i.e., they weren't
              location-dependent.  While we have something that is
              sort of an application name in URLs, it isn't really
              one.  There is too much "path" in a URL for it to be
              an application name.  (More on this later.)



              Around this time, we learned a few other things about
              the problem:



              1)  Addresses only had to be unambiguous within the
              scope of the layer in which they were used.

              2)  Naming the host was irrelevant to the naming and
              addressing problem as far as communications was
              concerned.  A host name might be useful for network
              management problems but it was merely coincidental to
              the communications problem.  For communications, one
              is at least naming the protocol state machine.
              Thinking of it as a host name implied constraints
              that would only get in the way.

              3)  Embedding a lower layer address in a higher layer
              address made it route dependent, which is what we
              needed to avoid (see below).



              Many of us had always known that the ARPANET/Internet
              was incomplete.  We didn't fix it with IPv4 because (I
              think) we felt that we didn't really have enough
              understanding of the whole naming and addressing
              problem yet (this was 1976 or so) and we didn't want
              to fix it the wrong way.  Anyway, this was still
              mostly an experimental network.  It wasn't meant to be
              in production.  We could do that later.



              This is why, starting around 1980, the small group in
              OSI that was doing connectionless insisted that the
              network layer name the node.  It wasn't a phone
              company thing (clearly not!!); it was fixing something
              from the early ARPANET that we had not had an
              opportunity to fix yet.  Mostly it was Internet people
              who understood and pushed it in OSI, not the
              Europeans.  Several European positions wanted OSI to
              have well-known sockets and name the interface.  I
              made sure it didn't creep into the Reference Model,
              and Lyman, Oran, Piscitello, etc. made sure it wasn't
              in the protocol.



              This, of course, was all thrown out the window by the
              IPng process, which insisted that we go ahead with
              half a naming and addressing architecture.  (At the
              time, I don't think there were 2 dozen people in the
              IETF who understood naming and addressing.  The
              failure of a university education.)  I have never
              understood the IETF's reaction to these things.
              Rather than "you blew it, let us show you how to do
              that right," their reaction has been: if They did it,
              we won't, even if it means cutting off your nose to
              spite your face.  The sociologists will probably
              explain it to us some day.



              Once it was decided that IPng would name the
              interface, we were pretty well stuck, on the road to
              where we are today.  Not to put words in O'Dell's
              mouth, but I always thought 8+8 was an attempt at some
              sort of fix, even if it was a kludge, given that they
              wouldn't do it right; perhaps later we could move it
              closer to right.  However, they wouldn't even do 8+8.





              The early drafts of the OSI Model also made the error
              of building the (N)-address from the (N-1)-address,
              like embedding MAC addresses in v6.  (This is one of
              those things that looks obvious on the surface, and
              when you get into it, you realize it is just plain
              wrong - a bit like Aristotelian physics: seems like
              common sense until you test it.)  We uncovered that
              problem around '82 doing the Naming and Addressing
              Addendum to the RM and fixed it.  Why this is a
              problem in networks and not in OSs is also in the
              book.  Suffice it to say here that this makes the
              address into a *pathname* through the stack - path
              dependent just at the point it shouldn't be.  It makes
              the address name the interface even if you thought it
              didn't.  (Now some of you will say, but I don't have
              to interpret it that way; it still will name the node.
              Correct - if *everyone* obeys the rules.  But some
              hot-shot is going to assume he knows better and then
              complain like hell when his thing doesn't work
              somewhere.  The best way to keep them honest is to not
              let them be dishonest.)
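
              (For a concrete instance of the embedding mistake,
              consider how IPv6 stateless autoconfiguration
              historically built the interface identifier from the
              MAC address.  A sketch of the EUI-64 construction,
              abbreviated and illustrative only:)

# Sketch: deriving an IPv6 interface id from a MAC address (EUI-64).
# Real rules: insert ff:fe in the middle and flip the universal/local bit.

def eui64_from_mac(mac: str) -> str:
    """Expand a 48-bit MAC into a 64-bit interface identifier."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    b = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    return ":".join(f"{b[i] << 8 | b[i + 1]:x}" for i in range(0, 8, 2))

mac = "00:1a:2b:3c:4d:5e"              # the POA's name (one interface)
print(f"fe80::{eui64_from_mac(mac)}")  # fe80::21a:2bff:fe3c:4d5e
# The "node" address now literally contains the interface's name: swap
# the NIC, or move to a multihomed host's other interface, and the
# address changes. That is exactly the path dependence described above.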



              This pathname property is the one thing you don't want
              in a network.  It works in an OS because there is only
              one way to get anywhere.  But in a network (even in a
              network stack) there may be more than one way to get
              somewhere.  So addresses in different layers have to
              be completely independent to preserve path
              independence.  Which brings us to the piece that was
              missing in Saltzer's analysis:



              The missing piece that hadn't happened when Saltzer
              wrote was multi-path routing: more than one path to
              the next hop.  This turns out to be one of those
              little things that opens up considerable insight.  If
              we include it in his model, then we need the node-to-
              POA mapping for all NEAREST neighbors.  So calculating
              a route is *logically*: calculate the route to the
              destination using the routing table information, find
              the next hop, then choose which path to get to the
              next hop.



              Clearly you don't build it this way.  You create a
              forwarding table and use it the way you do now.
              Although, there is no reason one might not do a
              forwarding table update that just changes the node-to-
              POA mapping without recalculating routes.
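
              (A toy sketch, with invented names, of a forwarding
              table that keeps the two mappings separate:)

# Forwarding state with the node/POA split kept explicit.

forwarding = {"node-17": "node-9"}          # destination -> next-hop NODE
neighbor_poas = {"node-9": ["if0", "if2"]}  # next-hop node -> paths to it

def forward(dest):
    next_hop = forwarding[dest]             # routing decision (stable)
    return neighbor_poas[next_hop][0]       # local path choice per hop

print(forward("node-17"))                   # if0

# A link to node-9 fails: update only the node->POA mapping.
neighbor_poas["node-9"] = ["if2"]
print(forward("node-17"))                   # if2 -- no route recalculation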



              But what is interesting is that this mapping (node to
              POA of nearest neighbors) is exactly the same as the
              application-name-to-node-address mapping, i.e., the
              directory.  Those are all *nearest neighbors* at that
              layer too!  The whole structure is relative.  One
              layer's node address is the point of attachment for
              the layer above.  And it repeats, although not
              necessarily in the obvious way.  (That is what AS
              numbers were trying to tell you.)



              With a structure like this, mobility is nothing more
              than dynamic multihoming.  And several other things
              fall out easily, again see the book.





              So here we are, 15 years after IPng, and v6 doesn't
              solve any of these problems.  No surprise.  It was
              purposely designed not to solve any of these problems.



              Some have noted that the IPv6 group thought this was
              just a data plane problem and ignored the so-called
              control plane.  (Sorry, but I balk at the use of this
              phone company terminology; it confuses the issues.)
              What sheer incompetence!  As Radia points out in the
              2nd edition of her book, if you don't like NATs, you
              should have adopted CLNP.  It was already in the
              routers.  In other words, we could have spent the last
              15 years on transition instead of on a monumental
              waste of money, time, and effort.  Anyone who tried to
              explain these problems to the IPv6 group was simply
              labeled a sore loser.



              Throughout the late 80s and 90s, if there was a
              discussion of addressing, someone (usually from MIT)
              would say, you have to read Saltzer's paper.  During
              the NSRG meetings in 2001-2 it was brought up
              frequently.  Then suddenly it was dropped.  Never
              mentioned.  When I pressed Noel on it not long ago, he
              said "they had moved beyond it."  Seemed strange,
              since loc/id was clearly not an answer, not even a
              step to a solution.  At least Saltzer looked at the
              whole architecture, while loc/id only looked at
              Network/Transport.



              It begins to seem that loc/id split was invented so
              they wouldn't have to admit they were wrong and simply
              name the node and get on with it.  They seem to have
              an inkling that they missed something important with
              v6, and they were desperately trying to find a way to
              retrofit it before it was too late.  The trouble is
              that loc/id split isn't the whole problem.  Loc/id
              split (as near as I can tell) still does not name the
              node, but some application-flow-endpoint.  Whatever it
              is, a node address is necessary, and it will need to
              be location-dependent and aggregatable, and it isn't.





              So what is really wrong with loc/id split?  Let's look
              at it.  Suppose the IP address (the loc) remains a POA
              on which we do routing and, giving them the benefit of
              the doubt, the id is a node address (in some papers
              the "id" seems to be more an
              application-connection-endpoint or something similar).
              Then the loc is the provider-dependent identifier and
              the id is the provider-independent name.  But it is
              flat.  If multihoming is widespread, it is likely that
              several end systems in the same area will be using the
              same set of providers for multihoming.  Aren't the
              routers going to want to be able to aggregate the
              lookups for these to figure out where to send them?
              Not if the id is a flat name.  Remember, the relation
              of POA and node is relative.  What is needed for one
              is going to be true for the other.  Using a flat id
              assumes that it won't be needed much.  But what we are
              seeing is that multihoming is becoming very
              widespread, and I don't think we have seen anything
              near the end of it.  The thing is that the node
              address (id) must be aggregatable as well.  In any
              case, to build in an identifier at this level that
              does not facilitate scaling seems as short-sighted as
              v6 was to begin with.
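
              (A toy count - prefixes and ids invented - of what a
              flat id space does to router state:)

# Router-table size: aggregatable node addresses vs a flat id space,
# for 1000 multihomed end systems in one region.

aggregated = {"39.12/16": "next-hop-A"}    # one covering prefix entry

flat = {f"id-{i:04x}": "next-hop-A" for i in range(1000)}

print(len(aggregated), len(flat))          # 1 1000
# With location-dependent node addresses the whole region collapses to
# one entry; with flat ids the table grows linearly with end systems.
# The scaling pressure is not removed, just pushed up one level.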



              But now, is it too late?  At least for IPv6 it is.
              The Internet architecture has been fundamentally
              flawed from the beginning.  To be fair, it is a demo
              that never got finished.  Basically this is like
              trying to build an OS for a huge set of applications
              with no virtual address space or application name
              space.  Or as I say in the book, what we have is DOS,
              what we need is Multics, but we would settle for UNIX.
              The Internet architecture is equivalent to DOS.





              I hope this helps.  The medium makes it a bit hard to
              explain.



              Take care,

              John






























--
Richard Bennett












-------------------------------------------
Archives: https://www.listbox.com/member/archive/247/=now
RSS Feed: https://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com

