Interesting People mailing list archives

Re: GENI discussion


From: David Farber <dave () farber net>
Date: Tue, 22 May 2007 17:40:41 -0400



Begin forwarded message:

From: Bob Frankston <bob37-2 () bobf frankston com>
Date: May 22, 2007 4:50:28 PM EDT
To: dave () farber net, ip () v2 listbox com
Cc: "'Ellen Witt Zegura'" <ewz () cc gatech edu>, "'Scott Shenker'" <shenker () icsi berkeley edu>, "'Lazowska'" <lazowska () cs washington edu>, dpreed () reed com
Subject: RE: [IP] GENI discussion

I’ll admit that much of this is an expression of my own frustrations as I try to explain why Internet Inc is the wrong model, just as I try to explain why broadband is the anti-Internet. I see GENI as a symptom of incrementalism even though it’s just a funding mechanism for research. I realize that it is not really supposed to be the next Internet, but it is nonetheless filling that role by being a focus of Internet research. I don't blame the sponsors for the hype, but it's still useful to look at GENI as a proxy for the idea that we should look at the network architecture rather than at how we use connectivity. I’m also biased towards solutions that empower the individual rather than institutions, be they corporations or governmental agencies.



If there are indeed equivalent “from the edge” efforts, I would like to know about them. For now I’m treating GENI as the face of Internet research whether or not it’s meant to be. It’s a useful foil for contrasting the network-centric approach with alternative approaches.



If GENI is indeed "a facility for experimenting with new Internet architectures" then the absence of the null case -- an architecture-free alternative that simply makes better use of whatever connectivity exists -- is telling. This is not necessarily the fault of those sponsoring GENI or other such efforts, except as a reminder of the lack of funding for approaches that aren't as well-defined.



I've been arguing that the Internet has been defined by the end-to-end argument as a constraint -- that is, the inability to depend on any particular architecture. Attempts to improve the architecture with smart protocols like multicast and to extend it with IPv6 have been problematic, while P2P approaches have met with widespread adoption. These P2P efforts are the real inheritors of the Internet's end-to-end constraint.



What most concerns me is the seeming absence of this defining constraint in the research efforts. It leads to attempts to solve complex problems like phishing and scaling in sterile environments that can't exhibit these problems. There seems to be a presumption that issues like security can be addressed in the physical architecture of a network.



I cited the light switch/fixture problem because it seems so trivial yet demands a solution that does not depend on the network as a solution-provider. All we can presume is two consenting devices that might be able to exchange messages. To be more precise, one can indeed assume a network of services, but what makes this problem so interesting is that it arises because I have not permitted myself to give in to the overwhelming temptation to rely on network services, and thus we can decouple the system elements. Such solutions are not only able to exhibit Moore's Law effects; they are also more stable.
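To make the constraint concrete, here is a minimal sketch of the two-consenting-devices idea in Python. It is purely illustrative -- the port number, the message format, and the "hall-light" name are my own assumptions, not any real pairing protocol. The only thing the switch and the fixture depend on is the ability to exchange messages on a shared link; there is no directory, no naming service, and no provider:

    import socket
    import sys

    PORT = 50007  # arbitrary unprivileged port; an assumption of this sketch

    def fixture():
        # The fixture just listens for toggle messages bearing its name.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        on = False
        while True:
            msg, _addr = sock.recvfrom(1024)
            if msg == b"toggle hall-light":  # hypothetical pairing name
                on = not on
                print("light is now", "on" if on else "off")

    def switch():
        # The switch broadcasts on the local link; it assumes nothing about
        # the network beyond being able to send a datagram to its peer.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(b"toggle hall-light", ("255.255.255.255", PORT))

    if __name__ == "__main__":
        fixture() if sys.argv[1:] == ["fixture"] else switch()

Running the fixture on one machine and the switch on another on the same link is the whole "installation"; each endpoint gets better as its own hardware improves, which is the Moore's Law effect mentioned above.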



My own "research" consisted of trying to do projects such as home control (and, while at Microsoft, working with Honeywell and Intel and unintentionally learning what doesn't work). It's the same kind of constraint that I used for home networking (no installers or service providers), which arose from my experience with personal computing. I remember efforts like Project Athena failing to "get" personal computing.



I don't think I'm too cynical in observing that well-defined experiments on a network test bed are very appealing to those who need to have fundable projects with deliverables. Congressional scrutiny of funding almost requires local justification of each step, while efforts to improve connectivity at the edge are too easily treated as threatening by those who associate P2P with a loss of control.



The question, then, is where the effort is to solve the problems of making use of whatever transport is available to do our own networking and to create solutions at the edge. Problems like phishing cannot be solved inside the network, nor can they be "solved" in a closed sense. I also argue that efforts to solve scaling problems in a test bed fall into what I consider the trap of trying to solve a problem instead of seeing the opportunity in solutions already available.



Composing local connectivity when we can assume abundant capacity is very different from presuming a scarcity in which a single entity must manage the complex routing through chokepoints. Why aren't we asking why we can't take advantage of the full capacity of the available fiber (and other transports) rather than accepting this synthetic scarcity and reveling in the complex algorithms necessary to work around it?



How do we support projects that aren't as amenable to closed solutions but are still vital? This is a nontrivial question and, perhaps, we have to rely on the by-products of effective marketplaces. This is why individuals can create P2P solutions without organizational funding, though such funding helps in producing professional solutions, as we’ve seen with Skype and with Firefox’s challenge in scaling the effort. That leaves a lot of problems, like phishing, seemingly orphaned because they aren’t as amenable to solutions in isolation.





-----Original Message-----
From: David Farber [mailto:dave () farber net]
Sent: Tuesday, May 22, 2007 07:55
To: ip () v2 listbox com
Subject: [IP] GENI discussion







Begin forwarded message:



From: "Zegura, Ellen Witte" <ewz () cc gatech edu>

Date: May 22, 2007 3:33:53 AM EDT

To: ip () v2 listbox com

Cc: dave () farber net, Scott Shenker <shenker () icsi berkeley edu>, Ed

Lazowska <lazowska () cs washington edu>

Subject: GENI discussion



I’ve seen the IP thread about GENI that was prompted by yesterday’s announcement that BBN will be the GENI Project Office (GPO). Along with Scott Shenker, I am helping lead the GENI Science Council (GSC), the group that represents the research community that will make use of the GENI facility.







I wanted to respond to a few parts of Bob’s message. Primarily, I wanted to make clear that GENI is a facility for experimenting with new Internet architectures, not a proposal to BE the new architecture. The reason for slicing is to allow multiple experiments to run on the same physical resources at the same time. The slices are managed so that experimenters don’t step on one another during the testing phase. There are other funding programs in NSF that will fund the research on alternative architectures and ideas that can be tested in the GENI facility. These include the FIND program, and many other networking and distributed systems programs. Of course, one could propose that GENI itself be the new architecture – and surely some will – and then folks like Bob and others could reasonably debate the wisdom of that. We’re not there yet, though, and it’s important to keep the distinction between GENI as a facility and GENI as a proposed new architecture.
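Since GENI’s actual mechanisms aren’t described here, a toy sketch may still make the slicing idea concrete. The Python class and names below are hypothetical illustrations only, not GENI’s design; the point is just that concurrent experiments reserve isolated shares of a common substrate instead of contending for it:

    # Toy illustration of slicing; real slice management (virtualization,
    # isolation, scheduling) is far richer than a capacity ledger.
    class Substrate:
        """A shared physical resource, e.g. a testbed link."""
        def __init__(self, capacity_mbps):
            self.capacity_mbps = capacity_mbps
            self.slices = {}  # experiment name -> reserved Mbps

        def allocate(self, experiment, mbps):
            # Refuse reservations that would let experiments step on
            # one another during the testing phase.
            if sum(self.slices.values()) + mbps > self.capacity_mbps:
                raise RuntimeError("substrate exhausted; experiment must wait")
            self.slices[experiment] = mbps

        def release(self, experiment):
            self.slices.pop(experiment, None)

    link = Substrate(capacity_mbps=1000)
    link.allocate("alternative-routing-experiment", 400)
    link.allocate("edge-instrumentation-experiment", 300)
    # A third request for 400 Mbps would be refused rather than allowed
    # to disturb the slices already running.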







With respect to the suggestion that money would be better spent on research that improves our understanding of how to operate well from the edge, I guess I view that as complementary. GENI is likely to support experiments that modify edges and use a standard IPv4 or v6 core, perhaps with instrumentation inside the network that would allow more insight into what helps edge-controlled apps work better.







Ellen

-------------------------------------------
Archives: http://v2.listbox.com/member/archive/247/=now
RSS Feed: http://v2.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com

