Interesting People mailing list archives

Fixing the Internet might break it worse than it's broken now


From: David Farber <dave () farber net>
Date: Tue, 17 Feb 2009 19:43:33 -0500



Begin forwarded message:

From: Jonathan Zittrain <zittrain () law harvard edu>
Date: February 17, 2009 9:08:28 AM EST
To: David Farber <dave () farber net>
Subject: Re: Fixing the Internet might break it worse than it's broken now


Hi IP'ers and all,

Suppose that we agree on a rough (to some, controversial) value judgment: the Internet's architectural openness (its "generativity") -- and its progression into the mainstream -- has been a genuinely awesome thing, facilitating radical (and mostly good) revolutions in how we express and entertain ourselves, how we learn, how we shop, essentially in how meaning is made.

Then: is there a signal threat to it apart from the ones arising from people (and regulators) who reject or are harmed by the Net's openness even when it's functioning as designed? I.e., apart from those who don't share the value judgment about openness?

I gather that some say no. David Akin and David Isenberg, and perhaps Gene S. (although he sort of seems to say "a pox on both your houses"), say that for all its vulnerabilities, the Internet manages to keep on ticking, and that suggestions of a growing -- perhaps existential -- threat to its functioning come from anti-libertarian control freaks and mercenary security vendors -- those who benefit from rejecting its generative premise rather than those who want to save it.

I say yes. It's a tough empirical question and there is plenty of room for disagreement -- much of this is crystal ball gazing -- but it clouds the ball further to argue that anyone who tries to describe the threat is only doing so because he or she seeks lockdown. I worry both about the problem that will, if no better alternatives are offered, drive people away from open systems, and about life in the gated communities that will welcome them.

So what's the problem? As Gene says, the issue is not only with networks that are not secure, but also the endpoints: reprogrammable machines, PCs, that provide the basis for the botnets that can wreak various forms of havoc. It's a miracle and an absurdity that infused in homes, workplaces, and laps around the world are PCs that can be repurposed in an instant, running code from the other side of the world without the vendor of the machine or its operating system, or the network service provider, having anything to say about it. That's how an innovation like Skype -- or, for that matter, a Web browser -- can come about and hit prime time. It's also how worms and viruses spread, and it's not just about OS bugs: many of these come in through the front door, with the user choosing to run new code without understanding what's hidden within it. I remember Microsoft's "first immutable law of security":

"There's a nice analogy between running a program and eating a sandwich. If a stranger walked up to you and handed you a sandwich, would you eat it? Probably not. How about if your best friend gave you a sandwich? Maybe you would, maybe you wouldn't -- it depends on whether she made it or found it lying in the street. Apply the same critical thought to a program that you would to a sandwich, and you'll usually be safe." <http://www.microsoft.com/technet/archive/community/columns/security/essays/10imlaws.mspx>

This is well intentioned, of course -- we know what the author is trying to say by it -- but it's also crazy. Millions of years of evolution have helped us intuitively discern a good sandwich from a rotten one, and we don't continually ingest little bits of food every few minutes as we walk down the street. There's no such help with code. That's why for 99.9% of the people out there, the idea of merrily running any code they see is already a fiction. (Most of the 0.1% are people who just don't care if their PCs melt, rather than geeks who know how to secure them.) People turn to anti-virus vendors, firewall makers, and all the other patchy tech that Gene rightly dismisses as baling wire and twine. If that's all they've got, people will be ripe for persuasion that they should lock themselves down more, opting for sterile environments like the Kindle for more and more tasks, or hybrid environments like that of the iPhone or Facebook Apps: outside code can run, but only with the prospective and ongoing permission of the platform operator. These are attractive solutions -- I love my iPhone -- but they are worrisome in the big picture, especially as the model for them begins to predominate across all software.

Already, many of the otherwise-generative machines out there are being locked down by the boxes' actual owners: PCs in corporate environments, schools, cyber cafes, and libraries are frequently unable to run new code without bureaucratized approval. And in the developing world, much of the excitement around the adoption of mobile platforms instead of clunky PCs tends, with a few notable exceptions, to play into the walled gardens. Where demand goes, supply follows: many in the next generation of geeks and tinkerers find these walled gardens to be an unremarkable feature of the landscape. Today's kidz are coding for Facebook and iPhone, not for GNU/Linux or Windows.
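To make that gatekeeping model concrete, here is a minimal, hypothetical sketch in Python -- none of these names correspond to any real platform's API -- contrasting a generative machine, which runs whatever it is handed, with a walled garden, which consults an operator-maintained approval list every time:

    # Hypothetical sketch only: the difference between a generative machine
    # and a gatekept one comes down to a check like this.
    import hashlib

    # Apps the (hypothetical) platform operator has reviewed and approved.
    OPERATOR_APPROVED_HASHES = {
        "placeholder-digest-of-an-approved-app",  # illustrative entry only
    }

    def generative_run(code):
        """The classic PC model: whatever the user hands us, we execute."""
        exec(code)  # nobody asks the OS vendor or the ISP for permission

    def gatekept_run(code):
        """The walled-garden model: code runs only if the operator has blessed it."""
        digest = hashlib.sha256(code.encode()).hexdigest()
        if digest not in OPERATOR_APPROVED_HASHES:
            raise PermissionError("app not approved by the platform operator")
        exec(code)

    if __name__ == "__main__":
        generative_run("print('hello from anywhere')")    # runs unconditionally
        try:
            gatekept_run("print('hello from anywhere')")  # rejected: not on the list
        except PermissionError as err:
            print("blocked:", err)

Because the check runs at every launch, the operator's permission is both prospective and ongoing: an app can be revoked as easily as it was blessed, which is exactly what makes the model attractive to operators and worrisome in the big picture.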

It's not much answer to say: "Well, *I* don't have problems with viruses; it's just losers who don't know how to protect their machines. Let them have a playpen, then." This response reminds me of the end of Atlas Shrugged, when the handful of good capitalists retreat to a golden valley and mow each other's lawns in a new economy, while the rest of the world melts. I don't want an Internet where only the nerds remain. (USENET was fun, but ...)

So, David's subject line sounds right to me: "Fixing the Internet might break it worse than it's broken now." But that doesn't mean that we should accept the status quo. If we do, we'll lose it -- or we'll find that we're one of a comparative handful clinging to it as everyone else migrates away.

What are the solutions that aren't iatrogenic? I'm less sanguine than many on this list that some sort of liability regime for buggy code is the way to go, both because I think it will in many cases lead to less generative platforms and because the problem transcends mere bugs in code. (For a more detailed treatment of this, see <http://yupnet.org/zittrain/archives/18#29>.) And "more training" for users would be great, but seems unrealistic. We need solutions that require only a critical mass of people to implement, rather than counting on lots and lots of people to suddenly become tinkerers themselves -- even as they rightly should enjoy the benefits of an experimentalist culture like that of the Internet and PC. My own ideas run less in the direction of re-architecting the entire Internet, though I'm intrigued by the Clean Slate project and its siblings, like that run by David Clark at MIT. David Isenberg is right that I've suggested there's promise in virtual machine technology that allows promising but suspect code to run in a "red" zone, but this approach also has limits and drawbacks. (Who decides what's red and green when the users' cluelessness is what gives rise to the need for a red zone at all?) See, e.g., <http://yupnet.org/zittrain/archives/18#6>.
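For a rough sense of what routing code into a "red" zone might look like mechanically, here is a sketch that uses a disposable container as a stand-in for a full virtual machine. It assumes Docker is available, and the particular limits chosen are illustrative, not a complete isolation recipe:

    import os
    import subprocess

    def run_in_red_zone(script_path, timeout=30):
        """Run a suspect Python script in a throwaway, network-less container."""
        host_path = os.path.abspath(script_path)
        return subprocess.run(
            [
                "docker", "run", "--rm",   # container is discarded afterwards
                "--network", "none",       # suspect code gets no network access
                "--memory", "256m",        # cap memory
                "--cpus", "0.5",           # cap CPU
                "--read-only",             # no writes to the container filesystem
                "-v", host_path + ":/sandbox/suspect.py:ro",
                "python:3.11-slim",
                "python", "/sandbox/suspect.py",
            ],
            capture_output=True, text=True, timeout=timeout,  # kill runaways
        )

    def run_in_green_zone(script_path):
        """Trusted code runs directly, with the user's full privileges."""
        return subprocess.run(["python", script_path], capture_output=True, text=True)

The sketch shows only the mechanism; the hard question above -- who decides which code is red and which is green -- is a policy question that no amount of sandboxing machinery answers.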

Instead, I think that collecting and making available more data about the shape of the problem can help enormously. We really don't know what's going on out there, and the sooner we can replace speculation with reality -- and not have what little we know be a trade secret! -- the better. See <http://yupnet.org/zittrain/archives/18#48> for more details on how this could work:

Social problems can be met first with social solutions - aided by powerful technical tools - rather than by resorting to law. As we have seen, vandalism, copyright infringement, and lies on Wikipedia are typically solved not by declaring that vandals are breaking laws against “exceeding authorized access” to Wikipedia or by suits for infringement or defamation, but rather through a community process that, astoundingly, has impact.
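Mechanically, the data-sharing piece could start as simply as the following hypothetical sketch, in which security tools on many endpoints report what they see and an aggregator publishes the overall shape of the problem rather than hoarding it. The report format, field names, and class names are invented for illustration, not drawn from any real clearinghouse:

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class BadwareReport:
        reporter_id: str   # anonymized endpoint or scanner
        family: str        # e.g. "worm", "drive-by", "trojaned installer"
        vector: str        # how it arrived: "email", "web", "usb", ...

    class Aggregator:
        def __init__(self):
            self.reports = []  # all BadwareReport submissions received

        def submit(self, report):
            self.reports.append(report)

        def public_summary(self):
            """Publish aggregate counts only -- nothing identifying reporters."""
            return {
                "total_reports": len(self.reports),
                "by_family": dict(Counter(r.family for r in self.reports)),
                "by_vector": dict(Counter(r.vector for r in self.reports)),
            }

    if __name__ == "__main__":
        agg = Aggregator()
        agg.submit(BadwareReport("ep-001", "worm", "web"))
        agg.submit(BadwareReport("ep-002", "trojaned installer", "email"))
        print(agg.public_summary())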

The Google/Stopbadware partnership -- which made news a few weeks ago for reasons unrelated to its core operations -- is one experiment in this area. I'm all for the Net solving its own problems -- someone does always tend to step up. (E.g., thanks, Luis von Ahn, for the CAPTCHA!) Maybe that someone is among us?

There, now, I've gone ahead and ended with the thought that we are the change we've been waiting for. Or is it Ready to Lead? ...JZ


Jonathan Zittrain
Professor of Law
Harvard Law School | Harvard Kennedy School of Government
Co-Founder, Berkman Center for Internet & Society
<http://cyber.law.harvard.edu>





