Interesting People mailing list archives

ISPs helping with botnets


From: Dave Farber <dave () farber net>
Date: Mon, 12 Oct 2009 14:06:39 -0400





Begin forwarded message:

From: Jonathan Zittrain <zittrain () law harvard edu>
Date: October 12, 2009 13:17:45 EDT
To: dave () farber net
Subject: ISPs helping with botnets


Dear Dave,

For IP, if you think it's helpful --

I'm finding myself cheering Comcast on in its attempts to screen its own subscribers' machines for malware-generating activity. There may well be issues with tactics -- there are good and bad ways to do this, both technically (how best to detect; how to screen in ways that don't compromise privacy) and procedurally (e.g. how to notify users that they appear to be actively hosting malware; what grace period, if any, to give them before cleaning up; how to handle appeals; (again) how to screen in ways that don't compromise privacy). But the concept is, I think, sound (or more accurately, least worst). I wrote about this topic about two years ago at <http://yupnet.org/zittrain/archives/18#36>, worrying that sometimes e2e theory -- which I love -- can, ironically, conflict with a higher-order goal of ensuring a generative network: one where anyone can run code without interference from gatekeepers. Especially as we move towards cloud computing -- typically, code executed only with the ongoing permission of a vendor -- or "tethered devices," where we may hold the computing object in hand but it's controlled from afar by a vendor -- we must ensure that the more traditional PC environment doesn't drive its users away.

Excerpts follow; footnotes inline at <http://yupnet.org/zittrain/archives/18#36>:

Apart from hardware and software makers, there is another set of technology providers that reasonably could be asked or required to help: Internet Service Providers. So far, like PC, OS, and software makers, ISPs have been on the sidelines regarding network security. The justification for this -- apart from the mere explanation that ISPs are predictably and rationally lazy -- is that the Internet was rightly designed to be a dumb network, with most of its features and complications pushed to the endpoints. The Internet’s engineers embraced the simplicity of the end-to-end principle (and its companion, the procrastination principle) for good reasons. It makes the network more flexible, and it puts designers in a mindset of making the system work rather than anticipating every possible thing that could go wrong and trying to design around or for those things from the outset. Since this early architectural decision, “keep the Internet free” advocates have advanced the notion of end-to-end neutrality as an ethical ideal, one that leaves the Internet without filtering by any of its intermediaries. This use of end-to-end says that packets should be routed between the sender and the recipient without anyone stopping them on the way to ask what they contain. Cyberlaw scholars have taken up end-to-end as a battle cry for Internet freedom, invoking it to buttress arguments about the ideological impropriety of filtering Internet traffic or favoring some types or sources of traffic over others.

These arguments are powerful, and end-to-end neutrality in both its technical and political incarnations has been a crucial touchstone for Internet development. But it has its limits. End-to-end does not fully capture the overall project of maintaining openness to contribution from unexpected and unaccredited sources. [...]

According to end-to-end theory, placing control and intelligence at the edges of a network maximizes not just network flexibility, but also user choice. The political implication of this view -- that end-to-end design preserves users’ freedom, because the users can configure their own machines however they like -- depends on an increasingly unreliable assumption: whoever runs a machine at a given network endpoint can readily choose how the machine will work. To see this presumption in action, consider that in response to a network teeming with viruses and spam, network engineers recommend more bandwidth (so the transmission of “deadweights” like viruses and spam does not slow down the much smaller proportion of legitimate mail being carried by the network) and better protection at user endpoints, rather than interventions by ISPs closer to the middle of the network. But users are not well positioned to painstakingly maintain their machines against attack, leading them to prefer locked-down PCs [or cloud computing], which carry far worse, if different, problems. Those who favor end-to-end principles because an open network enables generativity should realize that intentional inaction at the network level may be self-defeating, because consumers may demand locked-down endpoint environments that promise security and stability with minimum user upkeep. This is a problem for the power user and consumer alike.

The answer of end-to-end theory to threats to our endpoints is to have them be more discerning, transforming them into digital gated communities that must frisk traffic arriving from the outside. The frisking is accomplished either by letting next to nothing through -- as is the case with highly controlled information appliances -- or by having third-party antivirus firms perform monitoring, as is done with increasingly locked-down PCs. Gated communities offer a modicum of safety and stability to residents as well as a manager to complain to when something goes wrong. But from a generative standpoint, these moated paradises can become prisons. Their confinement is less than obvious, because what they block is not escape but generative possibility: the ability of outsiders to offer code and services to users, and the corresponding opportunity of users and producers to influence the future without a regulator’s permission. When endpoints are locked down, and producers are unable to deliver innovative products directly to users, openness in the middle of the network becomes meaningless. Open highways do not mean freedom when they are so dangerous that one never ventures from the house.

Some may cling to a categorical end-to-end approach; doubtlessly, even in a world of locked-down PCs there will remain old-fashioned generative PCs for professional technical audiences to use. But this view is too narrow. We ought to see the possibilities and benefits of PC generativity made available to everyone, including the millions of people who give no thought to future uses when they obtain PCs, and end up delighted at the new uses to which they can put their machines. And without this ready market, those professional developers would have far more obstacles to reaching critical mass with their creations.

Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities. Under such a principle, for example, it may be preferable in the medium term to screen out viruses through ISP-operated network gateways rather than through constantly updated PCs. Although such network screening theoretically opens the door to additional filtering that may be undesirable, this speculative risk should be balanced against the very real threats to generativity inherent in PCs operated as services rather than products. Moreover, if the endpoints remain free as the network becomes slightly more ordered, they remain as safety valves should network filtering begin to block more than bad code.

In the meantime, ISPs are in a good position to help in a way that falls short of undesirable perfect enforcement, and that provides a stopgap while we develop the kinds of community-based tools that can facilitate salutary endpoint screening. There are said to be tens of thousands of PCs converted to zombies daily, and an ISP can sometimes readily detect the digital behavior of a zombie when it starts sending thousands of spam messages or rapidly probes a sequence of Internet addresses looking for yet more vulnerable PCs. Yet ISPs currently have little incentive to deal with this problem. To do so creates a two-stage customer service nightmare. If the ISP quarantines an infected machine until it has been recovered from zombie-hood, cutting it off from the network in the process, the user might claim that she is not getting the network access she paid for. And quarantined users will have to be instructed how to clean their machines, which is a complicated business. This explains why ISPs generally do not care to act when they learn that they host badware-infected Web sites or consumer PCs that are part of a botnet.
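To make the detection described above concrete, here is a minimal sketch of the kind of flow-level heuristic an ISP might apply: flag a host that fans out SMTP connections to an unusually large number of distinct destinations, or that probes a long run of consecutive addresses. The record format, the function name flag_suspects, and the thresholds are all illustrative assumptions for the sketch, not a description of any particular ISP's practice.

    # Illustrative zombie-detection heuristic over NetFlow-style records.
    # The record format and thresholds are assumptions for this sketch.
    from collections import defaultdict
    from ipaddress import ip_address

    SMTP_PORT = 25
    SPAM_THRESHOLD = 500   # distinct SMTP destinations per window (assumed)
    SCAN_THRESHOLD = 100   # length of a consecutive-address probe run (assumed)

    def flag_suspects(flows):
        """flows: iterable of (src_ip, dst_ip, dst_port) for one time window."""
        smtp_dests = defaultdict(set)   # src -> distinct SMTP destinations
        all_dests = defaultdict(set)    # src -> all destinations, as integers

        for src, dst, port in flows:
            all_dests[src].add(int(ip_address(dst)))
            if port == SMTP_PORT:
                smtp_dests[src].add(dst)

        suspects = {}
        # Bulk spam: one source talking SMTP to hundreds of distinct servers.
        for src, dests in smtp_dests.items():
            if len(dests) >= SPAM_THRESHOLD:
                suspects[src] = "bulk SMTP fan-out"
        # Scanning: one source probing a long run of consecutive addresses.
        for src, addrs in all_dests.items():
            longest, run, prev = 1, 1, None
            for a in sorted(addrs):
                run = run + 1 if prev is not None and a == prev + 1 else 1
                longest = max(longest, run)
                prev = a
            if longest >= SCAN_THRESHOLD:
                suspects.setdefault(src, "sequential address scan")
        # Candidates for notification or quarantine, not proof of infection.
        return suspects

    if __name__ == "__main__":
        base = int(ip_address("198.18.0.0"))
        sample = [("10.0.0.7", str(ip_address(base + i)), 25) for i in range(600)]
        print(flag_suspects(sample))   # {'10.0.0.7': 'bulk SMTP fan-out'}

The output of such a filter might feed the notification, grace-period, and appeal procedures discussed earlier rather than trigger automatic disconnection.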

Whether through new industry best practices or through a rearrangement of liability motivating ISPs to take action in particularly flagrant and egregious zombie situations, we can buy another measure of time in the continuing security game of cat and mouse. Security in a generative system is something never fully put to rest; it is not as if the “right” design will forestall security problems forevermore. The only way for such a design to be foolproof is for it to be nongenerative, locking down a computer the same way that a bank would fully secure a vault by neither letting any customers in nor letting any money out. Security of a generative system requires the continuing ingenuity of a few experts who want it to work well, and the broader participation of others with the goodwill to outweigh the actions of a minority determined to abuse it.

A generativity principle suggests additional ways in which we might redraw the map of cyberspace. First, we must bridge the divide between those concerned with network connectivity and protocols and those concerned with PC design -- a divide that end-to-end neutrality unfortunately encourages. Such modularity in stakeholder competence and purview was originally a useful and natural extension of the Internet’s architecture. It meant that network experts did not have to be PC experts, and vice versa. But this division of responsibilities, which works so well for technical design, is crippling our ability to think through the trajectory of applied information technology. Now that the PC and the Internet are so inextricably intertwined, it is not enough for network engineers to worry only about network openness and assume that the endpoints can take care of themselves. It is abundantly clear that many endpoints cannot. The procrastination principle has its limits: once a problem has materialized, the question is how best to deal with it, with options ranging from further procrastination to effecting changes in the way the network or the endpoints behave. Changes to the network should not be categorically off the table.

Second, we need to rethink our vision of the network itself. “Middle” and “endpoint” are no longer subtle enough to capture the important emerging features of the Internet/PC landscape. It remains correct that, from a network standpoint, protocol designs and the ISPs that implement them are the “middle” of the network, as distinct from PCs that are “endpoints.” But the true import of this vernacular of “middle” and “endpoint” for policy purposes has lost its usefulness in a climate in which computing environments are becoming services, either because individuals no longer have the power to exercise meaningful control over their PC endpoints, or because their computing activities are hosted elsewhere on the network, thanks to “Web services.” By ceding decision-making control to government, to a Web 2.0 service, to a corporate authority such as an OS maker, or to a handful of security vendors, individuals permit their PCs to be driven by an entity in the middle of the network, causing their identities as endpoints to diminish. The resulting picture is one in which there is no longer such a clean separation between “middle” and “endpoint.” In some places, the labels have begun to reverse.

Abandoning the end-to-end debate’s divide between “middle” and “endpoint” will enable us to better identify and respond to threats to the Internet’s generativity. In the first instance, this might mean asking that ISPs play a real role in halting the spread of viruses and the remote use of hijacked machines.


Jonathan Zittrain
Professor of Law
Harvard Law School | Harvard Kennedy School of Government
Co-Founder, Berkman Center for Internet & Society
<http://cyber.law.harvard.edu>




-------------------------------------------
Archives: https://www.listbox.com/member/archive/247/=now
RSS Feed: https://www.listbox.com/member/archive/rss/247/
Powered by Listbox: http://www.listbox.com
