Interesting People mailing list archives

WORTH READING ISPs helping with botnets


From: David Farber <dave () farber net>
Date: Tue, 13 Oct 2009 16:13:57 -0400



Begin forwarded message:

From: "David P. Reed" <dpreed () reed com>
Date: October 13, 2009 7:40:36 AM EDT
To: David Farber <dave () farber net>
Subject: Re: [IP] ISPs helping with botnets

I'm disappointed with some of Jonathan Zittrain's reasoning in his post below. I'm sure he wrote quickly and had not thought clearly about some of the implications of having ISPs take on a unilateral role in stopping botnets. (As I explain below, there are some roles that ISPs already play - I do think they have a role.)

The end-to-end argument to which my name is attached asks something different from what Jonathan says here: it asks whether the function can be entirely done "in the network", or whether it must be done at the endpoints anyway. If it is a function defined by the endpoints, it is of the latter sort.

Jonathan uses some terminology I don't understand: "end-to-end theory". Perhaps that is a theory he invented? It seems like an unwarranted generalization: that every function should be done only at the endpoints. (It's clear, for example, that "*detection* of congestion" must be done in the network, since increased queueing delay is localized in the network elements, while "*remediation* of congestion" involves the endpoints, which back off; traffic engineering, which reroutes; and business, which purchases or builds more capacity.) Instead, the end-to-end argument argues for putting at the endpoints those functions that can be done at the endpoints, where possible.
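
To make that division of function concrete, here is a toy sketch in Python (purely illustrative - the classes, names, and thresholds are all invented, and this is not any real stack). The queue object stands in for the network element, which is the only place the growing queue can be observed; the sender stands in for the endpoint, which is the only place the sending rate can actually be reduced:

    # Toy sketch: detection of congestion lives in the network element,
    # remediation (backing off) lives at the endpoint.

    class RouterQueue:
        """Network element: the only place where queueing delay is visible."""
        def __init__(self, capacity=100, mark_threshold=80):
            self.depth = 0
            self.capacity = capacity
            self.mark_threshold = mark_threshold

        def enqueue(self):
            """Return True if this packet should carry a congestion signal (an ECN-like mark, or a drop)."""
            if self.depth >= self.capacity:
                return True                      # queue full: count it as a congestion signal
            self.depth += 1
            return self.depth >= self.mark_threshold

        def drain(self, packets_per_round=10):
            self.depth = max(0, self.depth - packets_per_round)

    class AimdSender:
        """Endpoint: remediates congestion by backing off (additive increase, multiplicative decrease)."""
        def __init__(self):
            self.window = 1.0

        def on_feedback(self, congestion_signalled):
            if congestion_signalled:
                self.window = max(1.0, self.window / 2)   # back off
            else:
                self.window += 1.0                        # probe for more capacity

    if __name__ == "__main__":
        queue, sender = RouterQueue(), AimdSender()
        for rnd in range(20):
            marks = [queue.enqueue() for _ in range(int(sender.window))]
            queue.drain()
            sender.on_feedback(any(marks))
            print(f"round {rnd:2d}  window={sender.window:5.1f}  queue depth={queue.depth}")

Neither piece can do the other's job: the sender cannot see the queue, and the router cannot change the sender's rate except by signalling.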

When I first questioned Comcast's proposal, I did NOT use an end-to-end argument.

For botnet problems, the answer to the question of where "botnet detection and remediation" should be done is "neither" - and the reason is definitional. The problem lies in stating "the botnet problem" accurately and carefully. (My statement may be different from others'.)

The "botnet problem" is the problem of a collection of infected user machines carrying out malicious behaviors that are intolerable to most users, and create a public nuisance. How best to cope with them?

Clearly, this is complex. It cannot be viewed as a problem that involves communications intended between endpoints, nor is it a problem of cooperative sharing among endpoints.

However, there ARE principles that we ought to apply about locating the function. I cited a few in my comments. A big principle is minimal invasion of privacy, with transparency to users, and minimizing at all costs the impact of "errors", since botnet detection will have errors and impacts. (These are analogous to the principles used to deal with undesirable social behavior in a free and just society.)

There are social and cultural reasons for such a principle, and also *technical* ones. The technical ones arise from the difficulty of "reading meaning into bits and patterns" when we have a diverse and expanding set of protocols in operation. Staring at a packet trace, or into a packet of bits, may give the illusion of meaning, but in fact the meaning is entirely contextual - and that context is changing. Here are some cases where this applies socially and culturally.

1) means of detection of "maliciousness": Like "harassment", this is *best* defined by the target, and adjudicated by a neutral party, lest the target be tempted to game the entire system. ISPs are rarely the targets, rarely have customers who are individualized targets, and are NOT neutral parties. In fact, ISPs have been fighting lately for privileges to inspect traffic (see Scott Cleland's testimony at their behest - he claims that because Google tracks queries, ISPs should be able to read all their traffic so they can compete with Google in capturing and selling marketing data). ISP executives have stated (I can find quotes) that they covet revenue streams that can be generated by selling end-user behavior (clickstreams, DNS queries, etc.) to third-party marketers. The fight for the "right" to inspect traffic is clearly part of the argument that ISPs are the right place to *detect* and *assess* maliciousness by reading bytes of traffic targeted at hosts not even on the ISP network.

2) reliability of claimed detections: Can one define maliciousness using only packet address traces or timing, or even content? One can't - or at least there are no scientific studies that provide a reasonably reliable distinction. Further, claimed detections are surmises, not based on actual validated studies, and the studies would not be current in any case: new applications come up every day. (A toy illustration of this appears after item 3 below.)

3) authority to interfere with lawful activity: Activities by users that are lawful and not malicious ought not to be suppressed, unless there is a reason to work fast. This parallels the legal system's rules about subpoenas, arrests, etc.: get a warrant first. Due process is important in a public utility that many people depend upon.
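
On point 2, here is a toy illustration in Python (the record format, hosts, and thresholds are all invented) of why packet metadata alone does not settle the question of maliciousness: a simple fan-out/rate heuristic on outbound port-25 traffic fires on a busy corporate mail relay, or on a NAT gateway with many users behind it, exactly as it fires on an infected PC, because the packet-level signature is the same and only outside context distinguishes them:

    # Toy illustration: a naive metadata-only "bot" heuristic and the
    # false positives it produces. All hosts and numbers are made up.

    from collections import namedtuple

    Flow = namedtuple("Flow", "src dst_port dsts_per_min msgs_per_min")

    def looks_like_bot(flow, max_fanout=200, max_rate=500):
        """Flag high fan-out or high message rate on outbound port 25."""
        return flow.dst_port == 25 and (
            flow.dsts_per_min > max_fanout or flow.msgs_per_min > max_rate
        )

    observations = [
        Flow("infected-pc",     25, dsts_per_min=900, msgs_per_min=4000),  # a real bot
        Flow("corporate-relay", 25, dsts_per_min=600, msgs_per_min=3000),  # legitimate mail server
        Flow("campus-nat",      25, dsts_per_min=350, msgs_per_min=800),   # many users behind one address
        Flow("home-user",       25, dsts_per_min=2,   msgs_per_min=5),     # clean
    ]

    for f in observations:
        print(f"{f.src:16s} flagged={looks_like_bot(f)}")

    # The heuristic flags the relay and the NAT gateway right along with the
    # bot: the traffic pattern is identical in kind, and nothing in the bytes
    # says who operates the host or why.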

So what is the proper role of an ISP? Once detected and validated, a bot can be stopped by notification and blocking (subject to reasonable due process, appropriate to the need).

It is far less clear that an ISP ought to be a vehicle for detection and validation without significant governmental supervision. Since most attacks are not on the ISP itself, it would seem that the proper role of an ISP is to accept well-documented and independently verified claims of botnet operation, and to help remove them.
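
As a sketch of what that narrow role could look like operationally (purely hypothetical - the states, events, and names below are invented, not a description of any ISP's actual process), the ISP's part can be modeled as a small case-handling workflow: accept a documented report, require independent verification, notify the subscriber, allow a grace period and an appeal, and only then block:

    # Hypothetical case-handling workflow for an ISP acting on an externally
    # verified botnet report, with notification and appeal before blocking.

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class State(Enum):
        REPORT_RECEIVED = auto()
        USER_NOTIFIED = auto()
        QUARANTINED = auto()
        UNDER_APPEAL = auto()
        CLOSED = auto()

    @dataclass
    class AbuseCase:
        subscriber: str
        evidence: str                      # documentation supplied by the reporting party
        state: State = State.REPORT_RECEIVED
        history: list = field(default_factory=list)

        def advance(self, event):
            transitions = {
                (State.REPORT_RECEIVED, "independently_verified"): State.USER_NOTIFIED,
                (State.REPORT_RECEIVED, "verification_failed"):    State.CLOSED,
                (State.USER_NOTIFIED,   "cleaned_up"):             State.CLOSED,
                (State.USER_NOTIFIED,   "grace_period_expired"):   State.QUARANTINED,
                (State.QUARANTINED,     "appeal_filed"):           State.UNDER_APPEAL,
                (State.UNDER_APPEAL,    "appeal_upheld"):          State.CLOSED,
                (State.UNDER_APPEAL,    "appeal_denied"):          State.QUARANTINED,
            }
            new_state = transitions.get((self.state, event))
            if new_state is None:
                raise ValueError(f"event {event!r} is not allowed in state {self.state.name}")
            self.history.append((self.state.name, event, new_state.name))
            self.state = new_state

    if __name__ == "__main__":
        case = AbuseCase("subscriber-42", evidence="documented, independently verified spam run")
        for event in ("independently_verified", "grace_period_expired", "appeal_filed", "appeal_upheld"):
            case.advance(event)
        print(case.state.name)
        for step in case.history:
            print("  ", step)

The point of the structure is that detection and validation happen outside the ISP; what the ISP itself executes is notification, a grace period, blocking, and an appeal path.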

Some of the principles and comments above go far beyond an "end-to-end argument" - that type of argument is useful, but in designing something like the Internet, and even more so in designing the ways culture responds to the Internet's challenges, we need additional principles.

What is emerging under the term "net neutrality" is a set of such principles. One principle is becoming more and more true as protocols and applications diversify: that "reading traffic" is not a good way to guess the meaning of that traffic, and that building "justice" and "policing" on reading meaning into traffic is bad, both *technically* and culturally.





Begin forwarded message:

From: Jonathan Zittrain <zittrain () law harvard edu>
Date: October 12, 2009 13:17:45 EDT
To: dave () farber net
Subject: ISPs helping with botnets

Dear Dave,

For IP, if you think it's helpful --

I'm finding myself cheering Comcast on in its attempts to screen its own subscribers' machines for malware-generating activity. There may well be issues with tactics -- there are good and bad ways to do this, both technically (how best to detect; how to screen in ways that don't compromise privacy) and procedurally (e.g. how to notify users that they appear to be actively hosting malware; what grace period, if any, to give them before cleaning up; how to handle appeals; (again) how to screen in ways that don't compromise privacy). But the concept is, I think, sound (or more accurately, least worst). I wrote about this topic about two years ago at <http://yupnet.org/zittrain/archives/18#36>, worrying that sometimes e2e theory -- which I love -- can, ironically, conflict with a higher-order goal of ensuring a generative network: one where anyone can run code without interference from gatekeepers. Especially as we move towards cloud computing -- typically, code executed only with the ongoing permission of a vendor -- or "tethered devices," where we may hold the computing object in hand but it's controlled from afar by a vendor -- we must ensure that the more traditional PC environment doesn't drive its users away.

Excerpts follow; footnotes inline at <http://yupnet.org/zittrain/archives/18#36>:

Apart from hardware and software makers, there is another set of technology providers that reasonably could be asked or required to help: Internet Service Providers. So far, like PC, OS, and software makers, ISPs have been on the sidelines regarding network security. The justification for this -- apart from the mere explanation that ISPs are predictably and rationally lazy -- is that the Internet was rightly designed to be a dumb network, with most of its features and complications pushed to the endpoints. The Internet’s engineers embraced the simplicity of the end-to-end principle (and its companion, the procrastination principle) for good reasons. It makes the network more flexible, and it puts designers in a mindset of making the system work rather than anticipating every possible thing that could go wrong and trying to design around or for those things from the outset. Since this early architectural decision, “keep the Internet free” advocates have advanced the notion of end-to-end neutrality as an ethical ideal, one that leaves the Internet without filtering by any of its intermediaries. This use of end-to-end says that packets should be routed between the sender and the recipient without anyone stopping them on the way to ask what they contain. Cyberlaw scholars have taken up end-to-end as a battle cry for Internet freedom, invoking it to buttress arguments about the ideological impropriety of filtering Internet traffic or favoring some types or sources of traffic over others.

These arguments are powerful, and end-to-end neutrality in both its technical and political incarnations has been a crucial touchstone for Internet development. But it has its limits. End-to-end does not fully capture the overall project of maintaining openness to contribution from unexpected and unaccredited sources. [...]

According to end-to-end theory, placing control and intelligence at the edges of a network maximizes not just network flexibility, but also user choice. The political implication of this view -- that end-to-end design preserves users’ freedom, because the users can configure their own machines however they like -- depends on an increasingly unreliable assumption: whoever runs a machine at a given network endpoint can readily choose how the machine will work. To see this presumption in action, consider that in response to a network teeming with viruses and spam, network engineers recommend more bandwidth (so the transmission of “deadweights” like viruses and spam does not slow down the much smaller proportion of legitimate mail being carried by the network) and better protection at user endpoints, rather than interventions by ISPs closer to the middle of the network. But users are not well positioned to painstakingly maintain their machines against attack, leading them to prefer locked-down PCs [or cloud computing], which carry far worse, if different, problems. Those who favor end-to-end principles because an open network enables generativity should realize that intentional inaction at the network level may be self-defeating, because consumers may demand locked-down endpoint environments that promise security and stability with minimum user upkeep. This is a problem for the power user and consumer alike.

The answer of end-to-end theory to threats to our endpoints is to have them be more discerning, transforming them into digital gated communities that must frisk traffic arriving from the outside. The frisking is accomplished either by letting next to nothing through -- as is the case with highly controlled information appliances -- or by having third-party antivirus firms perform monitoring, as is done with increasingly locked-down PCs. Gated communities offer a modicum of safety and stability to residents as well as a manager to complain to when something goes wrong. But from a generative standpoint, these moated paradises can become prisons. Their confinement is less than obvious, because what they block is not escape but generative possibility: the ability of outsiders to offer code and services to users, and the corresponding opportunity of users and producers to influence the future without a regulator’s permission. When endpoints are locked down, and producers are unable to deliver innovative products directly to users, openness in the middle of the network becomes meaningless. Open highways do not mean freedom when they are so dangerous that one never ventures from the house.

Some may cling to a categorical end-to-end approach; doubtlessly, even in a world of locked-down PCs there will remain old-fashioned generative PCs for professional technical audiences to use. But this view is too narrow. We ought to see the possibilities and benefits of PC generativity made available to everyone, including the millions of people who give no thought to future uses when they obtain PCs, and end up delighted at the new uses to which they can put their machines. And without this ready market, those professional developers would have far more obstacles to reaching critical mass with their creations.

Strict loyalty to end-to-end neutrality should give way to a new generativity principle, a rule that asks that any modifications to the Internet’s design or to the behavior of ISPs be made where they will do the least harm to generative possibilities. Under such a principle, for example, it may be preferable in the medium term to screen out viruses through ISP-operated network gateways rather than through constantly updated PCs. Although such network screening theoretically opens the door to additional filtering that may be undesirable, this speculative risk should be balanced against the very real threats to generativity inherent in PCs operated as services rather than products. Moreover, if the endpoints remain free as the network becomes slightly more ordered, they remain as safety valves should network filtering begin to block more than bad code.

In the meantime, ISPs are in a good position to help in a way that falls short of undesirable perfect enforcement, and that provides a stopgap while we develop the kinds of community-based tools that can facilitate salutary endpoint screening. There are said to be tens of thousands of PCs converted to zombies daily, and an ISP can sometimes readily detect the digital behavior of a zombie when it starts sending thousands of spam messages or rapidly probes a sequence of Internet addresses looking for yet more vulnerable PCs. Yet ISPs currently have little incentive to deal with this problem. To do so creates a two-stage customer service nightmare. If the ISP quarantines an infected machine until it has been recovered from zombie-hood -- cutting it off from the network in the process -- the user might claim that she is not getting the network access she paid for. And quarantined users will have to be instructed how to clean their machines, which is a complicated business. This explains why ISPs generally do not care to act when they learn that they host badware-infected Web sites or consumer PCs that are part of a botnet.
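
A toy sketch of the kind of coarse, flow-level signal being described (the record format, addresses, and thresholds are invented for illustration; a real deployment would work from NetFlow-style flow exports and would face the false-positive questions raised earlier in this thread) -- a subscriber host that suddenly contacts thousands of distinct addresses, or walks an address range sequentially, stands out from ordinary use:

    # Toy sketch: flagging subscriber hosts whose outbound flows show either
    # very high fan-out or a long sequential scan of an address range.
    # All record formats, addresses, and thresholds are invented.

    import ipaddress
    from collections import defaultdict

    def sequential_run(dst_addrs):
        """Length of the longest run of numerically consecutive destination addresses."""
        nums = sorted(int(ipaddress.ip_address(a)) for a in set(dst_addrs))
        best = run = 1
        for prev, cur in zip(nums, nums[1:]):
            run = run + 1 if cur == prev + 1 else 1
            best = max(best, run)
        return best

    def flag_suspect_sources(flow_records, fanout_threshold=1000, scan_run=50):
        """flow_records: iterable of (src_ip, dst_ip) pairs seen on the access network."""
        dsts = defaultdict(list)
        for src, dst in flow_records:
            dsts[src].append(dst)
        suspects = {}
        for src, targets in dsts.items():
            fanout = len(set(targets))
            run = sequential_run(targets)
            if fanout > fanout_threshold or run > scan_run:
                suspects[src] = {"fanout": fanout, "longest_sequential_run": run}
        return suspects

    if __name__ == "__main__":
        # Simulated records: one host walking 10.0.0.0/24 address by address,
        # one host making two ordinary connections.
        records = [("203.0.113.7", f"10.0.0.{i}") for i in range(1, 255)]
        records += [("203.0.113.9", "198.51.100.4"), ("203.0.113.9", "198.51.100.5")]
        print(flag_suspect_sources(records))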

Whether through new industry best practices or through a rearrangement of liability motivating ISPs to take action in particularly flagrant and egregious zombie situations, we can buy another measure of time in the continuing security game of cat and mouse. Security in a generative system is something never fully put to rest -- it is not as if the “right” design will forestall security problems forevermore. The only way for such a design to be foolproof is for it to be nongenerative, locking down a computer the same way that a bank would fully secure a vault by neither letting any customers in nor letting any money out. Security of a generative system requires the continuing ingenuity of a few experts who want it to work well, and the broader participation of others with the goodwill to outweigh the actions of a minority determined to abuse it.

A generativity principle suggests additional ways in which we might redraw the map of cyberspace. First, we must bridge the divide between those concerned with network connectivity and protocols and those concerned with PC design -- a divide that end-to-end neutrality unfortunately encourages. Such modularity in stakeholder competence and purview was originally a useful and natural extension of the Internet’s architecture. It meant that network experts did not have to be PC experts, and vice versa. But this division of responsibilities, which works so well for technical design, is crippling our ability to think through the trajectory of applied information technology. Now that the PC and the Internet are so inextricably intertwined, it is not enough for network engineers to worry only about network openness and assume that the endpoints can take care of themselves. It is abundantly clear that many endpoints cannot. The procrastination principle has its limits: once a problem has materialized, the question is how best to deal with it, with options ranging from further procrastination to effecting changes in the way the network or the endpoints behave. Changes to the network should not be categorically off the table.

Second, we need to rethink our vision of the network itself. “Middle” and “endpoint” are no longer subtle enough to capture the important emerging features of the Internet/PC landscape. It remains correct that, from a network standpoint, protocol designs and the ISPs that implement them are the “middle” of the network, as distinct from PCs that are “endpoints.” But the true import of this vernacular of “middle” and “endpoint” for policy purposes has lost its usefulness in a climate in which computing environments are becoming services, either because individuals no longer have the power to exercise meaningful control over their PC endpoints, or because their computing activities are hosted elsewhere on the network, thanks to “Web services.” By ceding decision-making control to government, to a Web 2.0 service, to a corporate authority such as an OS maker, or to a handful of security vendors, individuals permit their PCs to be driven by an entity in the middle of the network, causing their identities as endpoints to diminish. The resulting picture is one in which there is no longer such a clean separation between “middle” and “endpoint.” In some places, the labels have begun to reverse.

Abandoning the end-to-end debate’s divide between “middle” and “endpoint” will enable us to better identify and respond to threats to the Internet’s generativity. In the first instance, this might mean asking that ISPs play a real role in halting the spread of viruses and the remote use of hijacked machines.


Jonathan Zittrain
Professor of Law
Harvard Law School | Harvard Kennedy School of Government
Co-Founder, Berkman Center for Internet & Society
<http://cyber.law.harvard.edu>
