Interesting People mailing list archives

the meme of "responsible disclosure" MIT student support letter


From: David Farber <dave () farber net>
Date: Sat, 16 Aug 2008 13:14:39 -0400



Begin forwarded message:

From: Matt Blaze <blaze () cis upenn edu>
Date: August 16, 2008 12:55:19 PM EDT
To: David Wagner <daw () cs berkeley edu>
Cc: smb () cs columbia edu (Steven M. Bellovin), cindy () eff org (Cindy Cohn), marcia () eff org (Marcia Hofmann), jennifer () eff org (Jennifer Granick), rebecca () eff org (Rebecca Jeschke), dave () farber net (Dave Farber), lorrie () cs cmu edu (Lorrie Cranor), dwallach () cs rice edu (Dan Wallach), Dave_Touretzky () cs cmu edu, tkohno () cs ucsd edu (Yoshi Kohno), schneier () schneier com (Bruce Schneier), mcdaniel () cse psu edu (Patrick McDaniel), savage () cs ucsd edu (Stefan Savage), kurt () eff org (Kurt Opsahl)
Subject: Re: link to final MIT student support letter


On Aug 16, 2008, at 1:27, David Wagner wrote:

FYI, Nature online has an article on the MBTA controversy:

http://www.nature.com/news/2008/080815/full/news.2008.1044.html

(I'm quoted, as are Ross Anderson and others.  However, I'm not
entirely happy with their coverage of the responsible disclosure
issue: it seems to conflate what researchers consider good manners
with what is, or ought to be, required as a matter of law.)

-- David

Dave,

Thanks for forwarding the link.  I'm also very uncomfortable with this
meme of "responsible disclosure" as anything but a form of polite
behavior.  Of course disclosure should be responsible -- but so should
everything else done in life by everyone.  The problem is that the
responsible disclosure "agenda" imputes a radically narrow definition
of "responsible" onto security research that greatly oversimplifies its
purpose.  In particular it seems to consider the researchers, the
vendors, and the customers of the particular system involved to be the
only legitimate stakeholders when a problem is found.

But, as we here all understand, security research can have much broader
impact than just finding and fixing (or exploiting) flaws in individual
systems, and can affect a much larger constituency than just the people
directly involved with the systems that were studied.  This is true even
for ostensibly narrow case-study research (like the MIT student's Charlie study), whose focus is on flaws in a particular system. The value may not be obvious to nonspecialists, but it's very important, and as researchers we're in a unique position to explain and defend this (and if we don't, no
one else likely will).

One thing that may not be obvious outside our community (but that's critical to understanding why we do what we do) is that problems are rarely narrowly confined to one system; the same kinds of bugs or engineering deficiencies that allow one system to be attacked likely occur in others as well. The
lessons from case studies are thus often much more widely applicable --
and are more generally valuable to the community at large -- than it might
superficially seem.   Seeing the problems in, say, the Charlie Card
may help someone fix or avoid similar flaws in other systems, some of which may be more safety-critical or high stakes than a transit fare collection
system. That's a big part of why we publish in the first place.

Also, the tools and techniques used to analyze or attack one system can
often be adapted to many other contexts. For example, I learned something
about black box reverse engineering techniques and stored value system
practices from reading the MIT student's published slides, and this will
be valuable to me even though I have no particular interest in the MBTA's
system.

These things are especially important in light of another thing that the
public and nonspecialists may not understand, which is that engineering
secure systems is an open research problem. We're still trying to figure
out the basic principles of secure system design and engineering, and
studies of how systems fail are hugely important. I think people outside
our field don't intuitively grasp this point, or its magnitude.  Related
to this is the fact that techniques for attacking and techniques for
defending really are two inseparable sides of the same coin.  You can't
learn or understand one without the other.

There also are other stakeholders not considered by responsible disclosure (e.g., the public, especially when a government system is involved), but I'm
thinking here particularly about the research community.

Yes, researchers should avoid deliberately helping criminals, but
"responsible disclosure" so oversimplifies the issues that it's all
but useless as an ethical standard and very dangerous as a legal one.

I grappled with this a bit in a declaration I wrote in the Felten DMCA case
way back when; see:
   http://www.crypto.com/papers/mab-feltendecl.txt

Maybe I'll blog about this in the next few days, if folks think that
would be useful.

-matt






