Secure Coding mailing list archives

Microsoft's message at RSA


From: gunnar at arctecgroup.net (Gunnar Peterson)
Date: Fri, 09 May 2008 21:13:18 -0500

Hi Andy,

Great post. I especially like the part about making choices. Having 
users type passwords into websites that "protect" all their assets 
pretty clearly isn't working, and CardSpace is clearly a massive 
improvement. That said, I don't think the choice is between perfect 
liberty and perfect security, but more what Dan Geer suggested:

"We digerati have given the world fast, free, open transmission to 
anyone from anyone, and we've handed them a general-purpose device with 
so many layers of complexity that there is no one who understands it 
all. Because "you're on your own" won't fly politically, something has 
to change. Since you don't have to block transmission in order to 
surveil it, and since general-purpose capabilities in computers are lost 
on the vast majority of those who use them, the beneficiaries of 
protection will likely consider surveillance and appliances to be an 
improvement over risk and complexity. From where they sit, this is true 
and normal.

While the readers of Queue may well appreciate that driving is much more 
real with a centrifugal advance and a stick shift, try and sell that to 
the mass market. The general-purpose computer must die or we must put 
everything under surveillance. Either option is ugly, but "all of the 
above" would be lights-out for people like me, people like you, people 
like us. We're playing for keeps now."

http://www.acmqueue.org/modules.php?name=Content&pa=showpage&pid=436

I hope that cheers everyone up.

-gp

Andy Steingruebl wrote:
On Fri, May 9, 2008 at 3:42 PM, Gary McGraw <gem at cigital.com> wrote:
> Hi andy (and everybody),
>
> Indeed.  I vote for personal computer liberty over guaranteed ironclad
> security any day.  For amusing and shocking rants on this subject,
> google up some classic Ross Anderson.  Or heck, I'll do it for you:
> http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html

I've heard this point for years, and yet when we actually look at ways
of solving the consistent problems of software security, we always
come back to tamper-proof/restricted-rights as a pretty reasonable
starting point.

I don't know whether this mailing list is really the place for me to
advocate about this, but every time we get into a situation where we
talk about high reliability (electronic voting, for example), people
are all up in arms that we haven't followed pretty strict practices to
make sure the machines don't get hacked and aren't hackable even by
experts: hardened hardware, trusted computing bases, etc.

But, if you want to try and apply the same engineering principles to
protecting an individual's assets such as their home computer, bank
account credentials, etc. then you're trampling on their freedom.

I don't really see how we can viably have both.  Sure we're looking at
all sorts of things like sandboxing and whatnot, but given
multi-purpose computing and the conflicting goals of absolute freedom
and defense against highly motivated attackers, we're going to have to
make some choices, aren't we?

I don't disagree that all of these technologies can be misused.  Most
can.  We've all read the Risks columns for years about ways to screw
things up.

At the same time individual computers don't exist in isolation.  They
are generally part of an ecosystem (the internet) and as such your
polluting car causes my acid rain and lung cancer.  Strict liability
isn't the right solution to this sort of public policy problem,
regulation is.  That regulation and control can take many forms, some
good, some bad.

I don't see the problem getting fixed though without some substantial
reworking of the ecosystem.  Some degree of freedom may well be a
casualty.

Please don't think I'm actually supporting a general decrease in
liberty overall.  At the same time I'm pretty sure that traffic laws
are a good idea, and speed limits are a good idea, even though they
restrict individual freedoms.  In the computing space I'm OK with
allowing people to opt out, but only if in doing so they don't pose a
manifest danger to others.  Balancing freedom against restriction
isn't easy of course, and I'm not suggesting it is.  I'm merely
suggesting that none of the research we've ever done in the area
points to our current model (relying on users to make choices about
what software to use) being promising.

How to make this happen without it turning into a debacle is of course
the tricky part.


