Secure Coding mailing list archives

RE: Re: White paper: "Many Eyes" - No Assurance Against Many Spies


From: Jeremy Epstein <jeremy.epstein () webmethods com>
Date: Mon, 03 May 2004 20:00:43 +0100

Crispin said:
But taking the remark seriously, it says that you must not trust
anything that you don't have source code for. The point of Thompson's
paper is that this includes the compiler; having the source code for
the applications and the OS is not enough, and even having the source
for the compiler is not enough unless you bootstrap it yourself.

Ken Thompson's point is that the *source* is not enough... maybe you mean to
say "it says that you must not trust anything that you don't fully
understand".

Extrapolating from Thompson's point, the same can be said for silicon:
how do we know that CPUs, chipsets, drive controllers, etc. don't have
Trojans in them? Just how hard would it be to insert a funny hook in an
IDE drive that did something "interesting" when the right block
sequence comes by?

In fact, there have been cases where the software was correct, but the
hardware was implemented incorrectly in a way that subverted the software
protections.  I found a bug 20 years ago in a microprogrammed disk
controller that under particular circumstances would cause it to write one
sector more than was specified... one can certainly hypothesize something
like that being deliberate (it was, I believe, accidental).  And I was told
about a particular Orange Book A1-targeted system where they discovered
during development that a hardware bug rendered all of their "provably
secure" code totally insecure.
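
To make Crispin's drive example concrete, here is a toy sketch (the trigger
sequence, class, and method names are all hypothetical) of firmware that
behaves normally until a particular run of block numbers goes past:

    TRIGGER = [4040, 1337, 2600]    # hypothetical magic block-number sequence

    class TrojanedDriveController:
        def __init__(self):
            self.blocks = {}
            self.recent = []

        def write(self, block_number, data):
            # Remember the last few block numbers and watch for the trigger.
            self.recent = (self.recent + [block_number])[-len(TRIGGER):]
            if self.recent == TRIGGER:
                self.do_something_interesting()
            self.blocks[block_number] = data    # otherwise a perfectly normal write

        def do_something_interesting(self):
            print("trigger seen: quietly leak, remap, or corrupt data")

A few dozen lines of microcode like that would never show up in any source
audit of the OS or the applications sitting above it.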

[...]

The horrible lesson from all this is that you cannot trust anything
you do not control. And since you cannot build everything yourself, you
cannot really trust anything. And thus you end up taking calculated
guesses as to what you trust without verification. Reputation becomes
a critical factor.

Again, this is almost but not quite correct... it's not just "anything you
do not control", but "anything you do not fully understand and control".
It's not good enough to have the source code and the chip masks if you
don't also understand what the source code and chip masks mean, and have
confidence, through a personal electron microscope that you also built
yourself, that the chip you're running is an implementation of the mask,
etc.  And at
some level you have to trust the atomic physicists who explain how molecules
& atoms interact to cause the stuff on the chip to work.

It also leads to the classic security analysis technique of amassing
*all* the threats against your system, estimating the probability and
severity of each threat, and putting most of your resources against the
largest threats. IMHO if you do that, then you discover that "Trojans
in the Linux code base" is a relatively minor threat compared to
"crappy user passwords", "0-day buffer overflows", and "lousy Perl/PHP
CGIs on the web server". This Ken Thompson gedanken experiment is fun
for security theorists, but is of little practical consequence to most
users.

Absolutely true.  This is one of the great frustrations to me... I get
complaints from customers on occasion that they don't like one security
feature or another, and that it absolutely *must* be changed to meet their
security requirements... all the while not changing the out-of-the-box
administrator password.  We need to spend more time learning about risk
assessment, and teaching how to use it effectively to get "good enough"
security.
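
To make that exercise concrete, here is a back-of-the-envelope version with
completely made-up numbers (expected loss = likelihood times impact, then
worry about the biggest items first):

    threats = [
        # (threat, rough annual likelihood, impact in arbitrary cost units)
        ("Trojan slipped into the Linux code base", 0.001, 1000),
        ("Crappy user passwords",                   0.6,     50),
        ("0-day buffer overflow",                   0.1,    200),
        ("Lousy Perl/PHP CGIs on the web server",   0.3,    100),
    ]

    for name, likelihood, impact in sorted(threats, key=lambda t: t[1] * t[2], reverse=True):
        print(f"{name:42s} expected loss = {likelihood * impact:6.1f}")

Even with numbers that generous to the Trojan scenario, it comes out at the
bottom of the list, which is exactly Crispin's point.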

--Jeremy





