Interesting People mailing list archives

IP: Why Computers are Insecure from RISKS


From: Dave Farber <farber () cis upenn edu>
Date: Wed, 08 Dec 1999 21:21:27 -0500



Date: Mon, 29 Nov 1999 10:30:20 -0600
From: Bruce Schneier <schneier () counterpane com>
Subject: Why Computers are Insecure

Almost every week the computer press covers another security flaw: a virus
that exploits Microsoft Office, a vulnerability in Windows or UNIX, a Java
problem, a security hole in a major Web site, an attack against a popular
firewall. Why can't vendors get this right, we wonder? When will it get
better?

I don't believe it ever will. Here's why:

Security engineering is different from any other type of engineering. Most
products, such as word processors or cellular phones, are useful for what
they do. Security products, or security features within products, are
useful precisely because of what they don't allow to be done. Most
engineering involves making things work. Think of the original definition
of a hacker: someone who figured things out and made something cool
happen. Security engineering involves making things not happen. It
involves figuring out how things fail, and then preventing those failures.

In many ways this is similar to safety engineering. Safety is another
engineering requirement that isn't simply a "feature." But safety
engineering involves making sure things do not fail in the presence of
random faults: it's about programming Murphy's computer, if you will.
Security engineering involves making sure things do not fail in the
presence of an intelligent and malicious adversary who forces faults at
precisely the worst time and in precisely the worst way. Security
engineering involves programming Satan's computer.

And Satan's computer is hard to test.

Virtually all software is developed using a "try-and-fix" methodology.
Small pieces are implemented, tested, fixed, and tested again. Several of
these small pieces are combined into a module, and this module is then
tested, fixed, and tested again. Small modules are then combined into
larger modules, and so on. The end result is software that more or less
functions as expected, although in complex systems bugs always slip through.

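As a rough sketch of that cycle (in Python, with names invented purely for illustration), each small piece gets its own functional test, and the loop is simply run, fail, fix, rerun:

    # One small piece: implement it, test it, fix it, test it again.
    def normalize_username(name):
        # Strip surrounding whitespace and lower-case the name.
        return name.strip().lower()

    def test_normalize_username():
        assert normalize_username("  Alice ") == "alice"
        assert normalize_username("BOB") == "bob"

    if __name__ == "__main__":
        test_normalize_username()  # if it fails, fix the code and rerun
        print("piece works; combine it into a module and repeat")

Each pass through the loop tells you only that the piece does what the test asks of it; nothing in the loop asks what the piece allows an attacker to do.
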
This try-and-fix methodology just doesn't work for testing security. No
amount of functional testing can ever uncover a security flaw, so the
testing process won't catch anything. Remember that security has nothing
to do with functionality. If you have an encrypted phone, you can test
it. You can make and receive calls. You can try, and fail, to
eavesdrop. But you have no idea if the phone is secure or not.

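A toy example makes the point. The Python sketch below (invented for illustration, not any real product) uses a worthless XOR "cipher" with a fixed key; its functional test passes anyway:

    import itertools

    KEY = b"secret"  # short, fixed, reused key -- the fatal flaw

    def encrypt(plaintext: bytes) -> bytes:
        # XOR each byte with a repeating key; XOR is its own inverse.
        return bytes(p ^ k for p, k in zip(plaintext, itertools.cycle(KEY)))

    decrypt = encrypt

    def test_round_trip():
        msg = b"attack at dawn"
        assert decrypt(encrypt(msg)) == msg  # the "phone" works fine

    test_round_trip()
    # Every feature works and every test passes, yet an eavesdropper who
    # knows or guesses a few plaintext bytes recovers KEY and reads it all.

Functional testing confirms that the calls go through; it says nothing about what an adversary can do with the ciphertext.
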
The only reasonable way to "test" security is to perform security reviews.
This is an expensive, time-consuming, manual process. It's not enough to
look at the security protocols and the encryption algorithms. A review
must cover specification, design, implementation, source code, operations,
and so forth. And just as functional testing cannot prove the absence of
bugs, a security review cannot show that the product is in fact secure.

It gets worse. A security review of version 1.0 says little about the
security of version 1.1. A security review of a software product in
isolation does not necessarily apply to the same product in an operational
environment. And the more complex the system is, the harder a security
evaluation becomes and the more security bugs there will be.

Suppose a software product is developed without any functional testing at
all. No alpha or beta testing. Write the code, compile it, and ship. The
odds of this program working at all -- let alone being bug-free -- are
zero. As the complexity of the product increases, so will the number of
bugs. Everyone knows testing is essential.

Unfortunately, this is the current state of practice in security. Products
are being shipped without any, or with minimal, security testing. I am not
surprised that security bugs show up again and again. I can't believe
anyone expects otherwise.

Even worse, products are getting more complex every year: larger operating
systems, more features, more interactions between different programs on the
Internet. Windows NT has been around for a few years, and security bugs
are still being discovered. Expect many times more bugs in Windows 2000;
the code is significantly larger. Expect the same thing to hold true for
every other piece of software.

This won't change. Computer usage, the Internet, and convergence are all
happening at an ever-increasing pace. Systems are getting more complex,
and necessarily more insecure, faster than we can fix them -- and faster
than we can learn how to fix them.

Acknowledgements: The phrase "programming Satan's computer" was
originally Ross Anderson's. It's just too good not to use, though. A
shortened version of this essay originally appeared in the November 15
issue of _Computerworld_, and also in the November Crypto-Gram.

Bruce Schneier, CTO, Counterpane Internet Security, Inc. Ph: 612-823-1098
3031 Tisch Way, 100 Plaza East, San Jose, CA 95128 Fax: 612-823-1590
Free Internet security newsletter. See: http://www.counterpane.com

