Interesting People mailing list archives

IP: Re: Computer Security: Will We Ever Learn? Risks Digest 20.90


From: Dave Farber <farber () cis upenn edu>
Date: Tue, 06 Jun 2000 10:27:58 -0700



From: "Jonathan S. Shapiro" <shap () eros-os org>
To: <farber () cis upenn edu>

A couple of responses to Bruce Schneier's comments.

Security cannot be beta tested...

This is not entirely true. Flaws like buffer overruns can be tested for or
designed out. Better operating systems and languages would be a big help.
There remain significant errors in how software is deployed, and I agree
that these are hard to test for.
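
To make that concrete, here is a minimal C sketch (hypothetical function
names, not from the original post) of the classic flaw and the simplest way
to design it out: bound every copy by the size of the destination buffer.

    #include <stdio.h>
    #include <string.h>

    /* The classic flaw: nothing checks the length of 'name', so any
     * input longer than 15 bytes overwrites adjacent memory. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);                 /* unbounded copy */
        printf("hello, %s\n", buf);
    }

    /* Designed out: the copy is bounded by the destination size and is
     * always NUL-terminated, so over-long input is truncated instead of
     * corrupting the stack. */
    void greet_bounded(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("hello, %s\n", buf);
    }

    int main(void) {
        greet_bounded("a deliberately over-long input that would have smashed the stack above");
        return 0;
    }

Memory-safe languages go one step further and enforce the bounded version
by construction, which is the "designed out" case.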

On the other hand, I'm not aware of much research on how to design systems
to naturally avoid these problems. [If you know of some, please do let me
know!] Deployment errors can be seen as a problem in usability, and we
actually have a lot of techniques for studying usability. Unfortunately,
many computing researchers view usability work as somehow "wishy washy" or
second class research. We need to change this.

Today, over a decade after Morris and about 35 years after these attacks
were first discovered, you'd think the security community would have solved
the problem of security vulnerabilities based on buffer overflows.

As Bruce says, it has long been recognized that these flaws are a problem.
We have had programming languages for decades that do not suffer from these
errors. I think that it's a bit unfair to blame the security community for
the fact that customers don't adopt new languages easily, and I suspect that
Bruce didn't mean to do so. You can only lead the horse to water.

Continuous process, while important, is no silver bullet either. First,
people have a way of expecting miracles. They want something secure, but
they also want it to be higher performance and ever more feature rich. Using
current languages, the very best programmers introduce about one security
flaw per thousand lines of code, so a ten-million-line operating system
ships with on the order of ten thousand latent flaws. More features mean
more security holes. It's just that simple.

Second, on those rare occasions when the research community as a whole has
produced technology that actually *delivers* such a miracle, the customers
gripe about how expensive it is to convert, and couldn't we please produce
something that is perfectly compatible with the old stuff but works better?
The answer is yes and no. Yes we can make it better. No we cannot ultimately
build high-confidence systems without changing our tools in some fundamental
ways. It is not unreasonable that customers balk at the prospect of the
billions of dollars of cost that this sort of change will entail.

So while I agree with Bruce that there can be no security without continuing
process, I would also say that there can be no successful process without
supporting architecture and design, and that there cannot be supporting
design without some very serious efforts in the area of usability. If the
customer insists on bug for bug compatibility, patching the holes as they
are discovered is the best that can realistically be done. Today's commodity
operating system and language technologies do not provide an architecture
that can solve this. Until this changes, security will continue to be a
process of applying band-aids to sucking chest wounds.


Finally, I want to address the point of software liability. As a personal
matter, I'm inclined to agree that software liability would be a good thing.
This is somewhat self-serving. I have some operating system technology that
I think can survive in that world. Microsoft, in my opinion, does not.

It is useful to ask: "If liability and compensation are so important, why
won't customers pay for it?" That is, if I created a company and licensed my
software in a way that protected the customer, would the customer pay more
for this? In a few areas the answer is clearly yes, but in general I'm not
convinced. The bottom line is that *you* don't buy operating systems or word
processors on the basis of their liability guarantees (neither do I).

On the other hand, I think that software liability will definitely happen.
Sooner or later, after your machine is used to launch yet another virus at
your neighbor's machine, that neighbor will successfully go to court and
argue that
your decision to run Windows (or MacOS, or UNIX), which you know is
insecure, constitutes willful contributory negligence. All of a sudden
liability will matter to you personally.

Until then, software developers will continue to object that "engineering"
is impossible in software (which is simply not true), and they will continue
to sell houses with leaky roofs, bad electrical systems, occasional gas
leaks, broken windows, door latches rather than locks, and varmint
infestations.

And most of us will continue to buy them.


Jonathan S. Shapiro
The EROS Group, LLC

