IP: Computer Security: Will We Ever Learn? Risks Digest 20.90


From: Dave Farber <farber@cis.upenn.edu>
Date: Tue, 06 Jun 2000 01:24:55 -0700



Date: Mon, 15 May 2000 15:06:31 -0500
From: Bruce Schneier <schneier@counterpane.com>
Subject: Computer Security: Will We Ever Learn? (CRYPTO-GRAM, May 15, 2000)

  [From CRYPTO-GRAM, May 15, 2000, in RISKS with permission, by Bruce
  Schneier, Counterpane Internet Security, Inc. <schneier@counterpane.com>
  See Bruce's free Internet security newsletter: http://www.counterpane.com]

     Computer Security: Will We Ever Learn?

If we've learned anything from the past couple of years, it's that computer
security flaws are inevitable.  Systems break, vulnerabilities are reported
in the press, and still many people put their faith in the next product, or
the next upgrade, or the next patch.  "This time it's secure," they
say.  So far, it hasn't been.

Security is a process, not a product.  Products provide some protection,
but the only way to effectively do business in an insecure world is to put
processes in place that recognize the inherent insecurity in the
products.  The trick is to reduce your risk of exposure regardless of the
products or patches.

Consider denial-of-service attacks.  DoS attacks are some of the oldest and
easiest attacks in the book.  Even so, in February 2000, coordinated,
distributed DoS attacks easily brought down several high-traffic Web sites,
including Yahoo, eBay, Amazon.com, and CNN.

Consider buffer overflow attacks.  They were first talked about as early as
the 1960s -- time-sharing systems suffered from the problem -- and were
known by the security literati even earlier than that.  In the 1970s, they
were often used as a point of attack against early networked computers.  In
1988, the Morris Worm exploited a buffer overflow in the Unix fingerd
daemon: a very public use of this type of attack.
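
To make the mechanics concrete, here is a minimal C sketch of the mistake.
It is not fingerd's actual code -- fingerd's bug was an unchecked gets()
into a fixed buffer; the sketch uses strcpy() -- but the shape is the same:
attacker-supplied input copied into a fixed-size stack buffer with no
length check.

    #include <stdio.h>
    #include <string.h>

    /* The classic stack overflow: strcpy() has no idea how big buf is.
     * Input longer than 63 bytes overwrites adjacent stack memory,
     * including the saved return address. */
    static void handle_request(const char *input)
    {
        char buf[64];
        strcpy(buf, input);               /* unbounded copy: the bug */
        printf("request: %s\n", buf);
    }

    /* The fix is a bounded copy that always NUL-terminates. */
    static void handle_request_safely(const char *input)
    {
        char buf[64];
        snprintf(buf, sizeof buf, "%s", input);
        printf("request: %s\n", buf);
    }

    int main(void)
    {
        char oversized[256];
        memset(oversized, 'A', sizeof oversized - 1);
        oversized[sizeof oversized - 1] = '\0';
        handle_request_safely(oversized); /* truncates; harmless */
        handle_request(oversized);        /* smashes the stack */
        return 0;
    }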

Today, over a decade after Morris and about 35 years after these attacks
were first discovered, you'd think the security community would have solved
the problem of security vulnerabilities based on buffer overflows.  Think
again.  Over two-thirds of all CERT advisories in 1998 were for
vulnerabilities caused by buffer overflows.  During an average week in
1999, buffer overflow vulnerabilities were found in the RSAREF
cryptographic toolkit (oops), HP's operating system, the Solaris operating
system, Microsoft IIS 4.0 and Site Server 3.0, Windows NT, and Internet
Explorer.  A recent study named buffer overflows as the most common
security problem.

Consider encryption algorithms.  Proprietary secret algorithms are
regularly published and broken.  Again and again, the marketplace learns
that proprietary secret algorithms are a bad idea.  But companies and
industries -- like Microsoft, the DVD consortium, cellular phone providers,
and so on -- continue to choose proprietary algorithms over public, free
alternatives.

Is Anyone Paying Attention?

Sadly, the answer to this question is: not really.  Or at least, there are
far fewer people paying attention than there should be.  And the enormous
demand for digital security products means that ever more people must
design, develop, and implement them.  Since the supply of experts can't
keep pace, the percentage of people paying attention will only get smaller.

Most products that use security are not designed by anyone with security
expertise.  Even security products are generally designed and implemented
by people who have only limited security expertise.  Security cannot be
functionality tested -- no amount of beta testing will uncover security
flaws -- so the flaws end up in fielded products.

I'm constantly amazed by the kinds of things that break security
products.  I've seen a file encryption product with a user interface that
accidentally saves the key in the clear.  I've seen VPNs where the
telephone configuration file accidentally allows a random person to
authenticate himself to the server, or that allows one remote client to
view the files of another remote client.  There are a zillion ways to make
a product insecure, and manufacturers manage to stumble on a lot of those
ways again and again.
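
The file-encryption example is worth a sketch.  The names and structure
below are hypothetical, not any shipping product, but the failure mode
really is this simple: the key lives in the same settings structure the
user interface routinely writes to disk.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical illustration of "saving the key in the clear": the
     * UI serializes its whole preferences struct, and the key happens
     * to be a field in that struct. */
    struct prefs {
        char          username[32];
        char          last_opened[128];
        unsigned char key[16];          /* the secret, next to UI state */
    };

    static void save_prefs(const struct prefs *p, const char *path)
    {
        FILE *fp = fopen(path, "wb");
        if (fp == NULL)
            return;
        fwrite(p, sizeof *p, 1, fp);    /* key included, in plaintext */
        fclose(fp);
    }

    int main(void)
    {
        struct prefs p;
        memset(&p, 0, sizeof p);
        strcpy(p.username, "alice");
        memcpy(p.key, "0123456789abcdef", sizeof p.key);
        save_prefs(&p, "prefs.dat");    /* the key is now on disk */
        return 0;
    }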

No one is paying attention because no one has to.

Computer security products, like software in general, have a very odd
product quality model.  It's unlike an automobile, a skyscraper, or a box
of fried chicken.  If you buy a product, and get harmed because of a
manufacturer's defect, you can sue...and you'll win.  Car-makers can't get
away with building cars that explode on impact; chicken shops can't get
away with selling buckets of fried chicken with the odd rat mixed in.  It
just wouldn't do for building contractors to say things like,
"Whoops.  There goes another one.  Sorry.  But just wait for Skyscraper
1.1; it'll be 100% collapse-free!"

Software is different.  It is sold without any claims whatsoever.  Your
accounts receivable database can crash, taking your company down with it,
and you have no claim against the software company.  Your word processor
can accidentally corrupt your files and you have no recourse.  Your
firewall can turn out to be completely ineffectual -- hardly better than
having nothing at all -- and yet it's your fault.  Microsoft fielded
Hotmail with a bug that allowed anyone to read the accounts of 40 or so
million subscribers, password or no password, and never bothered to apologize.

Software manufacturers don't have to produce a quality product because
there is no liability if they don't.  And the effect of this for security
products is that manufacturers don't have to produce products that are
actually secure, because no one can sue them if they make a bunch of false
claims of security.

The upshot of this is that the marketplace does not reward real
security.  Real security is harder, slower, and more expensive, both to
design and to implement.  Since the buying public has no way to
differentiate real security from bad security, the way to win in this
marketplace is to design software that is as insecure as you can possibly
get away with.

Microsoft knows that reliable software is not cost-effective.  According to
studies, 90% to 95% of all bugs are harmless.  They're never discovered by
users, and they don't affect performance.  It's much cheaper to release
buggy software and fix the 5% to 10% of bugs people find and complain about.

Microsoft also knows that real security is not cost-effective.  They get
whacked with a new security vulnerability several times a week.  They fix
the ones they can, write misleading press releases about the ones they
can't, and wait for the press furor to die down (which it always
does).  And six months later they issue the next software version with new
features and all sorts of new insecurities, because users prefer cool
features to security.

The only solution is to look for security processes.

There's no such thing as perfect security.  Interestingly enough, that's
not necessarily a problem.  In the U.S. alone, the credit card industry
loses $10 billion to fraud per year; neither Visa nor MasterCard is showing
any sign of going out of business.  Shoplifting estimates in the U.S. are
currently between $9.5 billion and $11 billion per year, but you never see
"shrinkage" (as it is called) cited as the cause when a store goes out of
business.  Recently, I needed to notarize a document.  That is about the
stupidest security protocol I've ever seen.  Still, it works fine for what
it is.

Security does not have to be perfect, but the risks have to be
manageable.  The credit card industry understands this.  They know how to
estimate the losses due to fraud.  Their problem is that losses from phone
credit card transactions are about five times the losses from face-to-face
transactions (when the card is present).  Losses from Internet transactions
are many times those of phone transactions, and are the driving force
behind SET, the Secure Electronic Transaction protocol.

My primary fear about cyberspace is that people don't understand the risks,
and they are putting too much faith in technology's ability to obviate
them.  Products alone cannot solve security problems.

The digital security industry is in desperate need of a perceptual
shift.  Countermeasures are sold as ways to counter threats.  Good
encryption is sold as a way to prevent eavesdropping.  A good firewall is a
way to prevent network attacks.  PKI is sold as trust management, so you
can avoid mistakenly trusting people you really don't.  And so on.

This type of thinking is completely backward.  Security is old, older than
computers.  And the old-guard security industry thinks of countermeasures
not as ways to counter threats, but as ways to avoid risk.  This
distinction is enormous.  Avoiding threats is black and white: either you
avoid the threat, or you don't.  Avoiding risk is continuous: there is some
amount of risk you can accept, and some amount you can't.

Security processes are how you avoid risk.  Just as businesses use the
processes of double-entry bookkeeping, internal audits, and external audits
to secure their financials, businesses need to use a series of security
processes to protect their networks.

Security processes are not a replacement for products; they're a way of
using security products effectively.  They can help mitigate the
risks.  Network security products will have flaws; processes are necessary
to catch attackers exploiting those flaws, and to fix the flaws once they
become public.  Insider attacks will occur; processes are necessary to
detect the attacks, repair the damages, and prosecute the attackers.  Large
systemwide flaws will compromise entire products and services (think
digital cell phones, Microsoft Windows NT password protocols, or DVD);
processes are necessary to recover from the compromise and stay in business.

Here are two examples of how to focus on process in enterprise network
security:

1.  Watch for known vulnerabilities.  Most successful network-security
attacks target known vulnerabilities for which patches already
exist.  Why?  Because network administrators didn't install the patches, or
because users reinstalled the vulnerable systems.  It's easy to
be smart about the former, but just as important to be vigilant about the
latter.  There are many ways to check for known vulnerabilities.  Network
vulnerability scanners like Netect and SATAN test for them.  Phone scanners
like PhoneSweep check for rogue modems inside your corporation.  Other
scanners look for Web site vulnerabilities.  Use these sorts of products
regularly, and pay attention to the results.
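
The heart of such a scanner is easy to sketch.  The banner strings and
advisory names below are invented for illustration -- this is not Netect's
or SATAN's actual logic, just the core idea of matching what a service
reports about itself against a list of versions with known holes.

    #include <stdio.h>
    #include <string.h>

    /* A toy version of the core check inside a vulnerability scanner:
     * compare service banners gathered from the network against a
     * table of versions with known, already-patched holes. */
    struct known_vuln {
        const char *banner_prefix;  /* how the vulnerable version announces itself */
        const char *advisory;       /* where the fix is described */
    };

    static const struct known_vuln vulns[] = {
        { "ExampleFTPd 2.3", "VENDOR-2000-001 (upgrade to 2.4)" },
        { "DemoHTTPd 1.0",   "VENDOR-2000-007 (apply patch 1.0a)" },
    };

    static void check_banner(const char *host, const char *banner)
    {
        size_t i;
        for (i = 0; i < sizeof vulns / sizeof vulns[0]; i++) {
            size_t n = strlen(vulns[i].banner_prefix);
            if (strncmp(banner, vulns[i].banner_prefix, n) == 0) {
                printf("%s: VULNERABLE -- %s\n", host, vulns[i].advisory);
                return;
            }
        }
        printf("%s: no known issue in \"%s\"\n", host, banner);
    }

    int main(void)
    {
        /* real scanners collect these banners by connecting to each host */
        check_banner("10.0.0.5", "ExampleFTPd 2.3 ready");
        check_banner("10.0.0.9", "DemoHTTPd 1.1");
        return 0;
    }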

2.  Continuously monitor your network products.  Almost everything on your
network produces a continuous stream of audit information: firewalls,
intrusion detection systems, routers, servers, printers, etc.  Most of it
is irrelevant, but some of it contains footprints from successful
attacks.  Watching it all is vital for security, because an attack that
bypassed one product might be picked up by another.  For example, an
attacker might exploit a flaw in a firewall and bypass an IDS, but his
attempts to get root access on an internal server will appear in that
server's audit logs.  If you have a process in place to watch those logs,
you'll catch the intrusion in progress.
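
A toy version of that watching process is sketched below.  The log path
and the patterns are illustrative assumptions; a real deployment would
feed logs from every device through something like this continuously,
rather than scanning one file after the fact.

    #include <stdio.h>
    #include <string.h>

    /* A toy log watcher: scan a server's audit log for lines that
     * suggest attempts to gain root. */
    static const char *suspicious[] = {
        "FAILED su",               /* failed switch-user to root */
        "authentication failure",
        "ROOT LOGIN REFUSED",
    };

    int main(void)
    {
        const char *path = "/var/log/auth.log";  /* hypothetical path */
        char line[1024];
        long lineno = 0;
        FILE *fp = fopen(path, "r");

        if (fp == NULL) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof line, fp) != NULL) {
            size_t i;
            lineno++;
            for (i = 0; i < sizeof suspicious / sizeof suspicious[0]; i++) {
                if (strstr(line, suspicious[i]) != NULL) {
                    printf("suspicious (line %ld): %s", lineno, line);
                    break;
                }
            }
        }
        fclose(fp);
        return 0;
    }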

In this newsletter and elsewhere I have written pessimistically about the
future of computer security.  The future of computers is complexity, and
complexity is anathema to security.  The only reasonable thing to do is to
reduce your risk as much as possible.  We can't avoid threats, but we can
reduce risk.

Nowhere else in society do we put so much faith in technology.  No one has
ever said, "This door lock is so effective that we don't need police
protection, or breaking-and-entering laws."  Products work to a certain
extent, but you need processes in place to leverage their effectiveness.

Copyright (c) 2000 by Counterpane Internet Security, Inc.
Bruce Schneier, CTO, Counterpane Internet Security, Inc.  Ph: 408-556-2401
3031 Tisch Way, 100 Plaza East, San Jose, CA 95128       Fax: 408-556-0889

  [A version of this essay originally appeared in the April 2000 issue of
  *Information Security* magazine.
    <http://www.infosecuritymag.com/apr2000/cryptorhythms.htm>]

