Information Security News mailing list archives

Are we ready for a cyber-UL?


From: InfoSec News <isn () C4I ORG>
Date: Thu, 4 Jan 2001 22:54:43 -0600

http://www.zdnet.com/zdnn/stories/comment/0,5859,2669708,00.html

By Bruce Schneier, CTO, Counterpane Internet Security Inc.,
Special to ZDNet
January 2, 2001 4:05 PM PT

Underwriters Laboratories (UL) is an independent testing organization
that rates electrical equipment, safes, and a whole lot of other
things. It all started in 1893, when William Henry Merrill was called
in to find out why the Palace of Electricity at the Columbian
Exposition in Chicago kept catching on fire (not the best way to tout
the wonders of electricity).

After making the exhibit safe, Merrill realized he had a business
model on his hands.

He approached insurance underwriters with the idea of an independent
testing lab. They were all sick of paying for electricity fires, and
took him up on the deal. Eventually, if your electrical equipment
wasn't UL-certified, you couldn't get insurance.

Real-world testing

Today, Underwriters Laboratories rates all kinds of
equipment, not just electrical. Safes, for example, are rated based on
time and materials. A "TL-15" rating means that the safe is secure
against a burglar limited to safecracking tools (and not torches or
explosives) and 15 minutes' working time.

Other ratings certify the safe for longer periods of time or against
burglars with blowtorches and explosives. These ratings are not
theoretical; actual hotshot safecrackers, employed by UL, take actual
safes and test them. If a company comes out with a new version of the
safe, it has to be retested -- the rating does not carry forward.

Applying this sort of thinking to computer networks -- firewalls,
operating systems, Web servers -- is a natural idea. And the newly
formed Center for Internet Security plans to implement it. I'll talk
about the general idea first and then the specifics.

A moving target

I don't believe that this is a good idea, certainly
not now and possibly not ever. First, network security is too much of
a moving target. Safes are easy. Safecracking tools don't change much.
Maybe someone invents a hotter torch, or someone else invents a more
sensitive microphone. But most of the time, techniques of safecracking
remain constant.

Not so with the Internet. There are always new vulnerabilities, new
attacks, new countermeasures. There are a couple of dozen new
vulnerabilities each week in major software products; any rating is
likely to become obsolete within months, if not weeks.

Second, network security is much too hard to test. Again, safes are
easy. Breaking into them requires skill but is reasonably
straightforward. Modern software is obscenely complex: There's an
enormous number of features, configurations, implementations. And then
there are interactions between different products, different vendors,
and different networks.

In the past, I've written extensively about complexity and the
impossibility of testing security. For now, suffice it to say that
testing any reasonably sized software product would cost millions of
dollars and wouldn't guarantee anything at the end. And worse, if you
updated the product you'd have to test it all over again.

Recalibrating ratings

Third, I'm not sure how to make security ratings
meaningful. Intuitively, I know what it means to have a safe rated at
30 minutes and another rated at an hour. But computer attacks don't
take time in the same way that safecracking does.

The Center for Internet Security talks about a rating from 1 to 10.
What does a 9 mean? What does a 3 mean? How can ratings be anything
other than binary: Either there is a vulnerability or there isn't?

The moving-target problem particularly exacerbates this issue. Imagine
a server with a 10 rating; there are no known weaknesses. Someone
publishes a vulnerability that allows an attacker to break in.

What is the server's rating now? Nine? One? How are users notified of
this change? Is the manufacturer required to change its official
rating on its Web site? On its packaging? How does the Center re-rate
the server once it is updated? But then the rating only affects
certain patch levels of the product; how do you explain that? And once
you've solved that, how do you deal with vulnerabilities that only
affect the product in some configurations?
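
To see why, consider a purely hypothetical sketch -- the function, the
scoring floor, and the sample vulnerability below are my own inventions,
not the Center's scheme. The moment an exploitable hole is published,
whatever number was assigned stops meaning anything:

# Hypothetical illustration only -- not the Center's rating scheme.
def effective_rating(assigned_score, known_vulnerabilities):
    """Return the assigned 1-10 score only while no vulnerability is known."""
    if known_vulnerabilities:   # a single exploitable hole lets the attacker in
        return 1                # so in practice the answer is pass/fail
    return assigned_score

print(effective_rating(9, []))                       # 9: "no known weaknesses"
print(effective_rating(9, ["remote root exploit"]))  # 1: the 9 meant nothing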

Fourth, failures in network security are not always obvious. If a safe
is broken into, the owner learns about it when he next opens his safe.
If a network is broken into, the owner might never know. Data isn't
stolen in the same way as diamonds or cash: It is copied, it is
modified, or it is just examined.

Remember that Microsoft Corp.'s network was compromised for weeks
before anyone knew about it. I believe that most network intrusions
are never even noticed. A "secure" network product might fail
completely, and no one would be the wiser.

Fifth, I don't see how a rating could take context into account. Safes
are just as hard to crack in a bank as they are in a house; network
security products are highly dependent on their environment.

A product rating cannot take into account the environment and
interactions that a component will deal with. Network components would
be certified in isolation but deployed in a complex interacting
environment. It is common to have several individual "secure"
components completely blow a security model when they are all forced
to interact with each other.

Sixth, I don't see how to combine this concept with security
practices. Today the biggest problem with firewalls is not how they're
built but how the user configures them.

How does a security rating take that into account? How does a security
rating take into account the people problem: users naively executing
e-mail attachments, or resetting passwords when a stranger calls and
asks them to? Simply prohibiting attachments will make a network more
secure.
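
For illustration, here is a purely hypothetical configuration audit -- the
rule format and the checks are invented, not any vendor's or the Center's.
The same firewall product can be wide open or locked down depending on
nothing but its rule set, which is exactly what a product rating cannot see:

# Hypothetical illustration only -- invented rule format and checks.
DEFAULT_DENY = {"src": "any", "dst": "any", "port": "any", "action": "deny"}

rules = [
    {"src": "any", "dst": "mail-server", "port": 25, "action": "allow"},
    {"src": "any", "dst": "any", "port": "any", "action": "allow"},  # careless catch-all
]

def audit(rules):
    """Flag configuration mistakes that no product rating would catch."""
    findings = []
    for rule in rules:
        if rule["action"] == "allow" and rule["src"] == "any" and rule["dst"] == "any":
            findings.append("a rule permits all traffic -- the firewall is effectively off")
    if not rules or rules[-1] != DEFAULT_DENY:
        findings.append("rule set does not end in an explicit default-deny")
    return findings

for finding in audit(rules):
    print(finding)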

And seventh, this kind of thing could easily fall into the trap of
bashing small products and protecting large products. A
little-discussed fact of computer security is that minority products
are more secure than popular products for the simple reason that there
aren't as many exploits for them.

But the unpopularity of those products might make it difficult for
them to pay for evaluation. And can major vendors be held to the same
standards as everyone else? It will take a lot of organizational
fortitude to fail Microsoft's security, for example.

This is not to say that there's no hope. I believe that the insurance
industry will eventually drive network security, and that some sort of
independent testing is inevitable. But I don't think that providing a
rating or a seal of approval is possible anytime soon.

Proceed with caution

Even so, the Center for Internet Security is
tackling the challenge. Unfortunately, I don't particularly like what
I see so far (although admittedly, I haven't seen much). Looking at
the group's Web site, it seems more like a marketing scheme than
anything else. A security supplier or consulting organization can
spend $25,000 to become a member. (Organizations that use security
products can join for only $7,000.)

Benefits include "your organization's name ... on Center brochures and
benchmarks documents," "your organization's name included on the
Center's Web site," and "the privilege of using the Center's logo on
your Web site." The last time I checked, there were 71 charter
members.

The group's initial push is to consolidate a bunch of the mediocre
security requirements documents out there (such as BS7799) and come up
with a "final set of minimum benchmarks to be used as a basis for
demonstrating due care," and to create a suite of tests that can give
computer owners some kind of security rating or feeling of confidence.
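
For illustration only, a single check in such a test suite might look
something like the sketch below; the specific tests and the scoring are my
own assumptions, not drawn from the Center's documents.

# Hypothetical illustration only -- not the Center's actual benchmarks.
import socket

def port_closed(host, port, timeout=1.0):
    """True if nothing answers a TCP connection to host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False
    except OSError:
        return True

checks = {
    "telnet service disabled (port 23 closed)": lambda: port_closed("localhost", 23),
    "finger service disabled (port 79 closed)": lambda: port_closed("localhost", 79),
}

passed = sum(1 for check in checks.values() if check())
print(f"{passed} of {len(checks)} benchmark checks passed")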

I see ideas like this as part of the Citadel model of security, as
opposed to the Insurance model. The Citadel model basically says: "If
you have this stuff and do these things, you'll be safe." The
Insurance model says: "Inevitably things will go wrong, so you need to
plan for what happens when they do."

In theory, the Citadel model is a much better model than the
pessimistic, fatalistic Insurance model. But in practice, no one has
ever built a citadel that is both functional and dependable.

My worry is that this group will become yet another
"extort-a-standard" body that charges companies for a seal of
approval. I believe that the people behind the Center for Internet
Security have completely pure motives; you can be an ethical
extortionist with completely honorable intentions. What makes it
extortion is the detriment from not paying. If you don't have the
"Security Seal of Approval," then -- tsk, tsk! -- you're just not
concerned about security.

ISN is hosted by SecurityFocus.com
---
To unsubscribe email LISTSERV () SecurityFocus com with a message body of
"SIGNOFF ISN".

