Secure Coding mailing list archives

Re: Standards for security


From: "Jeff Williams @ Aspect" <jeff.williams () aspectsecurity com>
Date: Mon, 12 Jan 2004 16:03:07 +0000

I've always found that thinking about features and assurance separately (as
in the Orange Book and CC) is useful here. Spaf correctly points out that
features are really what enforce policy, and are in one sense much more
tightly bound to "security" than buffer overflows et al.  And I fully agree
that most organizations desperately need to articulate a security policy for
their applications.

Still, I've found that organizations have much more trouble with assurance
than with features. The features are no different than any other
requirement. What's different about security is that you need a level of
assurance that the application will do what it's intended to do and nothing
more. Organizations should have a frank discussion about what level of
assurance is appropriate for each project.

I recently spoke at a meeting of the security officers at all the major
telecom companies about assurance and the damage a malicious developer could
do. They appreciated the input, but frankly stated that they couldn't do
anything in their own organizations until other organizations did it first.


In that context, it's not surprising that the standards you've listed have
failed, as they are all abstract and require a lot of security expertise to
apply.  For anything to change the way software is developed in this
country, it has to change the *market* for application software.

I'm thrilled that the FTC is going after companies like GUESS and PETCO for
having SQL injection flaws. They're pointing to the OWASP Top Ten as
evidence that a standard trade practice exists. This is the kind of standard that
works -- easy enough for lawyers to understand and apply in court.
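
For anyone on the list who hasn't seen one of these flaws up close, here is a
minimal sketch (hypothetical Java, not code from either case) of the difference
between concatenating user input into a SQL query and binding it as a parameter:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable: input such as  ' OR '1'='1  becomes part of the SQL text,
    // turning a lookup of one user into a dump of every row in the table.
    static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Parameterized: the value is bound separately from the SQL text, so the
    // database never interprets the input as query syntax.
    static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}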

Anyway, my guess is that the market is changing the way the automobile industry
did 30 years ago. Today you can't buy a car without side impact protection,
airbags, crumple zones, seatbelts, and god knows what else. As soon as a few
companies start making a big deal about security (and Microsoft is already),
the rest of the market won't be too far behind.

--Jeff

Jeff Williams
Aspect Security
http://www.aspectsecurity.com


----- Original Message ----- 
From: Gene Spafford
To: [EMAIL PROTECTED]
Sent: Saturday, January 10, 2004 1:43 PM
Subject: [SC-L] Standards for security


I have noted several of the comments on this list from people about
the need for standards for security, either for certification or for
development.

Historically, attempts at setting such standards have had limited
success.   The Orange Book (and the "Rainbow Series" in general)
presented some very good guidelines for a particular kind of
environment (mainframe, limited networking, cleared personnel).
However, it turned out that the community didn't have development
methods that were fast enough and cheap enough to compete with the
consumer market, so those standards were eventually abandoned.   Some
extremely secure systems have been developed using those standards -- 
SCOMP, GEMSOS, PSOS, and even Multics, to name a few.  That they are
not widely used now is a result of economic factors more than
anything else.    Some standards are still in use in the same
environments (e.g., the DITSCAP, see
<http://iase.disa.mil/ditscap/>), which is a good thing considering
the sensitivity of those environments.

The NRC "Computers at Risk" study (still a valid and comprehensive
look at some of our problems) articulated the need for security
training and standards.   The GASSP came out of that, and represent a
useful set of standards to support security.  (See
<http://web.mit.edu/security/www/gassp1.html>.)   Unfortunately,
those were not widely articulated, particularly in educational
programs.  The fact that so much discussion has gone on in this list
without anyone even mentioning them indicates how many people
mistakenly equate the problem of code quality with security.  That is
largely caused by the "noise" right now, I suspect.  Too many reports
about buffer overflows lead the average person to believe that they are
the biggest (or only) security threat.   Comparing flaws in IIS with
flaws in Apache leads one to believe that Microsoft is at fault for most
security problems.     Repeat something often enough without applying
the scientific method and it must be true, right?    A lead ball must
drop faster than a wooden ball because it is heavier, and breaking a
mirror brings bad luck, and vitamin C prevents colds, and open source
is more secure than proprietary code.   Anecdotal experience does not
prove general truths -- but it does lead to superstition.  And a lack
of knowledge of the field (e.g., DITSCAP, GASSP, PSOS, CMM)
reinforces shallow opinions.

A software system cannot be evaluated for security if it is faulty -- 
it automatically fails.   Arguments about issues of buffer overflow,
argument validation, and bad pointer arithmetic are arguments over
quality of implementation.  Those things will cause software to fail
under circumstances that are not related to security policy -- they
aren't security-specific, they are issues of quality.      That
doesn't make them unimportant, but it does distract people from
thinking about the big picture of security.
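
As a concrete illustration (a hypothetical Java fragment, not from the original
post), an argument-validation flaw of this kind is a correctness bug whether or
not any security policy is in play:

// A quality-of-implementation flaw: nothing here is specific to any
// security policy, yet the code fails on perfectly ordinary bad input.
public class ConfigParser {

    // Expects lines of the form "key=value".  A line with no '=' makes
    // split() return a one-element array, and indexing parts[1] then throws
    // ArrayIndexOutOfBoundsException -- a correctness bug, not a policy violation.
    static String valueFor(String line) {
        String[] parts = line.split("=");
        return parts[1];
    }

    // A validated version simply rejects malformed input up front.
    static String valueForChecked(String line) {
        int idx = line.indexOf('=');
        if (idx < 0) {
            throw new IllegalArgumentException("expected key=value, got: " + line);
        }
        return line.substring(idx + 1);
    }

    public static void main(String[] args) {
        System.out.println(valueForChecked("timeout=30"));   // prints 30
        System.out.println(valueFor("malformed line"));      // throws at runtime
    }
}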

Suppose that you had your software written in a fully type-safe
language by expert programmers.   There are no buffer overflows.
There are no pointers.   All error conditions are caught and handled.
Is it secure?    Well, that question is silly to ask, because it
treats security as an absolute across all contexts and uses.   In the
strictest sense, no system can ever be truly secure.

We CAN ask "Is it trustworthy to use in context A against security
policy B?"   Then we need to evaluate it against a  set of questions:
The lack of coding errors does not give us an automatic answer.   We
might need to ask about auditing, about proper authentication, about
media residue handling, about distribution pathways, about the
connectivity to removable media and networks, about data provenance,
and about dozens of other properties that determine whether the
software upholds policy B in context A.     Safe coding does not
equal security or trustworthiness.   People who REALLY understand
information security understand this basic maxim.
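
As a concrete illustration of that maxim (a hypothetical Java class, not from
the original post): the code below is type-safe, has no buffer or pointer
errors, and handles its error conditions, yet it still fails a policy such as
"users may read only their own account data" because nothing in it ever asks
who the caller is.

import java.util.HashMap;
import java.util.Map;

// Type-safe, no buffer errors, error conditions handled -- and still not
// trustworthy against a policy like "users may read only their own records",
// because nothing here establishes who the caller is or what they may see.
public class AccountService {

    private final Map<String, String> balances = new HashMap<>();

    public AccountService() {
        balances.put("alice", "1,204.17");
        balances.put("bob", "88.02");
    }

    // "Correct" code: validates its argument and handles the missing-account
    // case, but performs no authentication or authorization whatsoever.
    public String balanceFor(String accountId) {
        if (accountId == null || accountId.isEmpty()) {
            throw new IllegalArgumentException("accountId is required");
        }
        String balance = balances.get(accountId);
        return (balance != null) ? balance : "no such account";
    }

    public static void main(String[] args) {
        AccountService service = new AccountService();
        // Any caller can read any account -- a policy failure, not a coding error.
        System.out.println(service.balanceFor("alice"));
        System.out.println(service.balanceFor("bob"));
    }
}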

The topic of this list, "secure coding," is, in a sense, a misnomer.
"Safe coding" or "Quality coding" would be a better term.   Security
is a property supported by design, operation, and monitoring....of
correct code.   When the code is incorrect, you can't really talk
about security.  When the code is faulty, it cannot be safe.  Of
course, that harkens back to my earlier posts about requirements and
specifications.   You can't really tell if the code is faulty if you
aren't precisely sure of what it is supposed to do.....

At the recent CRA Grand Challenges conference on security
(<http://www.cra.org/Activities/grand.challenges/security/home.html>),
several of us were bemoaning the fact that we have a generation who
think that the ability to download tools to exploit buffer overflows
makes them "security experts."     If nothing else,  could I request
that the participants in this list keep from perpetuating that canard?

--spaf







