
Standards for security


From: Gene Spafford <spaf () cerias purdue edu>
Date: Sun, 11 Jan 2004 16:27:07 +0000

I have noted several of the comments on this list about
the need for standards for security, either for certification or for 
development.


Historically, attempts at setting such standards have had limited 
success.   The Orange Book (and the "Rainbow Series" in general) 
presented some very good guidelines for a particular kind of 
environment (mainframe, limited networking, cleared personnel). 
However, it turned out that the community didn't have development 
methods that were fast enough and cheap enough to compete with the 
consumer market, so those standards were eventually abandoned.   Some 
extremely secure systems have been developed using those standards -- 
SCOMP, GEMSOS, PSOS, and even Multics, to name a few.  That they are
not widely used now is a result of economic factors more than 
anything else.    Some standards are still in use in the same 
environments (e.g., the DITSCAP, the DoD Information Technology
Security Certification and Accreditation Process; see
<http://iase.disa.mil/ditscap/>), which is a good thing considering
the sensitivity of those environments.


The NRC "Computers at Risk" study (still a valid and comprehensive 
look at some of our problems) articulated the need for security 
training and standards.  The GASSP (Generally Accepted System
Security Principles) came out of that effort and represent a useful
set of standards to support security.  (See
<http://web.mit.edu/security/www/gassp1.html>.)   Unfortunately, 
those were not widely disseminated, particularly in educational
programs.  The fact that so much discussion has gone on in this list
without anyone even mentioning them suggests how many people
mistakenly equate the problem of code quality with security.  That is
largely caused by the "noise" right now, I suspect.  Too many reports 
about buffer overflows lead the average person to believe that they
are the biggest (or only) security threat.  Comparing counts of flaws
in IIS to those in
Apache leads one to believe that Microsoft is at fault for most 
security problems.     Repeat something often enough without applying 
the scientific method and it must be true, right?    A lead ball must 
drop faster than a wooden ball because it is heavier, and breaking a 
mirror brings bad luck, and vitamin C prevents colds, and open source 
is more secure than proprietary code.   Anecdotal experience does not 
prove general truths -- but it does lead to superstition.  And a lack 
of knowledge of the field (e.g., DITSCAP, GASSP, PSOS, CMM) 
reinforces shallow opinions.


A software system cannot be evaluated for security if it is faulty -- 
it automatically fails.  Arguments about buffer overflows, argument
validation, and bad pointer arithmetic are arguments over the quality
of an implementation.  Those things will cause software to fail under
circumstances that have nothing to do with any security policy --
they aren't security-specific; they are issues of quality.  That
doesn't make them unimportant, but it does distract people from 
thinking about the big picture of security.
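
To make the distinction concrete, here is a minimal, purely
illustrative C fragment (the names are invented for this example).
The defect below will crash the program on sufficiently long input,
hostile or otherwise -- it is a correctness bug first, and only
incidentally a security issue:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: copy a caller-supplied name into a
       fixed-size buffer with no bounds check. */
    void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);   /* fault: writes past buf for long names */
        printf("Hello, %s\n", buf);
    }

    int main(void)
    {
        /* No attacker needed: a legitimate but long name triggers
           the same memory corruption an exploit would. */
        greet("a perfectly ordinary but rather long user name");
        return 0;
    }

The quality fix -- a bounded copy such as snprintf(buf, sizeof buf,
"%s", name) -- needs no security analysis at all to justify.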


Suppose that you had your software written in a fully type-safe 
language by expert programmers.   There are no buffer overflows. 
There are no pointers.   All error conditions are caught and handled. 
Is it secure?  Well, that question is silly to ask, because security
is not an absolute that holds across all contexts and uses.  In the
strictest, absolute sense, no system can ever be truly secure.


We CAN ask "Is it trustworthy to use in context A against security 
policy B?"  Then we need to evaluate it against a set of questions,
and the lack of coding errors does not give us an automatic answer.  We
might need to ask about auditing, about proper authentication, about 
media residue handling, about distribution pathways, about the 
connectivity to removable media and networks, about data provenance, 
and about dozens of other properties that determine whether the 
software upholds policy B in context A.     Safe coding does not 
equal security or trustworthiness.   People who REALLY understand 
information security understand this basic maxim.
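
To illustrate -- with a scenario and file name I have made up for the
purpose -- the following C fragment is "safe" in the coding sense:
nothing overflows, and every error condition is checked.  Whether it
is trustworthy depends entirely on the policy; under a
confidentiality policy it plainly is not, because it writes a
credential where any local user can read it:

    #include <stdio.h>

    /* Hypothetical example: no overflows, no unchecked errors --
       and still a violation of any policy that forbids storing
       credentials in cleartext in a shared directory. */
    int save_password(const char *password)
    {
        FILE *f = fopen("/tmp/session.txt", "w"); /* typically
                                                     world-readable */
        if (f == NULL)
            return -1;                /* error condition handled */
        if (fprintf(f, "%s\n", password) < 0) {
            fclose(f);
            return -1;
        }
        return fclose(f);             /* 0 on success */
    }

No amount of inspection of that code alone can decide whether it is
acceptable; the answer lives in policy B and context A.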


The topic of this list, "secure coding," is, in a sense, a misnomer.
"Safe coding" or "Quality coding" would be a better term.  Security
is a property supported by design, operation, and monitoring... of
correct code.   When the code is incorrect, you can't really talk 
about security.  When the code is faulty, it cannot be safe.  Of 
course, that harks back to my earlier posts about requirements and
specifications.   You can't really tell if the code is faulty if you 
aren't precisely sure of what it is supposed to do.....
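
A toy example of that last point (mine, not from any real system):

    /* Is this function faulty?  There is no way to tell from the
       code alone.  If the specification says callers never pass
       b == 0, it is correct as written; if the specification says
       it must handle all inputs, the possible division by zero is
       a fault.  The judgment lives in the spec, not the code. */
    int ratio(int a, int b)
    {
        return a / b;
    }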


At the recent CRA Grand Challenges conference on security 
(<http://www.cra.org/Activities/grand.challenges/security/home.html>), 
several of us were bemoaning the fact that we have a generation who 
think that the ability to download tools to exploit buffer overflows 
makes them "security experts."  If nothing else, could I request
that the participants in this list keep from perpetuating that canard?


--spaf





