Secure Coding mailing list archives

Re: Interesting article ZDNet re informal software development quality


From: "Brian Hetrick" <brian () brianhetrick com>
Date: Wed, 07 Jan 2004 15:55:08 +0000

On 6 Jan 2004 at 18:06, George Capehart wrote:

> I agree with the sentiments.  I'd like to take them a step further,
> though.  I spent a lot of time in the manufacturing environment and
> led several BPR projects.  They all had a quality component to them,
> but used different (quality) methodologies.  After a while I came to
> believe that for quality (as for security), process is important.
> The nature of the process is not anywhere near as important as the
> discipline and focus of the process.  What I mean is this: whether
> it be waterfall, RUP, XP, or whatever, if quality(/security) is
> important to the process, it will be there.  If quality(/security)
> is *not* important to the process, it will not be there, even if the
> process is CMM Level 5.

I have to agree with this.  Over the last several years, I have had
the opportunity to look at several dozen in-house applications built
by various companies -- ranging from utilities to software development
houses -- sized between 500 KLOC and 15 MLOC.  I have to say that any
development methodology beats none, and almost any formal methodology
beats almost any informal methodology.  I also have to say that the
behavior rewarded is the behavior obtained.  You cannot reward
behavior you cannot see, and unless an attribute is measured, it is
invisible.  The development process cannot reward qualities it does
not measure.  Most development processes are managed by calendar date
and budget.  In those cases the behavior being rewarded -- and
therefore the behavior being obtained -- is minimizing development
time and cost, regardless of what management thinks or hopes is
happening.

(One of the areas I have researched is whether static analysis of the
code produced by a development process can reveal the values embodied
in that process rather than the values given lip service by
management.  The answer, incidentally, is "yes.")

I think the real problem, though, is that security is different from
most quality attributes in a very fundamental way.  Security is an
anti-requirement, not a requirement.  A system being secure in a
particular way means that the system cannot do something, rather than
that it can.  There are always an infinite number of things not to
do, so there are always an infinite number of security requirements.
We are very good at constructing processes that produce a particular
desired outcome (which is what programming is); we are much less good
at constructing processes that produce ONLY that desired outcome.

The strncpy() C standard library routine has already been discussed on
this list.  As an example, though, consider an even simpler function:
strcpy(), which copies a source string to a destination buffer, up to
and including the first zero char encountered.  The canonical
one-line implementation of strcpy() is:

     char * strcpy (char * d, const char * s) {
         char * r = d; while (* d ++ = * s ++) ; return r;
     }

This implementation is simple, straightforward, provably correct, and
wrong.  Why is it wrong?  Look carefully.  It stores past the end of
the destination string.

It doesn't?  Convince yourself.  The while loop exits as soon as the
terminating zero char is stored, so although d ends up pointing past
the end of the destination string, that address is never dereferenced,
and the loop does not store past the end of the destination string.

Except it does -- on machines without byte-granular storage.  On such
machines -- several RISC architectures spring to mind -- storing a
byte consists of fetching the containing word, manipulating the word
to put the byte value into the appropriate field of bits, and storing
the whole word back.  This creates a race condition with other
processes or threads manipulating the other bytes of the same word.
This is a new -- or at least, for most of us, unexpected -- way for
such simple code to fail.
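
To make the failure mode concrete, here is a sketch in C of what
"store one byte" amounts to on such a machine.  (This is my
illustration of the read-modify-write sequence, not code for any
particular architecture; the byte numbering within the word is
arbitrary.)

     #include <stdint.h>

     /* Illustrative only: a byte store on a machine that can only
        load and store full 32-bit words. */
     void store_byte (uint32_t * mem, unsigned byte_index, uint8_t value) {
         unsigned word_index = byte_index / 4;    /* word holding the byte  */
         unsigned shift = (byte_index % 4) * 8;   /* byte position in word  */
         uint32_t word = mem [word_index];        /* (1) fetch the word     */
         word &= ~ ((uint32_t) 0xFF << shift);    /* (2) clear the old byte */
         word |= (uint32_t) value << shift;       /* (3) insert the new one */
         mem [word_index] = word;                 /* (4) store the word     */
     }

If another thread stores into a neighboring byte of the same word
between steps (1) and (4), one of the two updates is silently lost.
A strcpy() into one part of a word-aligned region can thus corrupt
data being written into another part, even though neither copy ever
touches the other's bytes at the source level.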

We are starting to get a handle on positive requirements, such as
"perform this computation."  Thanks to various luminaries such as
Knuth and Dijkstra, we have theory that can handle positive
requirements, even if we usually use (hopefully educated) intuition.
I think, though, that we do not yet have a theory that can address
negative requirements such as are needed in computer security.  This
is not to say we should abandon hope of a solution; but I think we
should not expect a solution to be either intuitive or simple.
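
One way to state the asymmetry (my framing, not anything established
in this thread) is in Hoare-triple terms.  A positive requirement
fits the notation naturally:

     { s points to a zero-terminated string; d has room for it }
         strcpy (d, s)
     { the chars at d equal the original chars at s }

and such a triple can, in principle, be proved.  But the triple is
satisfied just as well by an implementation that also performs the
extra word-wide loads and stores sketched above.  "And does nothing
else" is precisely the part the notation does not express.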
