Secure Coding mailing list archives

Interesting tidbit in iDefense Security Advisory 06.26.07


From: dwheeler at ida.org (David A. Wheeler)
Date: Thu, 28 Jun 2007 14:47:30 -0400

On the comment:
| I am not disagreeing with the fact the static source analysis is a
| good thing, I am just saying that this is a case where it failed (or
| maybe the user/developer of it failed or misunderstood its use). Fair
| enough that on this particular list you are going to defend source
| analysis over any other method, it is about secure coding after all,
| but I definitely still strongly disagree that other methods wouldn't
| have found this bug.

Actually, I am _not_ of the opinion that analysis tools are always 
"better" than any other method.  I don't really believe in a silver 
bullet, but if I had to pick one, "developer education" would be my 
silver bullet, not analysis tools.  (Basically, a "fool with a tool is 
still a fool".)  I believe that for secure software you need a SET of 
methods, and tool use is just a part of it.

That said, I think tools that search for vulnerabilities usually need to 
be PART of the answer for secure software in today's world.  Customers 
are generally _unwilling_ to reduce the amount of functionality they 
want to something we can easily prove correct, and formally proving 
programs correct has not scaled well yet (though I commend the work to 
overcome this).   No language can prevent all vulnerabilities from being 
written in the first place.  Human review is _great_, but it's costly in 
many circumstances and it often misses things that tools _can_ pick up. 
So we end up needing analysis tools as part of the process, even though 
current tools have a HUGE list of problems... because NOT using them is 
often worse.  Other methods might well have found this particular bug, but 
those methods typically don't scale well.
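
(A concrete illustration - mine, not anything taken from the advisory
under discussion - of the kind of thing I mean: the snippet below shows
the sort of defect a lexical scanner such as flawfinder or RATS flags
on sight, yet a tired human reviewer can skim right past.  The helper
and its names are hypothetical.)

    #include <string.h>
    #include <syslog.h>

    /* Hypothetical logging helper: 'msg' may contain attacker-supplied
     * text.  Passing it directly as the format string is the classic
     * format-string flaw; scanners flag any printf/syslog-family call
     * whose format argument is not a string literal. */
    static void log_message(const char *msg)
    {
        syslog(LOG_INFO, msg);          /* flagged: non-constant format */
        /* safe form: syslog(LOG_INFO, "%s", msg); */
    }

    int main(void)
    {
        char buf[64];
        strcpy(buf, "pretend this came off the wire"); /* flagged: strcpy */
        log_message(buf);
        return 0;
    }

The tool isn't smart; it just never gets tired of checking the same
boring pattern at every call site.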

As with almost everything in software engineering, sad to say, there is
very little objective evidence.  It's hard and expensive to gather, and
those who are in a position to pay for the effort rarely see a major
payoff from making it.

This is a serious problem with software development as a whole; there's 
almost no science behind it at all.   Science requires repeatable 
experiments with measurable results, but most decisions in software 
development are based on guesses and fads, not hard scientific data.

I'm not saying educated guesses are always bad; when you don't have 
data, and you need to do something NOW, educated guesses are often the 
best you can do... and we'll never know EVERYTHING.  The problem is that 
we rarely perform or publish the experiments to get better, so we don't 
make real PROGRESS.  And we don't do the experiments in part because few 
organizations fund actual, publishable scientific work to determine 
which software development approaches work best. 
  (There are exceptions, but it sure isn't the norm.)  We have some good 
science on very low-level stuff like big-O/complexity theory, syntactic 
models, and so on - hey, we can prove trivial programs correct! But 
we're hopeless if you want to use science to make decisions about 
50-million-line programs.  Which design approaches or processes are best, and 
for what purpose - and can you show me the experimental results to 
justify the claim?  There IS some such evidence, but not much.  We lack the scientific 
information necessary to make decisions about many real-world (big) 
applications, and what's worse, we lack a societal process to grow that 
pool of information.  I've no idea how to fix that.
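
(To make that "trivial programs" aside concrete - again my own
illustration, with made-up names - this is roughly the scale at which
machine-checked correctness is routine today; a few lines in a proof
assistant such as Lean:)

    -- A trivial "program": swap the two components of a pair.
    def swapPair (p : Nat × Nat) : Nat × Nat := (p.2, p.1)

    -- A machine-checked proof that swapping twice is the identity.
    theorem swapPair_swapPair (p : Nat × Nat) :
        swapPair (swapPair p) = p := by
      cases p
      rfl

Scale that up to a 50-million-line system with concurrency and a
moving specification, and the science runs out fast.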

--- David A. Wheeler



