Secure Coding mailing list archives

Secure programming is NOT just good programming


From: leichter_jerrold at emc.com (Leichter, Jerry)
Date: Thu, 12 Oct 2006 15:19:07 -0400 (EDT)

| Here are some practices you should typically be doing
| if you're worried about security, and note that many are
| typically NOT considered "good programming"
| by the general community of software developers:
| * You need to identify the threats that you'll counter (as requirements)
| * Design so that the threats are countered, e.g., mutually
|    suspicious processes, small trusted computing base (TCB), etc.
| * Choose programming languages where you're less likely to
|    have security flaws, and where you can't (e.g., must use C/C++), use extra
|    security scanning tools and warning flags to reduce the risk.
| * Train on the specific common SECURITY failures of that
|    language, so you can avoid them. (E.g., gets() is verboten.)
| * Have peer reviews of the code, so that others can help find
|    problems/vulnerabilities.
| * Test specifically for security properties, and use fuzz testing
|    rigged to test for them.
| Few of these are done, particularly the first two. I'll concede
| that many open source software projects do peer reviews, 
Actually, I wouldn't even concede that.  Checking for security properties
is not the same thing as checking for correct implementation of the
intended functionality (unless your spec of the intended functionality
is extraordinarily good).  It takes practice to even know what to look
for in a security-oriented peer review.  To take one trivial example:
Since there is never enough time to do a really complete review, most
reviewers will naturally emphasize the paths executed in the common
use cases.  They will de-emphasize, and often outright ignore, the
rare cases and even more so the error paths.  Unfortunately, it's
exactly in those rare or "can't happen" or "bad error" paths that
most security bugs lie.
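
To make that concrete, here's a deliberately simplified C sketch (the
function names and buffer sizes are invented for illustration).  The
common path is bounded and careful; the "can't happen" error path, the
one a reviewer skims, is where the holes are:

    #include <stdio.h>
    #include <string.h>

    #define REPLY_MAX 256

    static void handle_request(const char *user_input, int status)
    {
        char reply[REPLY_MAX];

        if (status == 0) {
            /* Common case: bounded, reviewed, exercised by every test. */
            snprintf(reply, sizeof reply, "OK: request accepted");
        } else {
            /* Rare error path: user data flows into an unbounded copy
               and then into a format string -- exactly the code a
               reviewer skims because it "never runs". */
            char detail[64];
            strcpy(detail, user_input);            /* overflows past 63 bytes */
            snprintf(reply, sizeof reply, detail); /* format-string hole */
        }
        puts(reply);
    }

    int main(void)
    {
        handle_request("normal request", 0);  /* the path everyone tests */
        handle_request("disk full", 1);       /* harmless here; hostile input
                                                 on this path would not be */
        return 0;
    }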

|                                                          but you
| really want ALL of these practices.
| 
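On the fuzz-testing item above: the shape of a harness "rigged to test
for security properties" can be this small.  A toy C sketch, with
parse_header() standing in for whatever routine is under test
(everything here is invented for illustration):

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define OUT_MAX 64

    /* Routine under test: must never write more than OUT_MAX bytes,
       terminator included. */
    static size_t parse_header(const unsigned char *in, size_t len, char *out)
    {
        size_t n = len < OUT_MAX - 1 ? len : OUT_MAX - 1;
        memcpy(out, in, n);
        out[n] = '\0';
        return n;
    }

    int main(void)
    {
        srand(12345);                          /* fixed seed, reproducible */
        for (int i = 0; i < 100000; i++) {
            unsigned char in[512];
            char out[OUT_MAX];
            size_t len = (size_t)(rand() % (int)sizeof in);
            for (size_t j = 0; j < len; j++)
                in[j] = (unsigned char)(rand() & 0xff);

            size_t n = parse_header(in, len, out);

            /* The assert states the SECURITY property, not the
               functional one. */
            assert(n < OUT_MAX && out[n] == '\0');
        }
        puts("no property violations found");
        return 0;
    }

A real setup would use a coverage-guided fuzzer instead of rand(), but
the point is where the assert sits.
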
| Next, "Most books teach good programming." Pooey, though I wish they did.
| I still find buffer overflows in examples inside books on C/C++.
| I know the first version of K&R used scanf("...%s...", ...) without noting
| that you could NEVER use this on untrusted input; I think the
| second edition used gets() without commenting on its security problems.
| A typical PHP book is littered with examples that are XSS disasters.
I agree 100% here.  At best, you'll get one example of how you should
"test your inputs for reasonable values".

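For what it's worth, the fix the books should be teaching is one line
away.  A small C sketch (the buffer size is arbitrary) of the textbook
pattern next to a bounded one:

    #include <stdio.h>

    int main(void)
    {
        char name[32];

        /* The textbook versions: both read unbounded input into a
           fixed buffer, so anything past 31 bytes overflows it.
               scanf("%s", name);
               gets(name);        <- so dangerous it was removed in C11
        */

        /* Bounded alternatives: tell the library how big the buffer is. */
        if (scanf("%31s", name) == 1)
            printf("token: %s\n", name);

        if (fgets(name, sizeof name, stdin) != NULL)
            printf("line: %s", name);

        return 0;
    }
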
| The "Software Engineering Body of Knowledge" is supposed to
| summarize all you need to know to develop big projects... yet
| it says essentially NOTHING about secure programming
| (it presumes that all programs are stand-alone, and never connect
| to the Internet or use data from the Internet - a ludicrous assumption).
| (P.S. I understand that it's being updated, hopefully it'll correct this.)
| 
| I'd agree that "check your inputs" is a good programming
| practice, and is also critically important to secure programming.
| But it's not enough, and what people think of when you say
| "check your inputs" is VERY different when you talk to security-minded
| people vs. the general public.
| 
| One well-known book (I think it was "Joel on Software") has some
| nice suggestions, but strongly argues that you should accept
| data from anywhere and just run it (i.e., that you shouldn't
| treat data and code as something separate). It claimed that sandboxing
| is a waste of time, and not worthwhile, even when running code from
| arbitrary locations... just ask the user if it's okay or not
| (we know that users always say "yes"). When that author thinks
| "check your inputs", he's thinking "check the syntax" -
| not "prevent damage".  This is NOT a matter of "didn't implement it
| right" - the program is working AS DESIGNED.  These programs
| are SPECIALLY DESIGNED to be insecure.  And this was strongly
| argued as a GOOD programming practice.
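The difference is easy to show.  In this hedged C sketch (the helper
names are invented), the first check is what "check your inputs" means
to most programmers - is the syntax plausible? - and the second is what
it has to mean if you care about preventing damage:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* "Check the syntax": non-empty, printable.  Happily accepts
       "../../etc/passwd". */
    static int looks_like_a_filename(const char *s)
    {
        if (*s == '\0')
            return 0;
        for (; *s; s++)
            if (!isprint((unsigned char)*s))
                return 0;
        return 1;
    }

    /* "Prevent damage": confine the name to a single directory by
       refusing separators and parent references outright. */
    static int safe_in_directory(const char *s)
    {
        if (*s == '\0' || strstr(s, "..") != NULL)
            return 0;
        for (; *s; s++)
            if (*s == '/' || *s == '\\' || !isprint((unsigned char)*s))
                return 0;
        return 1;
    }

    int main(void)
    {
        const char *evil = "../../etc/passwd";
        printf("syntax check accepts it: %d\n", looks_like_a_filename(evil)); /* 1 */
        printf("damage check accepts it: %d\n", safe_in_directory(evil));     /* 0 */
        return 0;
    }
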
The classic example of checking inputs you find in beginner's texts is
for a program that computes the area of a triangle given its three
sides.  The "careful" program checks that the three sides do, indeed,
describe a triangle!  OK - but I have yet to see a worked version of
this problem that checks for overflows.  And once that example is past,
all the rest of the examples "focus on the issue at hand", and ignore
the messy realities of data validation, errors, and so on.
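
For the record, the missing check is not hard; it's just never shown.
A sketch using doubles and Heron's formula (the convention of returning
-1 for bad input is mine):

    /* cc area.c -lm */
    #include <math.h>
    #include <stdio.h>

    /* Area of a triangle, or -1.0 for input that isn't a valid triangle
       or that overflows the arithmetic. */
    static double triangle_area(double a, double b, double c)
    {
        /* Reject non-numbers and non-positive sides. */
        if (!isfinite(a) || !isfinite(b) || !isfinite(c) ||
            a <= 0.0 || b <= 0.0 || c <= 0.0)
            return -1.0;

        /* The check every textbook shows: the triangle inequality. */
        if (a + b <= c || a + c <= b || b + c <= a)
            return -1.0;

        /* The check the textbooks skip: the intermediate values must
           stay finite, or the "careful" answer is garbage. */
        double s = (a + b + c) / 2.0;
        double area = sqrt(s * (s - a) * (s - b) * (s - c));
        return isfinite(area) ? area : -1.0;
    }

    int main(void)
    {
        printf("%g\n", triangle_area(3, 4, 5));             /* 6 */
        printf("%g\n", triangle_area(1, 1, 10));            /* -1: not a triangle */
        printf("%g\n", triangle_area(1e308, 1e308, 1e308)); /* -1: overflow */
        return 0;
    }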

|  > People just don't care.
| 
| There, unfortunately, we agree.  Though there's hope for the future.
Trying to get people to write safe code, or check for safety properties,
is not a practical solution.  In the languages we have, *some* people
can do it *some* of the time.  But it's simply not the kind of thing
that people are good at:  It requires close concentration on "irrelevant"
details.  You can't get tired; you can't get distracted.  Assuming
people can sustain that kind of vigilance simply reveals a complete
misunderstanding of the way the human mind works.

The only way forward is by having the *computer* do this kind of
thing for us.  The requirements of the task are very much like those
of low-level code optimization:  We leave that to the compilers today,
because hardly anyone can do it well at all, much less competitively
with decent code generators, except in very special circumstances.
Code inspection tools are a necessary transitional step - just as
Purify-like tools are an essential transitional step to find memory
leaks in code that does manual storage management.  But until we can
figure out how to create safer *languages* - doing for security what
garbage collection does for memory management - we'll always be
several steps behind.
                                                        -- Jerry


