Secure Coding mailing list archives

Re: Programming languages -- the "third rail" of secure coding


From: Glenn and Mary Everhart <Everhart () gce com>
Date: Sun, 01 Aug 2004 19:29:31 +0100


Jeremy Epstein wrote:

Kevin Wall pointed to http://www2.latech.edu/~acm/HelloWorld.shtml as a good
starting point; several of the languages I programmed in aren't listed (e.g.,
PL/360, which in many respects was to the IBM 360 what C was to the PDP-11).
Throughout the 1970s (and maybe even the 1980s) a researcher named Jean Sammet
at IBM published a yearly list that purported to cover all the programming
languages in use.  See
http://www.computerhistory.org/events/hall_of_fellows/sammet/ for more about
her.

To relate this to security, I "discovered" the concept of a buffer overrun
when writing PL/360 code back in 1978.  Languages that lack strong typing,
like PL/360 and C, clearly have a harder time being secure than those that
don't.  And the same is true of reliability.

So perhaps such a list would be interesting if one identified the
characteristics that make a language "good" from a security perspective
(several such lists have been posted here) and then correlated them against
some of the very long lists of languages.  That would at least give a
starting point for a discussion of "best"....

IMHO, though, any such effort is pointless.  The reality is that we're going
to be stuck with C/C++, Java, C#, FORTRAN, COBOL, and various
interpreted/scripting languages for a very long time.  Rather than argue
about what makes something good/better, we'd be better off figuring out how
to use them more effectively.

As engineers, we need "good enough", not perfection.



Perhaps this term "engineer" will cause mischief, though. A civil engineer
(you know...one of those guys who takes the PE exam...) designs things according
to principles, and gets licensed once he convinces examiners that he can
design to sound engineering principles.

However, nothing they do is designed to combat malicious attack. If someone
takes a field piece and blows a building's supports to Kingdom come, the
building's engineer is NOT at fault.

There is a craft of fortress design. Like some software design now, it builds up
knowledge and has preferred practices, but it rests on being able to invent
defenses against whatever attacks people dream up. While a fortress designer should
of course know what other fortress designers are up to, his success will also be
determined by how inventively and effectively he can counter new threats. That is
not something exams measure very effectively. It is also something that is
very close to much of software work these days.


I recall when a buffer overflow would have been viewed as the result of random
line noise, or of an error that sent too much garbage down a line. The result
would usually be a crash, with the app possibly restarted seconds later, and it
might well be no big deal. The problem is that we are not dealing with a civil
engineering environment in which random accidents are the threat. The
threats are humans who carefully design new attacks, and we must be thinking of
how to counter them. Unlike the building design case, too, it's hard to know
when you're done when designing fortifications. (Consider how long the best
Roman fortress in the world would stand up to a 21st century armed force; yet
some of those forts were quite good when new.)
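
A minimal C sketch of the kind of unchecked copy in question (illustrative
only, not taken from any real program):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: copies the first argument into a 16-byte buffer
     * with no length check.  Input longer than the buffer overruns it and
     * tramples adjacent stack memory.  Random garbage usually just crashes
     * the process; deliberately crafted input can overwrite the saved
     * return address and redirect control. */
    int main(int argc, char **argv)
    {
        char buf[16] = "";

        if (argc > 1)
            strcpy(buf, argv[1]);   /* no bounds check: the classic overrun */

        printf("copied: %s\n", buf);
        return 0;
    }

Feed it a line of random noise longer than sixteen bytes and it will most
likely just crash; feed it input crafted to land on the saved return address
and the "crash" becomes a hijack. The one-line fix (an explicit length check,
or snprintf with a size) is exactly the kind of preferred practice the craft
accumulates.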


I don't see that something like "engineering" can necessarily be applied to
security design. At any rate, it could be useful to the uninformed not to lose
sight of the "craftsmanship" that makes up the daily work.


Glenn Everhart



