Secure Coding mailing list archives

Harvard vs. von Neumann


From: james.stibbards at cloakware.com (James Stibbards)
Date: Thu, 14 Jun 2007 09:42:19 -0400

Hi Gary (good to see you at Gartner, BTW), 

I recall way back in the bad old days of the Orange Book that we used to
look for both Developmental Assurance and (emphasis here) Operational
Assurance.  To that end, systems are designed and implemented with certain
limitations or "assumptions" (shudder) about how they'll be operated.  In
terms of security, that might include that "unfriendlies" are not allowed
physical access to the box itself, or other constraints. Perhaps this
"operational" aspect of things is just a part of implementation... But I've
seen several systems that were designed well, implemented well, and when
operated poorly (e.g. training didn't go over the necessary "secure boot"
operations), rendered the design and implementation moot.

Comments?

Best,
- James

-----Original Message-----
From: sc-l-bounces at securecoding.org [mailto:sc-l-bounces at securecoding.org]
On Behalf Of Gary McGraw
Sent: Wednesday, June 13, 2007 8:59 PM
To: 'Crispin Cowan'
Cc: 'SC-L at securecoding.org'; 'Blue Boar'
Subject: Re: [SC-L] Harvard vs. von Neumann

I am reminded of a (bottle of wine induced) argument I once had with Dan
Geer over whether a buffer overflow is a bug or a flaw.  We ultimately
realized that I was sitting in the app code looking at strcpy() and Dan was
thinking of language architecture on a machine with an inane memory layout.
We were both right...kind of.  Thing is, when it comes to misuse of really
pathetic string functions in C, most developers make bugs...
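
For concreteness, here is a minimal C sketch of the strcpy() misuse in
question and one common mitigation. The function names and the 16-byte
buffer are illustrative assumptions, not anything from this thread:

    #include <stdio.h>
    #include <string.h>

    /* The classic bug: strcpy() copies until it hits a NUL terminator, so
     * any input longer than 15 characters overruns buf and corrupts
     * adjacent memory. */
    void greet_unsafe(const char *name)
    {
        char buf[16];
        strcpy(buf, name);              /* no length check at all */
        printf("hello %s\n", buf);
    }

    /* One common mitigation: snprintf() never writes more than sizeof(buf)
     * bytes and always NUL-terminates, truncating the input if needed. */
    void greet_safer(const char *name)
    {
        char buf[16];
        snprintf(buf, sizeof(buf), "%s", name);
        printf("hello %s\n", buf);
    }

Whether that overflow counts as a bug (one bad call site) or a flaw (a
language and memory model that makes the bad call possible) is exactly the
argument described above.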

Of course there is a deep relationship between bugs and flaws.  Unfortunately,
most software security discourse these days is stuck in the "Web app bugs
only" mud.  Any acknowledgement of higher-level thinking is a good thing.

gem

company www.cigital.com
podcast www.cigital.com/silverbullet
blog www.cigital.com/justiceleague
book www.swsec.com



Sent from my treo.

 -----Original Message-----
From:   Crispin Cowan [mailto:crispin at novell.com]
Sent:   Monday, June 11, 2007 05:50 PM Eastern Standard Time
To:     Gary McGraw
Cc:     Blue Boar; SC-L at securecoding.org
Subject:        Re: [SC-L] Harvard vs. von Neumann

Gary McGraw wrote:
Though I don't quite understand computer science theory in the same way
that Crispin does, I do think it is worth pointing out that there are two
major kinds of security defects in software: bugs at the implementation
level, and flaws at the design/spec level.  I think Crispin is driving at
that point.

Kind of. I'm saying that "specification" and "implementation" are relative
to each other: at one level, a spec can say "put an iterative loop here" and
the implementation is a bunch of x86 instructions. At another level, the
specification says "initialize this array" and the implementation says "for
(i = 0; i < ARRAY_SIZE; i++) { ...". At yet another level the specification
says "get a contractor to write an air traffic control system" and the
implementation is a contract :)
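
To make the middle example concrete, here is a hedged sketch of that
spec/implementation pair written out in full; the names table and
init_table, the size 64, and the choice of zero as the initial value are
illustrative additions, not from the original message:

    #define ARRAY_SIZE 64             /* illustrative size */

    static int table[ARRAY_SIZE];     /* illustrative name */

    /* Specification: "initialize this array" (to zero, say).
     * Implementation: one concrete loop that satisfies that spec. */
    void init_table(void)
    {
        for (int i = 0; i < ARRAY_SIZE; i++) {
            table[i] = 0;
        }
    }

A different loop, memset(), or compiler-generated zero-initialization would
satisfy the same one-line specification, which is the point: the spec pins
down *what*, the implementation picks one *how*.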

So when you advocate automating the implementation and focusing on
specification, you are just moving the game up. You *do* change properties
when you move the game up, some for the better, some for the worse. Some
examples:

    * If you move up to type-safe languages, then the compiler can prove
      some nice safety properties about your program for you (see the C
      sketch after this list). It does not prove total correctness, does
      not prove halting, just some nice safety properties.
    * If you move further up to purely declarative languages (PROLOG,
      strict functional languages) you get a bunch more analyzability.
      But they are still Turing-complete (both the lambda calculus and
      Horn-clause resolution are universal), so you still can't have
      total correctness.
    * If you moved up to some specification form that was no longer
      Turing complete, e.g. something weaker like predicate logic, then
      you are asking the compiler to contrive algorithmic solutions to
      nominally NP-hard problems. Of course they mostly aren't NP-hard
      because humans can create algorithms to solve them, but now you
      want the computer to do it. Which raises the question of the
      correctness of a compiler so powerful it can synthesize general-purpose
      algorithms.
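
For the first bullet, here is a minimal C sketch of the kind of "nice
safety property" involved, namely spatial memory safety; the names and the
size 32 are illustrative assumptions. A type-safe, bounds-checked language
effectively inserts this check on every array access for you, which proves
you cannot write out of bounds but says nothing about whether the value you
store is the correct one:

    #include <stdbool.h>
    #include <stddef.h>

    #define BUF_LEN 32                /* illustrative size */

    static int values[BUF_LEN];

    /* Refuses an out-of-range write instead of corrupting memory. This is
     * the safety property; total correctness (storing the *right* value in
     * the *right* slot) is still entirely up to the programmer. */
    bool checked_store(size_t index, int value)
    {
        if (index >= BUF_LEN)
            return false;             /* out-of-bounds write rejected */
        values[index] = value;
        return true;
    }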


If we assumed perfection at the implementation level (through better
languages, say), then we would end up solving roughly 50% of the software
security problem.

The 50% being rather squishy, but yes, this is true. It's only vaguely what I
was talking about, really, but it is true.

Crispin

--
Crispin Cowan, Ph.D.               http://crispincowan.com/~crispin/
Director of Software Engineering   http://novell.com
        AppArmor Chat: irc.oftc.net/#apparmor


_______________________________________________
Secure Coding mailing list (SC-L) SC-L at securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
_______________________________________________


