Firewall Wizards mailing list archives

Re: Penetration testing via shrinkware


From: "Paul D. Robertson" <proberts () clark net>
Date: Mon, 21 Sep 1998 11:03:11 -0400 (EDT)

On Mon, 21 Sep 1998, Ted Doty wrote:

> >                                   While it isn't 100% foolproof, there's
> > a lot to be learned from a B2 evaluation.  Security modeling, code
> > walk-throughs, secure development methodologies, they all have their
> > place if you're going to build assurance.  "After-the-fact" testing is
> > always _much_ more blind than during "construction" testing.  Just as
> > crystal boxes tend to be better than black boxes in that regard.

> ObDisclaimer: I survived two B1/2 evaluations (actually one Orange Book and
> one ITSec), and I build a scanner.
>
> The biggest problem in the evaluations was the quality of the evaluation
> teams.  Teams with high caliber members, which operate together as a stable
> team for extended periods might be effective.  What I saw was that you
> could count on neither.

I'm pretty sure this has always been the case; however, vendors who go out
to play with the evaluations tend to do a lot more of the groundwork than
generic vendors.  If the traditional method of introducing such discipline
isn't good enough, or doesn't provide enough flexibility, then maybe we
can agree on what would?

> The upshot is that Orange Book methods can probably only be applied for
> products which place a premium on reliability - for example, medical
> applications.  These systems will always be more expensive if developed
> under TCSec guidelines, and they will be upgraded with new features more
> slowly.  This argues pretty strongly for less formal methods, such as peer
> review, for most products.

Peer review would also work for me.  What I find disconcerting is that the
current state of the art is zero review.

> I have never seen any data that suggest that formally-evaluated systems are
> (much, if any) higher quality than non-evaluated systems.  If you make the
> assumption that each of the Orange Book implementations of {security
> modeling, code walk-throughs, secure development methodologies} will only
> catch a portion of the defects, the industry may be better served by a less
> formal/structured analysis combined with black box analysis.  By "better
> served" I mean less expensive, easier to use products that provide roughly
> equivalent levels of protection.
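
To put rough numbers on that assumption (the figures below are made up
purely for illustration): if each of the three formal techniques
independently caught half the defects, together they'd catch
1 - (0.5)^3 = 87.5%; an informal peer review catching 60% plus a black box
scan catching 50% would catch 1 - (0.4 * 0.5) = 80%.  So "roughly
equivalent protection, less expensively" is at least arithmetically
plausible.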

While black box scanners are really good at implementation verification, I
don't think they tend to be the best models for design verification.
Perhaps some others have some experiences scanning code for bad things
that they can share; I'm sure grepping for "^printf" can't be the best way
to find these things.
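
For what it's worth, here's a rough sketch of the sort of thing I have in
mind (Python; the list of "bad" calls is just my own starting point, not
anything authoritative).  It at least reports file and line rather than
being a bare grep:

    #!/usr/bin/env python
    # Rough sketch: walk a source tree and flag calls to C functions
    # that are commonly misused.  The pattern list is illustrative only.
    import os, re, sys

    RISKY = re.compile(r'\b(gets|strcpy|strcat|sprintf|system|popen)\s*\(')

    def scan(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith(('.c', '.h')):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors='replace') as src:
                    for lineno, line in enumerate(src, 1):
                        if RISKY.search(line):
                            print('%s:%d: %s' % (path, lineno, line.strip()))

    if __name__ == '__main__':
        scan(sys.argv[1] if len(sys.argv) > 1 else '.')

It's still implementation-level pattern matching, and it says nothing
about the design, but it's a small step up from grep; real lexical or
dataflow analysis would be better still.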

> Then again, maybe not.  However, I won't be taking my scanner anywhere near
> Orange Book until this is a *lot* more clear.  Your mileage, as always, may
> vary.

The Orange/Red Book criteria are the only real model for secure code
development we have to work with.  I don't think that we should throw the
whole idea out because the implementation isn't ideal.  

I don't know, maybe I'm the only one who thinks we're doing a disservice
to ourselves in this regard.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
proberts () clark net      which may have no basis whatsoever in fact."
                                                                     PSB#9280


