Secure Coding mailing list archives

BSIMM: Confessions of a Software Security Alchemist (informIT)


From: jim at manico.net (Jim Manico)
Date: Thu, 19 Mar 2009 08:31:09 -1000

> The top N lists we observed among the 9 were BUG lists only.  So that
> means that in general at least half of the defects were not being
> identified on the "most wanted" list using that BSIMM set of activities.

This sounds very problematic to me. Many standard software bugs are far
more critical to the enterprise than security bugs are, so it seems
foolhardy to do risk assessment on security bugs alone. I think we need
to bring the worlds of software development and security analysis closer
together. Divided we (continue to) fail.
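
To make that concrete, here is a minimal sketch (Python; the defects and
their scores are invented for illustration) of one unified ranking: score
every defect, security or not, on the same likelihood-times-impact scale
and triage a single list.

    # Hypothetical unified triage: security and "standard" bugs scored on
    # one risk scale (likelihood x impact, each 1-5) and ranked together.
    defects = [
        ("SQL injection in login form",      "security",    4, 5),
        ("Data loss on power failure",       "reliability", 3, 5),
        ("Race condition corrupting orders", "concurrency", 4, 4),
        ("XSS in help page",                 "security",    3, 2),
    ]

    def risk(defect):
        _, _, likelihood, impact = defect
        return likelihood * impact

    # Highest-risk defect first, regardless of which silo it came from.
    for name, kind, likelihood, impact in sorted(defects, key=risk, reverse=True):
        print(f"{likelihood * impact:>2}  [{kind}]  {name}")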

And Gary, this is not a critique of just your comment, but of WebAppSec at 
large.

- Jim


----- Original Message ----- 
From: "Gary McGraw" <gem at cigital.com>
To: "Steven M. Christey" <coley at linus.mitre.org>
Cc: "Sammy Migues" <SMigues at cigital.com>; "Michael Cohen" 
<MCohen at cigital.com>; "Dustin Sullivan" <dustin.sullivan at informit.com>; 
"Secure Code Mailing List" <SC-L at securecoding.org>
Sent: Thursday, March 19, 2009 2:50 AM
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist 
(informIT)


Hi Steve,

Sorry for falling off the thread last night.  Waiting for the posts to 
clear was not a great idea.

The top N lists we observed among the 9 were BUG lists only.  So that 
means that in general at least half of the defects were not being 
identified on the "most wanted" list using that BSIMM set of activities. 
You are correct to point out that the "Architecture Analysis" practice has 
other activities meant to ferret out those sorts of flaws.

I asked my guys to work on a flaws collection a while ago, but I have not 
seen anything yet.  Canuck?

There is an important difference between your CVE data, which is based on
externally observed bugs (mostly imposed on vendors by security types), and
internal bug data gathered through static analysis or internal testing.  I
would be very interested to know whether Microsoft and the CVE consider
the same bug #1 on internal versus external rating systems.  The
difference lies in "reported for" versus "discovered internally during SDL
activity".
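
A toy way to picture that comparison (Python; the CWE ids and tallies are
invented, not real Microsoft or CVE data): count each feed's weakness
categories and see whether the two number ones agree.

    from collections import Counter

    # Invented findings.  "external" stands in for bugs reported against
    # shipped products (CVE-style); "internal" for bugs found by static
    # analysis or testing during the SDL.
    external = ["CWE-119", "CWE-119", "CWE-79", "CWE-89", "CWE-119"]
    internal = ["CWE-476", "CWE-119", "CWE-476", "CWE-401", "CWE-476"]

    def number_one(findings):
        # Most frequently occurring weakness category.
        return Counter(findings).most_common(1)[0][0]

    print("External #1:", number_one(external))
    print("Internal #1:", number_one(internal))
    print("Same #1 inside and out?", number_one(external) == number_one(internal))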

gem

http://www.cigital.com/~gem


On 3/18/09 6:14 PM, "Steven M. Christey" <coley at linus.mitre.org> wrote:



On Wed, 18 Mar 2009, Gary McGraw wrote:

> Many of the top N lists we encountered were developed through the
> consistent use of static analysis tools.

Interesting.  Does this mean that their top N lists are less likely to
include design flaws?  (though they would be covered under various other
BSIMM activities).

> After looking at millions of lines of code (sometimes constantly), a
> ***real*** top N list of bugs emerges for an organization.  Eradicating
> number one is an obvious priority.  Training can help.  New number
> one...lather, rinse, repeat.

I believe this is reflected in public CVE data.  Take a look at the bugs
being reported for, say, Microsoft or major Linux vendors or most any
product with a long history, and their current number ones are not the
same as the number ones of the past.
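
A toy illustration of that churn (Python; the per-year findings are
invented): tally each period's reports and watch number one change.

    from collections import Counter

    # Invented per-period findings; each list stands in for one year's
    # reported bugs for a vendor, tagged by weakness category.
    periods = {
        "2005": ["CWE-119", "CWE-119", "CWE-119", "CWE-89"],
        "2007": ["CWE-89", "CWE-89", "CWE-79", "CWE-119"],
        "2009": ["CWE-79", "CWE-79", "CWE-89"],
    }

    for year, findings in periods.items():
        top, count = Counter(findings).most_common(1)[0]
        print(f"{year}: #1 is {top} ({count} reports)")
    # Eradicate number one, train against it, and a new number one
    # emerges the next period.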

- Steve


_______________________________________________
Secure Coding mailing list (SC-L) SC-L at securecoding.org
List information, subscriptions, etc - 
http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
_______________________________________________



