Secure Coding mailing list archives

BSIMM: Confessions of a Software Security Alchemist (informIT)


From: jim at manico.net (Jim Manico)
Date: Sat, 21 Mar 2009 13:26:21 -1000

Hey John,

I like where your head is at - great list.

Regarding:

Builds adaptors so that bugs are automatically entered in tracking systems

Does the industry have:

1) A standard schema for findings, root causes, vulnerabilities, etc., and 
the inter-relation of these key terms (and others)?
2) Standardized APIs that allow different risk systems to correlate this 
data?

Or is it, right now, mostly proprietary glue? Curious...

Also, how do you build adaptors so that manual processes are automatically 
entered in a tracking system? Are you just talking about content management 
systems that make it easy for manual reviewers to enter data into risk 
management software?

Anyhow, I like where your head is at, and it definitely got me thinking.

 - Jim

----- Original Message ----- 
From: "Tom Brennan - OWASP" <tomb at owasp.org>
To: "John Steven" <jsteven at cigital.com>; <sc-l-bounces at securecoding.org>; 
"Benjamin Tomhave" <list-spam at secureconsulting.net>; "Secure Code 
MailingList" <SC-L at securecoding.org>
Sent: Friday, March 20, 2009 10:37 AM
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security 
Alchemist (informIT)


John Steven for Cyber Czar!

I have "Elect J.Steven" bumper stickers printing; I retooled my Free 
Kevin sticker press.

Well stated ;) have a great weekend!

-----Original Message-----
From: John Steven <jsteven at cigital.com>

Date: Fri, 20 Mar 2009 14:35:01
To: Benjamin Tomhave<list-spam at secureconsulting.net>; Secure Code 
MailingList<SC-L at securecoding.org>
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist
(informIT)


Tom, Ben, All,

I thought I'd offer more specifics in an attempt to clarify. I train 
people here to argue your position, Ben: security vulnerabilities don't 
count unless they affect development. To this end, we've specifically 
had success with the following approaches:

[Integrate Assessment Practices]
   [What?]
Wrap the assessment activities (both tool-based and manual techniques) in 
a process that:
   * Normalizes findings under a common reporting vocabulary and 
demonstrates impact
       * Include SAST, DAST, scanning, manual, outsourced, & ALL findings 
producers in this framework
   * Anchors findings in either a developmental root cause or another 
software artifact:
       * Use case, requirements, design, spec, etc.
   * Builds adaptors so that bugs are automatically entered in tracking 
systems (a rough sketch follows this list)
       * Adaptors should include both tool-based and manual findings
   * Calculates impact with an agreed-upon mechanism that rates security 
risk alongside other factors:
       * Functional release criteria
       * Other non-security, non-functional requirements
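
To make the adaptor bullet concrete, here's a minimal sketch (Python) of 
the kind of glue I mean. The tracker endpoint, field names, and severity 
scale are illustrative assumptions, not any particular product's API:

import json
import urllib.request
from dataclasses import dataclass, asdict

@dataclass
class Finding:
    # One normalized record, regardless of producer (SAST, DAST,
    # manual review, outsourced assessment, ...).
    title: str
    producer: str   # e.g. "static-analysis", "pen-test", "manual-review"
    cwe_id: str     # common vocabulary anchor, e.g. "CWE-89"
    artifact: str   # developmental root cause: file, use case, spec section
    severity: str   # rated by the agreed-upon risk model, not a tool default

def file_bug(finding: Finding, tracker_url: str) -> None:
    # Push the normalized finding into the tracker like any other defect.
    payload = json.dumps(asdict(finding)).encode("utf-8")
    req = urllib.request.Request(tracker_url, data=payload,
                                 headers={"Content-Type": "application/json"},
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        print("filed with status", resp.status)

# A manual review finding enters the same pipe as tool output:
file_bug(Finding(title="SQL built by string concatenation in OrderDAO",
                 producer="manual-review", cwe_id="CWE-89",
                 artifact="src/dao/OrderDAO.java", severity="high"),
         "https://tracker.example.com/api/issues")  # hypothetical endpoint

The point is the shape: every findings producer, human or tool, funnels 
through one normalized record before anything touches the tracker.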

   [Realistic?]
I believe so. Cigital's more junior consultants work on these very tasks, 
and they don't require an early adopter to fund or agree to them. There's 
plenty of tooling out there to help with the adaptors, and plenty of 
presentations/papers on risk (http://www.riskanalys.is), normalizing 
findings (http://cwe.mitre.org/), and assessment methodology 
(http://www.cigital.com/papers/download/j15bsi.pdf).

   [Panacea?]
No. I've done research and consulting in functional testing. If you think 
security is powerless against development, try spending a few years in a 
tester's shoes! Functionality may be king for development and PMs, but 
I've found that functional testing has little to no power. While a lack of 
features may prevent software from going out the door, very rarely do I 
find that functional testing can impose an effective "go/no-go" gate 
from its seat in the org. That's why testing efforts seek muscle from 
their friend Security (and its distant cousins under quality, "Load and 
Performance") to stop releases from going out the door.

There's no reason NOT to integrate with testing efforts, reporting, and 
groups: we should. There's every reason security should adhere to the same 
interface everyone else does with developers (let them produce code and 
let them consume bugs)... I think the steps I outlined under 'what' bring 
us closer. I enjoyed Guy's book, but I don't think we need to (or can 
expect to) flip organizational precedent and behavior on its head to make 
progress.

[Steering]
The above scenario doesn't explicitly allow for two key inputs/outputs 
from the software security ecosystem:


1.  Handling ultra-high-priority issues in real time
2.  Adjusting and evolving to changing threat landscapes

I've long suggested establishing a steering committee for this.

   [What?]
Establish a steering committee on which software security, dev, 
architecture, operations, and corporate risk representatives sit. These 
folks should manage the risk model, scoring, the security standards that 
drive the assessment verification standard, and the definition of both 
short-term and longer-term mitigating strategies. I'd argue that you'd 
like industry representation too. That representation could come in 
written form (like the Top N lists) or in the form of consulting or a 
panel.

When incidents or firefights come into play, absolutely allow them to be 
handled out of band (albeit through a documented process), but not until 
they've been rated with the agreed-upon model.
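
To be concrete about "rated with the agreed-upon model," here's a toy 
scoring sketch (Python). The factor names and weights are placeholders 
the committee would own and revise; this is not a recommended model:

# Toy shared rating model; weights and factors are illustrative only.
WEIGHTS = {"exploitability": 0.4, "business_impact": 0.4, "exposure": 0.2}

def score(factors):
    # Combine 0-10 factor ratings into the one number every group agrees on.
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# Even a firefight gets rated before it jumps the queue:
incident = {"exploitability": 9, "business_impact": 7, "exposure": 10}
print(round(score(incident), 1))  # -> 8.4 on the shared 0-10 scale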

   [Realistic?]
Yes. I have several clients that use this structure. I speak with 
non-clients that do the same. Data gathering for scoring and 
prioritization is easy if you've done the steps in the previous section. 
The operations guys help you grade the pertinence of your efforts to what 
they're seeing 'in the wild' too.

   [Panacea?]
Does a steering committee help you respond with agility to a high-priority 
threat in real time? Not explicitly. But it does help if your 
organizational stakeholders already have a working relationship and a 
mutual respect. Also: I think one root cause of the underlying discomfort 
with (or dislike of) the perspectives on this thread has been:

"OK, Fine Gary... you don't like Top N lists... So what do you do?"

Well, in my mind... the above answers that question.

[Assessment and Tools]
Do I believe that the normalized findings will emerge only from static 
analysis (or any other kind of vulnerability detection tool)? Absolutely 
not. People who follow my writing know I expect dramatic(ally high and 
low) numbers to be associated with tools. Let's summarize my data. 
Organizations can expect:


*   Static analysis tools to account for 15-20% of their total findings, 
out of the box
*   An initial false-positive rate as high as 75-99% from a static 
analysis tool, without tuning (see the quick arithmetic after this list)
*   Less than 9% code coverage (by even shallow coverage metrics) from 
pen-testing tools
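
Here's the quick arithmetic promised above (Python; the 1,000-finding 
batch size is an assumed number, purely for illustration):

# What the quoted false-positive rates mean for triage burden.
raw_findings = 1000  # assumed size of one untuned static-analysis run
for fp_rate in (0.75, 0.99):
    real_bugs = raw_findings * (1 - fp_rate)
    print(f"FP rate {fp_rate:.0%}: ~{real_bugs:.0f} real bugs "
          f"in {raw_findings} reported findings")

Even at the optimistic end, three out of four reported findings are noise 
until you tune.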

Qualitatively, I can tell you that I expect an overwhelming majority of 
static analysis results produced in an organization to come from 
customization of their adopted product.

Simply: if you base your world view on only those things a tool (any 
tool) produces, your world view is as narrow as a Neo-con's, and it will 
prove about as ineffective. The same is true of those who narrow their 
scope to the OWASP Top 10 or the SANS Top 25.

[Top N Redux]
Some have left the impression that starting with a Top N list is of no 
use. Please don't think I'm in this camp. In my last two presentations 
I've indicated, "If you're starting from scratch, these lists (or lists 
intrinsically baked into a tool's capabilities for detection) are a great 
place to start." And if you can't afford frequent industry interaction, 
use Top N lists as a proxy for it. They're valuable, but like anything, 
only to a point.

For me, this discussion will remain circular until we think about it in 
terms of measured, iterative organizational improvement. Why? Because when 
an organization focuses on getting beyond a "Top N" list, it will just 
create its own organization-specific "Top N" list :-) If they're smart, 
though, they'll call it a dashboard and vie for a promotion ;-)

From the other side? People building Top N lists know they're not a 
panacea, but they also know that a lot of organizations simply can't 
stomach the kind of emotional investment that BSIMM (and its ilk) comes 
with.

This leaves me with the following:

[Conclusions]
Top N lists are neither necessary nor sufficient for organizational 
success.
Top N lists are necessary but not sufficient for industry success.
Maturity models are neither necessary nor sufficient for organizational 
success.
Maturity models are necessary but not sufficient for industry success.

Always avail yourself of what the industry produces;
Never confine yourself to a single industry artifact dogmatically;
Whatever you consume from industry, improve it by making it your own;
Wherever you are in your journey, continue to improve iteratively.

[Related Perennial Rabbit Holes] (bonus)
Bugs vs. Flaws: John Steven '06 - 
http://www.mail-archive.com/sc-l at securecoding.org/msg00888.html
Security Vs. Quality: Cowan '02 - 
http://www.securityfocus.com/archive/98/304766

----
John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


On 3/19/09 7:28 PM, "Benjamin Tomhave" <list-spam at secureconsulting.net> 
wrote:

Why are we differentiating between "software" and "security" bugs? It
seems to me that all bugs are software bugs, and how quickly they're
tackled is a matter of prioritizing the work based on severity, impact,
and ease of resolution. It seems to me that, while it is problematic
that security testing has been excluded historically, our goal should
not be to establish yet another security-as-bolt-on state, but rather
leapfrog to the desired end-state where QA testing includes security
testing as well as functional testing. In fact, one could even argue
that security testing IS functional testing, but anyway...

If you're going to innovate, you might as well jump the curve*.

-ben

* see Kawasaki "Art of Innovation"
http://blog.guykawasaki.com/2007/06/art_of_innovati.html

Gary McGraw wrote:
Aloha Jim,

I agree that security bugs should not necessarily take precedence
over other bugs.  Most of the initiatives that we observed cycled ALL
security bugs into the standard bug tracking system (most of which
rank bugs by some kind of severity rating).  Many initiatives put
more weight on security bugs...note the term "weight" not "drop
everything and run around only working on security."  See the CMVM
practice activities for more.

The BSIMM helps to measure and then evolve a software security
initiative.  The top N security bugs activity is one of an arsenal of
tools built and used by the SSG to strategically guide the rest of
their software security initiative.  Making this a "top N bugs of any
kind" list might make sense for some organizations, but is not
something we would likely observe by studying the SSG and the
software security initiative.  Perhaps we suffer from the "looking
for the keys under the streetlight" problem.

gem


On 3/19/09 2:31 PM, "Jim Manico" <jim at manico.net> wrote:

> The top N lists we observed among the 9 were BUG lists only.  So
> that means that in general at least half of the defects were not
> being identified on the "most wanted" list using that BSIMM set of
> activities.

This sounds very problematic to me. There are many standard software
bugs that are much more critical to the enterprise than security
bugs. It seems foolhardy to do risk assessment on security bugs only
- I think we need to bring the worlds of software development and
security analysis together more. Divided we (continue to) fail.

And Gary, this is not a critique of just your comment, but of
WebAppSec at large.

- Jim


----- Original Message ----- From: "Gary McGraw" <gem at cigital.com>
To: "Steven M. Christey" <coley at linus.mitre.org> Cc: "Sammy Migues"
<SMigues at cigital.com>; "Michael Cohen" <MCohen at cigital.com>; "Dustin
Sullivan" <dustin.sullivan at informit.com>; "Secure Code Mailing List"
<SC-L at securecoding.org> Sent: Thursday, March 19, 2009 2:50 AM
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security
Alchemist (informIT)


Hi Steve,

Sorry for falling off the thread last night.  Waiting for the posts
to clear was not a great idea.

The top N lists we observed among the 9 were BUG lists only.  So
that means that in general at least half of the defects were not
being identified on the "most wanted" list using that BSIMM set of
activities. You are correct to point out that the "Architecture
Analysis" practice has other activities meant to ferret out those
sorts of flaws.

I asked my guys to work on a flaws collection a while ago, but I
have not seen anything yet.  Canuck?

There is an important difference between your CVE data, which is based
on externally observed bugs (imposed on vendors mostly by security
types), and internal bug data determined using static analysis or
internal testing.  I would be very interested to know whether Microsoft
and the CVE consider the same bug #1 on internal versus external rating
systems.  The difference is in the term "reported for" versus
"discovered internally during SDL activity".

gem

http://www.cigital.com/~gem


On 3/18/09 6:14 PM, "Steven M. Christey" <coley at linus.mitre.org>
wrote:



On Wed, 18 Mar 2009, Gary McGraw wrote:

> Many of the top N lists we encountered were developed through the
> consistent use of static analysis tools.

Interesting.  Does this mean that their top N lists are less likely
to include design flaws?  (Though they would be covered under
various other BSIMM activities.)

> After looking at millions of lines of code (sometimes
> constantly), a ***real*** top N list of bugs emerges for an
> organization.  Eradicating number one is an obvious priority.
> Training can help.  New number one...lather, rinse, repeat.

I believe this is reflected in public CVE data.  Take a look at the
bugs that are being reported for, say, Microsoft or major Linux
vendors or most any product with a long history, and their current
number 1's are not the same as the number 1's of the past.

- Steve





--
Benjamin Tomhave, MS, CISSP
falcon at secureconsulting.net
LI: http://www.linkedin.com/in/btomhave
Blog: http://www.secureconsulting.net/
Photos: http://photos.secureconsulting.net/
Web: http://falcon.secureconsulting.net/

[ Random Quote: ]
"Concern for man and his fate must always form the chief interest of all
technical endeavors. Never forget this in the midst of your diagrams and
equations."
Albert Einstein



_______________________________________________
Secure Coding mailing list (SC-L) SC-L at securecoding.org
List information, subscriptions, etc - 
http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
_______________________________________________



