Secure Coding mailing list archives

BSIMM: Confessions of a Software Security Alchemist (informIT)


From: jsteven at cigital.com (John Steven)
Date: Tue, 24 Mar 2009 21:51:10 -0400

All,

I'll address Jim's questions, each in turn:

[Adapters]
Adapters can take a few forms, but let's address three specific scenarios that fan in to an assessment 
results/presentation step and a few that fan out.

[Fan in]
Fan in typically comes from three sources: 1) static tools, 2) testing tools, and 3) manual analysis.

Adapters that deal with fan-in have three challenges to surmount:
A) Technically trans-code results from #1, #2, or #3 into a single tool's format for roll-up, or a tool-independent 
alternative
B) Normalize results between #1, #2, and #3 so that 'apples' and 'oranges' get reported rather than 'apples' and 'cars'
C) Code results with organization-specific "whathaveyou"

[Challenge A - Trans-code]
The good news about the tools is that they nearly all export to some XML format you can manipulate. This tackles output 
from #1 and #2. Organizations that adopted any tool early have struggled with keeping up with format changes but have 
indicated to me that they're willing to pay this small price for the ability to 'plug' that many results into their 
developers' bug-tracking systems.

A bigger challenge (effort and therefore cost-wise) is getting manual results (#3) into either the format driven by the 
tool responsible for reporting or that independent format I described. Smart consulting firms have built 'emselves 
workflow to manage this. In the '90s, we at Cigital used to pine for the report-generating automation of our 
then-competitor @Stake and waffle about whether our client base was too disparate and our assessments too varied in 
scope to support a similar trick. I see no reason not to, within an organization, code up something that 
helps--technically--integrate your manual review findings with those found by tools.
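To make the 'technical' half of this concrete, here's a minimal Python sketch of the trans-code step. The XML element 
and attribute names (Vulnerability, RuleID, Severity, and so on) are invented placeholders rather than any particular 
vendor's export schema:

import xml.etree.ElementTree as ET

def parse_sast_export(path):
    """Trans-code one (hypothetical) SAST XML export into neutral finding records."""
    findings = []
    for vuln in ET.parse(path).getroot().iter("Vulnerability"):
        findings.append({
            "source": "sast",                          # vs. "dast" or "manual"
            "rule_id": vuln.get("RuleID", "unknown"),  # the tool's native identifier
            "file": vuln.findtext("File", default=""),
            "line": int(vuln.findtext("Line", default="0")),
            "severity": vuln.get("Severity", "unrated"),
            "description": vuln.findtext("Description", default=""),
        })
    return findings

Manual findings (#3) can be captured in the same record shape through whatever lightweight workflow your reviewers will 
tolerate; even a spreadsheet export is enough to start.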

[Challenge B - Normalize]
Manual reviewers report to me their discomfort with the rigidity of systems like "The Seven Kingdoms" and its 
competitors. Mind you, I don't dislike these taxonomies, but people get crotchety and might complain if they're forced 
to cram their findings down into a SAST or DAST report. They perceive, it would seem, the vulnerabilities they find to 
be like snowflakes ;-)

An independent format can be successful as well. One can roll their own (for their organization), or lean heavily on 
the community and go with something like Mitre's CWE. At Cigital, we're involved with CWE and follow-on work but have 
also helped people organize their own. A critical discussion of this trade-off is possible, but it's beyond the scope 
of this email. That leads us into the real expense: the cost of normalizing results between #1 and #2 eclipses the 
technical challenges of building the adapter.
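Mechanically, that normalization is little more than a maintained mapping from each producer's native rule IDs onto 
the shared vocabulary. A minimal sketch (the mapping entries are illustrative, not any real tool's rule names):

# Map (producer, native rule ID) onto the shared vocabulary -- CWE here.
CWE_MAP = {
    ("sast", "SQL_INJECTION"):          "CWE-89",
    ("dast", "reflected-xss"):          "CWE-79",
    ("manual", "weak-session-timeout"): "CWE-613",
}

def normalize(finding):
    """Tag a finding record with a CWE ID so 'apples' roll up with 'apples'."""
    key = (finding["source"], finding["rule_id"])
    finding["cwe"] = CWE_MAP.get(key, "CWE-unmapped")  # gaps go to a human for triage
    return finding

The code is the easy part; building and maintaining that mapping (and arguing over its edge cases) is where the cost 
lives.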

[Challenge C - 'Whathaveyou']
I would say this though: if your organization has progressed to the point where it possesses security standards 
(prescriptive or otherwise), they should absolutely be referenced by appropriate findings in the report. This goes for 
both violations of the standards and the applicability of standards that would serve to prevent or mitigate a 
particular risk.

References to policy/standards can be augmented with best practices, if your organization makes that distinction. I've 
seen organizations link to internal/external resources for training and/or further information as well. If you're going 
to try to transition from "the bug hunt" to "building security in", one great way is to provide developers with 
immediately available information as they're making the mistake.
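Concretely, once findings carry a shared identifier, attaching the organization-specific references becomes another 
lookup. The standard names and intranet URLs below are made up for illustration:

# Organization-specific references keyed by the normalized (CWE) identifier.
ORG_REFS = {
    "CWE-89": {
        "standard": "SEC-STD-012: Parameterize all database access",
        "training": "https://intranet.example.com/training/sql-injection",
    },
    "CWE-79": {
        "standard": "SEC-STD-007: Encode output built from untrusted data",
        "training": "https://intranet.example.com/training/xss",
    },
}

def augment(finding):
    """Attach the violated (or mitigating) standard and a place to learn more."""
    refs = ORG_REFS.get(finding.get("cwe", ""))
    if refs:
        finding["violated_standard"] = refs["standard"]
        finding["training_link"] = refs["training"]
    return finding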

[Fan out]
Now, you have to 'fan out' into support for bug-tracking systems. Most organizations' security groups have at least as 
much battle to fight here as a security consultancy, actually. Why? Because the organization hasn't mandated a single 
development toolkit. The good news here is that while there may be more fan-out than there was fan-in, bug-tracking 
tools were built to be supplied data. Writing this portion of the adapter--a conduit between the normalized findings 
you've compiled and the offending team's bug-tracking system--is fairly straightforward, technically.
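A sketch of that conduit, assuming a generic REST-style tracker (the endpoint, payload shape, and auth are 
placeholders; Jira, Bugzilla, and friends each have their own API for this):

import requests

TRACKER_URL = "https://bugtracker.example.com/api/issues"  # placeholder endpoint

def file_bug(finding, api_token):
    """File one normalized, augmented finding as a tracker issue; return its ID."""
    payload = {
        "title": "[{}] {}".format(finding.get("cwe", "security"), finding["description"]),
        "severity": finding["severity"],
        "component": finding["file"],
        "details": "Line {}. Violates: {}. Training: {}".format(
            finding["line"],
            finding.get("violated_standard", "n/a"),
            finding.get("training_link", "n/a"),
        ),
    }
    resp = requests.post(
        TRACKER_URL,
        json=payload,
        headers={"Authorization": "Bearer " + api_token},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]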

[Industry Std. 'Schema']
Sean Barnum has done a flotilla of work on this topic with Bob Martin at Mitre. I've copied him and will let them 
comment on this.  Though it can be cumbersome or uninteresting to practitioners, I think the work they're doing is 
important because, whether it's admitted or not, work on audit, testing, and verification methodologies/standards must 
implicitly take a stand on defining the words you listed (finding, root cause, vuln., etc.). Where such efforts take an 
explicit stand, an organization can find aligning that stand with its own notions quite challenging. Where the stand is 
implicit (or worse, ambiguously and poorly defined), you get lots of wasted time as assessors argue with development 
over semantics, next steps, and responsibilities.

Each methodology has its own limitations in this department, resulting from its focus and perspective, IMO. If you look 
at OSSTM, there's a wealth of definition around activities, which really helps those implementing it differentiate what 
techniques they could apply in testing their system. Their template reporting form falls short on defining constructs 
such as root cause and finding and 'speaks' like an auditor's report. This doesn't do the depth and breadth of their 
assessing techniques justice, which means, ultimately, that adopting it will take a lot of work in the realm of that 
normalization task we treated earlier. NIST's methodology formalized controls even further, producing the 800-53 
publication. I need to look at their recent foray into app sec and reconsider ASVS much more closely and for much 
longer to make judgments in this realm. Currently, I've only considered it in the insanely and unfairly narrow context 
of "a set of stuff to look for". I'll follow up with you on this later this week or next.

[Correlating Risk Systems]
Taking your question literally: risk systems? Most risk management companies wield PowerPoint and Excel, and as such, 
glue is hard to come by--let alone 'open glue'. I don't have much experience with Archer; its glue is proprietary, but 
the suite includes the ability to weave together policy, requirements, findings, and change/bug management. It sits 
outside the MS Office stack, but what little experience I've had with it wasn't necessarily positive ;-)

I hope this answers your questions... if not, fire more away,
-jOHN

________________________________________
From: Jim Manico [jim at manico.net]

I like where your head is at - great list.

Regarding:

Builds adapters so that bugs are automatically entered in tracking systems

Does the industry have:

1) A standard schema for findings, root causes, vulnerabilities, etc, and
the inter-relation of these key terms (and others?)
2) Standardized APIs for allowing different risk systems to correlate this
data?

Or is it, right now, mostly proprietary glue? Curious...

Also, how do you build adaptors so that manual processes are automatically
entered in a tracking system? Are you just talking about content management
systems to make it easy for manual reviewers to enter data into risk
management software?

Anyhow, I like where your head is at and it definitely got me thinking.

 - Jim

----- Original Message -----
From: "Tom Brennan - OWASP" <tomb at owasp.org>
To: "John Steven" <jsteven at cigital.com>; <sc-l-bounces at securecoding.org>;
"Benjamin Tomhave" <list-spam at secureconsulting.net>; "Secure Code
MailingList" <SC-L at securecoding.org>
Sent: Friday, March 20, 2009 10:37 AM
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security
Alchemist (informIT)


John Stevens for Cyber Czar!

I have "Elect J.Stevens" bumper stickers printing, I retooled my Free
Kevin sticker press.

Well stated ;) have a great weekend!

-----Original Message-----
From: John Steven <jsteven at cigital.com>

Date: Fri, 20 Mar 2009 14:35:01
To: Benjamin Tomhave<list-spam at secureconsulting.net>; Secure Code
MailingList<SC-L at securecoding.org>
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security Alchemist
(informIT)


Tom, Ben, All,

I thought I'd offer more specifics in an attempt to clarify. I train
people here to argue your position, Ben: security vulnerabilities don't
count unless they affect development. To this end, we've specifically
had success with the following approaches:

[Integrate Assessment Practices]
   [What?]
Wrap the assessment activities (both tool-based and manual techniques) in
a process that:
   * Normalizes findings under a common reporting vocabulary and
demonstrates impact
       * Include SAST, DAST, scanning, manual, out-sourced, & ALL findings
producers in this framework
   * Anchors findings in either a developmental root cause or other
software artifact:
       * Use Case, reqs, design, spec, etc.
   * Builds adaptors so that bugs are automatically entered in tracking
systems
       * Adaptors should include both tool-based and manual findings
   * Calculates impact with an agreed-upon mechanism that rates security
risk alongside other factors (see the sketch after this list):
       * Functional release criteria
       * Other non-security non-functional requirements
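For that last bullet, a minimal sketch of what an 'agreed-upon mechanism' can look like (the weights and scales are
invented; the point is that the formula gets fixed up front rather than re-negotiated per finding):

# Invented weights: the fraction of the score each factor contributes.
WEIGHTS = {
    "security_risk":   0.5,  # likelihood x impact, scored 0-10
    "release_blocker": 0.3,  # violates a functional release criterion?
    "nonfunctional":   0.2,  # other non-security NFRs, scored 0-10
}

def impact_score(security_risk, blocks_release, nfr_severity):
    """Fold security and non-security factors into one comparable number."""
    return (WEIGHTS["security_risk"] * security_risk
            + WEIGHTS["release_blocker"] * (10.0 if blocks_release else 0.0)
            + WEIGHTS["nonfunctional"] * nfr_severity)

# A high-risk injection bug that also blocks a functional release criterion
# (about 7.0) outranks a moderate, performance-only defect (about 1.0).
print(impact_score(8.0, True, 0.0))
print(impact_score(0.0, False, 5.0))

Swap in whatever weights and inputs your steering function blesses; the value is in having one formula everyone has
already agreed to.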

   [Realistic?]
I believe so. Cigital's more junior consultants work on these very tasks,
and they don't require an early adopter to fund or agree to them. There's
plenty of tooling out there to help with the adapters and plenty of
presentations/papers on risk (http://www.riskanalys.is), normalizing
findings (http://cwe.mitre.org/), and assessment methodology
(http://www.cigital.com/papers/download/j15bsi.pdf).

   [Panacea?]
No. I've done research and consulting in functional testing. If you think
security is powerless against development, try spending a few years in a
tester's shoes! Functionality may be king for development and PMs, but
I've found that functional testing has little to no power. While a lack of
features may prevent software from going out the door, very rarely do I
find that functional testing can implement an effective "go/no-go" gate
from their seat in the org. That's why testing efforts seek muscle from
their friend Security (and its distant cousins under quality "Load and
Performance") to stop releases from going out the door.

There's no reason NOT to integrate with testing efforts, reporting, and
groups: we should. There's every reason security should adhere to the same
interface everyone else does with developers (let them produce code and
let them consume bugs)... I think the steps I outlined under 'what' bring
us closer. I enjoyed Guy's book, but I don't think we need to (or can
expect to) flip organizational precedent and behavior on its head to make
progress.

[Steering]
The above scenario doesn't explicitly allow for two key inputs/outputs
from the software security ecosystem:


1.  Handling ultra-high-priority issues in real time
2.  Adjusting and evolving to changing threat landscapes

I've long suggested establishing a steering committee for this.

   [What?]
Establish a steering committee on which software security, dev,
architecture, operations, and corporate risk all sit. These folks should
manage the risk model, scoring, the security standards that drive the
assessment/verification standard, and the definition of both short-term
and longer-term mitigating strategies. I'd argue that you'd like industry
representation too. That representation could come in written form (like
the Top N lists) or in the form of consulting or a panel.

When incidents or firefights come into play, absolutely allow them to be
handled out of band (albeit through a documented process), but! Not until
they've been rated with the agreed-upon model.

   [Realistic?]
Yes. I have several clients that use this structure. I speak with
non-clients that do the same. Data gathering for scoring and
prioritization is easy if you've done the steps in the previous section.
The operations guys help you grade the pertinence of your efforts to what
they're seeing 'in the wild' too.

   [Panacea?]
Does a steering committee help you respond with agility to a high-priority
threat in real time? Not explicitly. But, it does help if your
organizational stakeholders already have a working relationship and a
mutual respect.  Also: I think one root cause of the underlying discomfort
(or dislike) with people's perspectives on this thread has been:

"OK, Fine Gary... you don't like Top N lists... So what do you do?"

Well, in my mind... The above answers that question.

[Assessment and Tools]
Do I believe that the normalized findings will emerge only from static
analysis (or any other kind of vulnerability detection tool)? Absolutely
not. People who follow my writing know I expect dramatic(ally high and
low) numbers to be associated with tools. Let's summarize my data.
Organizations can expect:


*   Static analysis tools to account for 15-20% of their total findings,
out of the box
*   An initial false positive rate as high as 75-99% from a static
analysis tool, without tuning
*   Less than 9% code coverage (by even shallow coverage metrics) from
pen-testing tools

Qualitatively, I can tell you that I expect an overwhelming majority of
static analysis results produced in an organization to come from
customization of their adopted product.

Simply: if you base your world view on only those things a tool (any
tool) produces, your world view is as narrow as a Neo-con's--and will
prove about as ineffective. The same is true of those who narrow their
scope to the OWASP Top 10 or the SANS Top 25.

[Top N Redux]
Some have left the impression that starting with a Top N list is of no
use. Please don't think I'm in this camp. In my last two presentations
I've indicated, "If you're starting from scratch, these lists (or lists
intrinsically baked into a tool's capabilities for detection) are a great
place to start," and if you can't afford frequent industry interaction,
use Top N lists as a proxy for it. They're valuable, but like anything,
only to a point.

For me, this discussion will remain circular until we think about it in
terms of measured, iterative organizational improvement. Why? Because when
an organization focuses on getting beyond a "Top N" list, it will just
create its own organization-specific "Top N" list :-) If they're smart
though, they'll call it a dashboard and vie for a promotion ;-)

From the other side? People building Top N lists know they're not a
panacea, but they also know that a lot of organizations simply can't
stomach the kind of emotional investment that BSIMM (and its ilk) comes with.

This leaves me with the following:

[Conclusions]
Top N lists are neither necessary nor sufficient for organizational success
Top N lists are necessary but not sufficient for industry success
Maturity models are neither necessary nor sufficient for organizational
success
Maturity models are necessary but not sufficient for industry success

Always avail yourself of what the industry produces;
Never confine yourself to a single industry artifact dogmatically;
Whatever you consume from industry, improve it by making it your own;
Wherever you are in your journey, continue to improve iteratively.

[Related Perennial Rabbit Holes] (bonus)
Bugs vs. Flaws: John Steven'06 -
http://www.mail-archive.com/sc-l at securecoding.org/msg00888.html
Security Vs. Quality: Cowan '02 -
http://www.securityfocus.com/archive/98/304766

----
John Steven
Senior Director; Advanced Technology Consulting
Direct: (703) 404-5726 Cell: (703) 727-4034
Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

Blog: http://www.cigital.com/justiceleague
Papers: http://www.cigital.com/papers/jsteven

http://www.cigital.com
Software Confidence. Achieved.


On 3/19/09 7:28 PM, "Benjamin Tomhave" <list-spam at secureconsulting.net>
wrote:

Why are we differentiating between "software" and "security" bugs? It
seems to me that all bugs are software bugs, and how quickly they're
tackled is a matter of prioritizing the work based on severity, impact,
and ease of resolution. It seems to me that, while it is problematic
that security testing has been excluded historically, our goal should
not be to establish yet another security-as-bolt-on state, but rather
leapfrog to the desired end-state where QA testing includes security
testing as well as functional testing. In fact, one could even argue
that security testing IS functional testing, but anyway...

If you're going to innovate, you might as well jump the curve*.

-ben

* see Kawasaki "Art of Innovation"
http://blog.guykawasaki.com/2007/06/art_of_innovati.html

Gary McGraw wrote:
Aloha Jim,

I agree that security bugs should not necessarily take precedence
over other bugs.  Most of the initiatives that we observed cycled ALL
security bugs into the standard bug tracking system (most of which
rank bugs by some kind of severity rating).  Many initiatives put
more weight on security bugs...note the term "weight" not "drop
everything and run around only working on security."  See the CMVM
practice activities for more.

The BSIMM helps to measure and then evolve a software security
initiative.  The top N security bugs activity is one of an arsenal of
tools built and used by the SSG to strategically guide the rest of
their software security initiative.  Making this a "top N bugs of any
kind" list might make sense for some organizations, but is not
something we would likely observe by studying the SSG and the
software security initiative.  Perhaps we suffer from the "looking
for the keys under the streetlight" problem.

gem


On 3/19/09 2:31 PM, "Jim Manico" <jim at manico.net> wrote:

> The top N lists we observed among the 9 were BUG lists only.  So
> that means that in general at least half of the defects were not
> being identified on the "most wanted" list using that BSIMM set of
> activities.

This sounds very problematic to me. There are many standard software
bugs that are much more critical to the enterprise than just security
bugs. It seems foolhardy to do risk assessment on security bugs only
- I think we need to bring the worlds of software development and
security analysis together more. Divided we (continue to) fail.

And Gary, this is not a critique of just your comment, but of
WebAppSec at large.

- Jim


----- Original Message ----- From: "Gary McGraw" <gem at cigital.com>
To: "Steven M. Christey" <coley at linus.mitre.org> Cc: "Sammy Migues"
<SMigues at cigital.com>; "Michael Cohen" <MCohen at cigital.com>; "Dustin
Sullivan" <dustin.sullivan at informit.com>; "Secure Code Mailing List"
<SC-L at securecoding.org> Sent: Thursday, March 19, 2009 2:50 AM
Subject: Re: [SC-L] BSIMM: Confessions of a Software Security
Alchemist (informIT)


Hi Steve,

Sorry for falling off the thread last night.  Waiting for the posts
to clear was not a great idea.

The top N lists we observed among the 9 were BUG lists only.  So
that means that in general at least half of the defects were not
being identified on the "most wanted" list using that BSIMM set of
activities. You are correct to point out that the "Architecture
Analysis" practice has other activities meant to ferret out those
sorts of flaws.

I asked my guys to work on a flaws collection a while ago, but I
have not seen anything yet.  Canuck?

There is an important difference between your CVE data which is
based on externally observed bugs (imposed on vendors by security
types mostly) and internal bug data determined using static
analysis or internal testing.  I would be very interested to know
whether Microsoft and the CVE consider the same bug #1 on internal
versus external rating systems.  The difference is in the term
"reported for" versus "discovered internally during SDL activity".

gem

http://www.cigital.com/~gem


On 3/18/09 6:14 PM, "Steven M. Christey" <coley at linus.mitre.org>
wrote:



On Wed, 18 Mar 2009, Gary McGraw wrote:

> Many of the top N lists we encountered were developed through the
> consistent use of static analysis tools.

Interesting.  Does this mean that their top N lists are less likely
to include design flaws?  (though they would be covered under
various other BSIMM activities).

> After looking at millions of lines of code (sometimes
> constantly), a ***real*** top N list of bugs emerges for an
> organization.  Eradicating number one is an obvious priority.
> Training can help.  New number one...lather, rinse, repeat.

I believe this is reflected in public CVE data.  Take a look at the
bugs that are being reported for, say, Microsoft or major Linux
vendors or most any product with a long history, and their current
number 1's are not the same as the number 1's of the past.

- Steve





--
Benjamin Tomhave, MS, CISSP
falcon at secureconsulting.net
LI: http://www.linkedin.com/in/btomhave
Blog: http://www.secureconsulting.net/
Photos: http://photos.secureconsulting.net/
Web: http://falcon.secureconsulting.net/

[ Random Quote: ]
"Concern for man and his fate must always form the chief interest of all
technical endeavors. Never forget this in the midst of your diagrams and
equations."
Albert Einstein



_______________________________________________
Secure Coding mailing list (SC-L) SC-L at securecoding.org
List information, subscriptions, etc -
http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
_______________________________________________




