Secure Coding mailing list archives

Positive impact of an SSG


From: brian at fortify.com (Brian Chess)
Date: Wed, 11 Mar 2009 14:29:10 -0400

Ben, Pravir,
Skepticism is healthy, but recognize that:

1) These criticisms would not be possible if we hadn't explained how BSIMM
was created and offered up the origins of the data.  If we had simply said
"here, we think this will probably work", we could have created a much more
elegant model, and it would have been easy to brush all of the "how did you
validate it?" questions under the rug with lines like "Trust us, we're the
experts!"

2) We published our data set.  If you don't like the model we created, you
can use our data to create your own.  If you don't think we came at this the
right way, you can conduct a better experiment, publish your data, and
demonstrate that ours is inferior.  I hope that happens.  It's how progress
occurs in many other disciplines.

So then, is software security a solved problem?  Of course not!  There's
plenty left to be done, and the landscape will be different next year.  We
will have new dilemmas and we'll be working under tighter tolerances.  We
will need a constant stream of new and unproven ideas to try out and report
back on.  So BSIMM isn't game over, but in moving from "no supporting
evidence" to "based on the data", we've raised the bar.

Ben, thanks for the DNS digging.

Brian

On 3/11/09 1:32 PM, "Benjamin Tomhave" <list-spam at secureconsulting.net>
wrote:

I think the celebration is a bit premature, for many of the reasons
Pravir just covered. I think that perhaps the problem we're having here
is that you've not really tested your results, nor have you iterated
through the process a second time to re-evaluate the working theory. If you were
approaching this scientifically, I think the process would look like
this (http://en.wikipedia.org/wiki/Scientific_method):
    1. Use your experience: Consider the problem and try to make sense
of it. Look for previous explanations. If this is a new problem to you,
then move to step 2.
    2. Form a conjecture: When nothing else is yet known, try to state
an explanation, to someone else, or to your notebook.
    3. Deduce a prediction from that explanation: If you assume 2 is
true, what consequences follow?
    4. Test: Look for the opposite of each consequence in order to
disprove 2. It is a logical error to seek 3 directly as proof of 2. This
error is called affirming the consequent.

I think what you're missing, then, is at least step 4 as well as
iterating through the process again (and possibly step 3). It's a bit
abstract, perhaps, to rigidly apply the scientific method to this
program, but I think it's instructive to consider how you might do this.

BTW, your citation of the xkcd strip on causation vs correlation is
actually instructive here. You've developed a model based on correlation
without demonstrating causation at all. Not to get abstruse, but I don't
see that your model is properly supported or validated. In the end,
ironically, it seems to come down more to expert theories than empirical
evidence. A handful of experts studied 9 organizations and correlated
"highly successful" with "110 practices", but without properly defining
success, without generalizing the model to all types of organizations
(or without defining the scope), and without testing/validating the model.
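
To make the correlation-vs-causation point concrete, here's a toy Python
sketch (entirely invented numbers, nothing to do with the actual BSIMM
data): a hidden "resources" factor drives both the number of adopted
practices and the measured "success", so the two end up strongly
correlated even though neither causes the other.

    # Toy simulation: a confounder produces correlation without causation.
    import random

    random.seed(1)
    orgs = []
    for _ in range(1000):
        resources = random.gauss(0, 1)                  # hidden confounder
        practices = 50 + 20 * resources + random.gauss(0, 5)
        success = 0.5 + 0.2 * resources + random.gauss(0, 0.05)
        orgs.append((practices, success))

    def pearson(pairs):
        n = len(pairs)
        mx = sum(p for p, _ in pairs) / n
        my = sum(s for _, s in pairs) / n
        cov = sum((p - mx) * (s - my) for p, s in pairs) / n
        vx = sum((p - mx) ** 2 for p, _ in pairs) / n
        vy = sum((s - my) ** 2 for _, s in pairs) / n
        return cov / (vx * vy) ** 0.5

    print("correlation:", round(pearson(orgs), 2))      # ~0.9, no causal link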

The good news is that you can now test the model. The bad news is that
you ("you" being the collective behind BSI-MM) probably should have
tested the model first before jumping straight to fanfare and hoopla. :)

In the end, I'm sure that BSI-MM will be a fine model, though the
questions will then be "can I implement it?" and "will it have
sustainable value on its own?" If the value of the model rests on
Cigital and Fortify pushing it into organizations by force (much as the
Big N, ISACA, and ITGI have tried to do with CObIT and valIT), then I
submit that it will encounter problems. It needs to be able to stand on
its own, properly validated, with inherent value through logical
implementation.

Which perhaps raises a question: is BSI-MM intended as an implementation
model to achieve better security in software development, or is it a
measurement tool for evaluating the current security maturity of
software development? A maturity model is typically used to measure
maturity, which means that someone has to then come along and provide
guidance on how to implement a program that can reach an optimal
measurement. (and mayhaps this would be a good time to get together with
Pravir to see if there's a way that you could both have winning game plans)

BTW, when you get to the point of defining success, I would suggest
looking at FAIR (since they lean toward quantitative vs qualitative risk
assessment based on Bayesian statistics) as well as looking at the
concept of "risk resiliency" advocated, in particular, by BT. fwiw.
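
For a flavor of what a quantitative, Bayesian-style definition of success
might look like, here's a minimal sketch (the prior and the counts below
are invented for illustration; they come from neither FAIR nor BSIMM): a
Beta prior on the per-release probability of a serious security defect,
updated against a hypothetical release history.

    # Toy Beta-Binomial update (illustrative numbers only).
    prior_alpha, prior_beta = 2.0, 8.0      # assumed prior, mean 0.2
    releases, defective = 20, 2             # hypothetical observations

    post_alpha = prior_alpha + defective
    post_beta = prior_beta + (releases - defective)
    post_mean = post_alpha / (post_alpha + post_beta)

    print("posterior mean defect probability per release:",
          round(post_mean, 3))              # ~0.133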

Anyway...

On whether the site is up or not, I think DNS is hosed for the domain...
I tried it from three locations (separate regions, separate providers)
and got the same results:

$ host bsi-mm.com
Host bsi-mm.com not found: 3(NXDOMAIN)

$ host bsi-mm.com
;; connection timed out; no servers could be reached

freeproxy.us also times out...
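
For completeness, here's roughly the same check scripted so it can be run
against several resolvers at once. This is only a sketch: it assumes the
third-party dnspython package (2.x), and the resolver addresses are just
well-known public examples.

    # Query a few resolvers directly and report what each one says.
    import dns.resolver

    DOMAIN = "bsi-mm.com"
    RESOLVERS = {"resolver-a": "8.8.8.8", "resolver-b": "9.9.9.9"}

    for label, ip in RESOLVERS.items():
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [ip]
        r.lifetime = 5.0                    # give up after five seconds
        try:
            answers = r.resolve(DOMAIN, "A")
            print(label, [a.to_text() for a in answers])
        except Exception as exc:            # NXDOMAIN, timeout, SERVFAIL, ...
            print(label, "lookup failed:", repr(exc))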

cheers,

-ben

Brian Chess wrote:
Ben!  Thank you!  When you talk about sample size, it gives me hope that
we're on the right track.  We can either:

1) Use ideas that "experts" theorize will work
or
2) Gather empirical evidence to judge one idea against another.

We in the security crowd often try to hide behind the need for secrecy,
and that's pushed us toward relying almost entirely on people who have
nothing but rhetoric and personal reputation to stand on.  BSIMM pretty
well shows that, in 2009, we can do better.  It's a big step forward to
collect data and then argue about what it means.  I know it's already
made the rounds, but let's have some XKCD to celebrate:
    http://xkcd.com/552/

I think your question about defining success is an important one.  We
were loose about it in this first round, and I hope it's something we
can tighten up in our follow-on work.  Here's my thinking as of today:
software security is not the goal, it's one of the many things an
organization needs to do in order to meet its objectives.  We need to
look at how a software security initiative (or lack thereof) affects the
organization's ability to meet its objectives.  This is going to be
messy, but it's either that or go back to making stuff up.

BTW, I checked the BSIMM web site after I read your message.  It worked
for me.  Try this:
    http://www.downforeveryoneorjustme.com/bsi-mm.com

Brian

On 3/11/09 10:48 AM, "Benjamin Tomhave" <list-spam at secureconsulting.net>
wrote:

    I think it's an interesting leap of faith. Statistically speaking, 9 is
    a very small sample size. Thus, the proposed model will be viewed
    skeptically until it is validated with a much larger and more diverse
    sample. Putting it another way, there's no way I can take this to a
    small or medium-sized org and have them see immediate relevance,
    because their first reaction is going to be "those are 9 large orgs
    with lots of resources - we don't have that luxury."

    You quoted "we can say with confidence that these activities are
    commonly found in highly successful programs" - how do you define a
    "highly successful program"? What's the rule or metric? Is this a rule
    or metric that can be genericized easily to all development teams?

    My concern is exactly what you speculate about... organizations are
    going to look at this and either try to tackle everything (and fail) or
    decide there's too much to tackle (and quit). In my experience working
    with maturity models as a consultant, very few people actually
    understand the concept. Folks are far more tuned-in to a PCI-like
    prescriptive method. Ironically, the PCI folks say the same thing you
    did - that it's not meant to be prescriptive, that it's supposed to be
    based on risk management practices - yet look how it's used.

    Maybe you've addressed this, but it doesn't sound like it. I'd perhaps
    be better educated here if the web site wasn't down... ;)

    -ben

    Sammy Migues wrote:
    > Hi Pravir,
    >
    > Thanks for clarifying what you're positing. I'm not sure how we could
    > have been more clear in the BSIMM text accompanying the exposition of
    > the collective activities about the need to take this information and
    > work it into your own culture (i.e., do "risk management"). As a few
    > examples:
    >
    > p. 3: "BSIMM is meant as a guide for building and evolving a software
    > security initiative. As you will see when you familiarize yourself
    > with the BSIMM activities, instilling software security into an
    > organization takes careful planning and always involves broad
    > organizational change. By clearly noting objectives and goals and by
    > tracking practices with metrics tailored to your own initiative, you
    > can methodically build software security in to your organization's
    > software development practices."
    >
    > p. 47: "Choosing which of the 110 BSIMM activities to adopt and in
    > what order can be a challenge. We suggest creating a software
    > security initiative strategy and plan by focusing on goals and
    > objectives first and letting the activities select themselves.
    > Creating a timeline for rollout is often very useful. Of course
    > learning from experience is also a good strategy."
    >
    > p. 47: "Of the 110 possible activities in BSIMM, there are ten
    > activities that all of the nine programs we studied carry out. Though
    > we can't directly conclude that these ten activities are necessary
    > for all software security initiatives, we can say with confidence
    > that these activities are commonly found in highly successful
    > programs. This suggests that if you are working on an initiative of
    > your own, you should consider these ten activities particularly
    > carefully (not to mention the other 100)."
    >
    > p. 48: "The chart below shows how many of the nine organizations we
    > studied have adopted various activities. Though you can use this as a
    > rough "weighting" of activities by prevalence, a software security
    > initiative plan is best approached through goals and objectives."
    >
    > Your words (...BSIMM fails...) imply (to me) that you posit
    > organizations attempting to use the collected wisdom in BSIMM will,
    > inexplicably, look at it and say "Okay, we have to do all 110 of
    > these things exactly as written, so let's get started" without regard
    > to their local need. This is as opposed to, say, looking at it and
    > thinking "Here's what nine companies have spent dozens of
    > person-decades and millions of dollars learning about what works;
    > let's see what we can glean from that." Uhmmmm, okay.
    >
    > Yes, previous models exist. Although it may have come up in
    > conversation, we did not ask any of the nine something like "What
    > model did you start with back in the beginning?" because it simply
    > isn't relevant to what we're trying to accomplish (documenting what
    > successful organizations are doing), just as "could" and "should"
    > aren't relevant. We asked "What *are* you doing now?" and documented
    > it so others could learn from it.
    >
    > --Sammy.
    >
    > -----Original Message----- From: Pravir Chandra
    > [mailto:chandra at list.org] Sent: Wednesday, March 11, 2009 4:00 AM To:
    > Sammy Migues; sc-l-bounces at securecoding.org; sc-l at securecoding.org
    > Subject: Re: [SC-L] Positive impact of an SSG
    >
    > Yes, I don't think it's exclusive to your BSIMM interviews that you
    > found when people put controls into place, they saw improvement.
    > That's what I (and I'm sure many other consulting firms) have been
    > doing for years based upon previous models (CLASP, MS SDL, etc.).
    > Nothing to do with BSIMM per se (actually, most of what DTCC started
    > doing was based on CLASP), just that they added controls 'early into
    > software development lifecycle' and saw benefit, which isn't
    > surprising.
    >
    > That being said, the important part we're missing as 'software
    > security guys' isn't the specification of all the possible things
    > that an organization *could* do, but rather what a given organization
    > *should* do based on good business decisions around risk management.
    > And that's the crux of what BSIMM fails to do. By basing the current
    > maturity model on the collected practices of 9 massive firms that
    > spend the most on that problem, anyone (aside from firms in a similar
    > situation to your 9) that attempts to apply it to their organization
    > effectively throws risk management decisions out the window and
    > commits to a much more costly solution than they could have created
    > based on the knowledge of their own business needs since all the
    > practices are based solely on the behaviors of the select few firms
    > you interviewed. I'm not discounting the validity of the empirical
    > data, I'm just positing that it isn't scientifically valid for
    > solving the problem at hand.
    >
    > I'm interested to hear what you learn when you get to the small and
    > medium-sized businesses as well as firms using agile development
    > models (something I particularly considered and accounted for with
    > SAMM).
    >
    > Regardless of whether we agree on the percentage of orgs for which a
    > dedicated SSG isn't cost effective, I'm sure we can agree that
    > affording 'someone in charge of success' doesn't equate to a
    > dedicated SSG. There's a myriad of ways that can be accomplished in
    > any organizational structure.
    >
    > Thanks!
    >
    > p.
    >
    > ~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~ ~~~~~~~~ ~~~~~ ~~~ ~~ ~ Pravir
    > Chandra                      chandra<at>list<dot>org PGP:    CE60
    > 0E10 9207 7290 06EB   5107 4032 63FC 338E 16E4 ~ ~~ ~~~ ~~~~~
    > ~~~~~~~~ ~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~
    >
    > -----Original Message----- From: Sammy Migues <SMigues at cigital.com>
    >
    > Date: Tue, 10 Mar 2009 23:15:39
    > To: sc-l at securecoding.org
    > Subject: Re: [SC-L] Positive impact of an SSG
    >
    >
    > Hi Pravir,
    >
    > Yes, I agree completely: the data gathered in the BSIMM interviews
    > seem to indicate that "the controls overall" led to what the
    > interviewees saw as improvements in their capability to produce
    > secure software.
    >
    > In the nine companies interviewed, those controls (BSIMM activities,
    > I think) sprang from well established SSGs -- that is, a specific
    > person or persons with the responsibility for ensuring lots (110,
    > collectively) of activities actually get done.
    >
    > The BSIMM data to date from specific large organizations indicate
    > that a little under 100:1 is the average ratio for dev/QA to SSG
    > size. It'll be interesting to see how this changes when we get to
    > interviewing smaller organizations and we see if and how they're
    > actually getting it done.
    >
    > Personally, I don't believe I agree with your guess that 95% of
    > organizations building code can't afford an SSG. I believe every
    > organization that wants to succeed can afford to have someone in
    > charge of success, but that's just my opinion and isn't relevant to
    > BSIMM.
    >
    > Cheers,
    >
    > --Sammy.
    >
    >
    > -----Original Message----- From: Pravir Chandra
    > [mailto:chandra at list.org] Sent: Tuesday, March 10, 2009 6:31 PM To:
    > Sammy Migues Cc: sc-l at securecoding.org Subject: Re: [SC-L] Positive
    > impact of an SSG
    >
    > Hey Sammy.
    >
    > How does that pertain to a software security group (SSG) per se? The
    > details below seem to indicate that it was the controls overall that
    > led to the positive impact.
    >
    > My main point is that supporting an SSG isn't cost effective for 95%
    > of the organizations out there that are building code. That's why in
    > SAMM, we didn't mandate the structure of the organization and instead
    > concentrated on the functions fulfilled by security guys (regardless
    > of their placement in the org).
    >
    > p.
    >
    > On Tue, Mar 10, 2009 at 7:48 AM, Sammy Migues <SMigues at cigital.com>
    > wrote:
    >> Hi all,
    >>
    >> I've received some private questions about the 110 activities in
    >> BSIMM (bsi-mm.com). Since we built the model directly from the data
    >> gathered, each activity is actually being done in one of the nine
    >> organizations interviewed. The question is whether there's any
    >> evidence the activities are actually effective as opposed to simply
    >> being done.
    >>
    >> Since we can't publish any private data, I'd like to point folks at
    >> this recent article in Information Security Magazine. Jim Routh,
    >> CISO of DTCC (one of the nine organizations interviewed), is quoted
    >> as follows relative to the impact of software security group
    >> activities:
    >>
    >>
    
    >> http://searchsecurity.techtarget.com/magazineFeature/0,296894,sid14_gci1346974,00.html
    >>
    >>
    >> "One of Routh's big wins is inserting security controls early into
    >> software development lifecycle at the DTCC. Vulnerabilities are
    >> weeded out well before they appear in functional code that ends up
    >> in production and that has resulted in close to $2 million in
    >> productivity gains on a base of $150 million spend for development,
    >> Routh says.
    >>
    >> "Those gains are exclusively the result of having mature and
    >> effective controls within our system and software development
    >> lifecycle," Routh says. This is a three-year-old initiative that
    >> educates and certifies developers in all DTCC environments in
    >> security. Developers are also provided with the necessary
    >> code-scanning tools and consulting and services help to keep
    >> production code close to pristine."
    >>
    >> --Sammy.
    >>
    >> Sammy Migues Principal, Technology 703.404.5830 -
    >> http://www.cigital.com Software confidence. Achieved.
    >> smigues at cigital.com

    --
    Benjamin Tomhave, MS, CISSP
    falcon at secureconsulting.net
    LI: http://www.linkedin.com/in/btomhave
    Blog: http://www.secureconsulting.net/
    Photos: http://photos.secureconsulting.net/
    Web: http://falcon.secureconsulting.net/

    [ Random Quote: ]
    "Don't you wish there were a knob on the TV to turn up the intelligence?
    There's one marked 'Brightness,' but it doesn't work."
    Gallagher
    _______________________________________________
    Secure Coding mailing list (SC-L) SC-L at securecoding.org
    List information, subscriptions, etc -
    http://krvw.com/mailman/listinfo/sc-l
    List charter available at - http://www.securecoding.org/list/charter.php
    SC-L is hosted and moderated by KRvW Associates, LLC
    (http://www.KRvW.com)
    as a free, non-commercial service to the software security community.
    _______________________________________________


--
Benjamin Tomhave, MS, CISSP
falcon at secureconsulting.net
LI: http://www.linkedin.com/in/btomhave
Blog: http://www.secureconsulting.net/
Photos: http://photos.secureconsulting.net/
Web: http://falcon.secureconsulting.net/

[ Random Quote: ]
"Why don't they make the whole plane out of that black box stuff."
Steven Wright

