Secure Coding mailing list archives

BSIMM update (informIT)


From: list-spam at secureconsulting.net (Benjamin Tomhave)
Date: Tue, 02 Feb 2010 22:18:54 -0500

<soapbox>While I can't disagree with this based on modern reality, I'm
increasingly hesitant to let the conversation bring in risk, since
it's almost complete garbage these days. Nobody really understands it,
nobody really does it very well (especially if we exclude financial
services and insurance - and even then, look what happened to Wall
Street risk models!), and more importantly, it's implemented so shoddily
that there's no real, reasonable way to demonstrate risk
remediation/reduction, because talking about it means opening up a whole
other range of discussions ("what is most important to the business?",
"how are risk levels defined in business terms?", "what role do
data and systems play in the business strategy?", "how does data flow
into and out of the environment?", and so on). Anyway... the long and
short is this: let's stop fooling ourselves by pretending that risk has
anything to do with these conversations.</soapbox>

I think:
 - yes to prescriptive!
 - yes to legal/regulatory mandates!
 - caution: we need some sort of evolving maturity framework to which
the previous two points can be pegged!

cheers,

-ben

On 2/2/10 4:32 PM, Arian J. Evans wrote:
100% agree with the first half of your response, Kevin. Here's what
people ask and need:


Strategic folks (VP, CxO) most frequently ask:

+ What do I do next? / What should we focus on next? (prescriptive)

+ How do we tell if we are reducing risk? (prescriptive guidance again)

Initially they ask for descriptive information, but once they get
going they need strategic prescriptions.


Tactical folks tend to ask:

+ What should we fix first? (prescriptive)

+ What steps can I take to reduce XSS attack surface by 80%? (yes, a
prescriptive blacklist can work here; see the sketch below)


Implementation-level folks ask:

+ What do I do about this specific attack/weakness?

+ How do I make my compensating control (WAF, IPS) block this specific attack?

etc.
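
To make the prescriptive flavor of these questions concrete, here is a
rough, purely illustrative sketch (in Java; the class name and the
blacklist pattern are made up for this example) of the kind of answer
the tactical and implementation folks are after: a crude regex
blacklist to cut the obvious reflected-XSS payloads, plus HTML output
encoding, which is the part that actually matters.

import java.util.regex.Pattern;

/**
 * Illustrative only: a crude prescriptive answer to "reduce XSS attack
 * surface" -- reject obviously hostile input, then HTML-encode on output.
 * A blacklist alone is easy to bypass; output encoding does the real work.
 */
public final class XssQuickFix {

    // Common reflected-XSS markers (hypothetical list, not exhaustive).
    private static final Pattern BLACKLIST = Pattern.compile(
            "(?i)<script|javascript:|onerror\\s*=|onload\\s*=|<iframe");

    /** Returns true if the input trips the blacklist. */
    public static boolean looksHostile(String input) {
        return input != null && BLACKLIST.matcher(input).find();
    }

    /** Minimal HTML entity encoding for untrusted output. */
    public static String encodeForHtml(String input) {
        if (input == null) return "";
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}

The same blacklist pattern, pushed out to a WAF or IPS rule, is roughly
what the compensating-control question above is asking for.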

BSIMM is probably useful for government agencies, or some large
organizations. But the vast majority of clients I work with don't have
the time, need, or ability to take advantage of BSIMM. Nor should
they. They don't need a software security group.

They need a clear-cut tree of prescriptive guidelines that work in a
measurable fashion. I agree and strongly empathize with Gary on many
premises of his article - including that not many folks have metrics,
and tend to rely more on faith and magic.

But, as should be no surprise, I categorically disagree with the
entire concluding paragraph of the article. Sadly, it's just more faith
and magic from Gary's end. We all can do better than that.

There are other ways to gather and measure useful metrics easily
without BSIMM. Black-box and pen-test metrics, and Top(n) list metrics,
are highly useful metrics - and definitely better than no metrics.
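
For instance, a Top(n) list metric costs almost nothing to compute once
findings are tagged by category. A hypothetical sketch (the class name
and sample data are made up for illustration):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Hypothetical sketch: turn tagged findings into a Top(n) list. */
public final class TopNFindings {

    /** Counts findings per category and returns the n most frequent. */
    public static List<Map.Entry<String, Long>> topN(List<String> categories, int n) {
        return categories.stream()
                .collect(Collectors.groupingBy(c -> c, Collectors.counting()))
                .entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Sample data standing in for tagged pen-test findings.
        List<String> findings = List.of("XSS", "SQLi", "XSS", "CSRF", "XSS", "SQLi");
        topN(findings, 2).forEach(e -> System.out.println(e.getKey() + ": " + e.getValue()));
        // Prints "XSS: 3" then "SQLi: 2".
    }
}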

Pragmatically, I think Ralph Nader fits better than Feynman for this discussion.

Nader's Top(n) lists and Bug Parades earned us many safer-society
(cars, water, etc.) features over the last five decades.

Feynman didn't change much in terms of business SOP.

Good day then,

---
Arian Evans
capitalist marksman. eats animals.



On Tue, Feb 2, 2010 at 9:30 AM, Wall, Kevin <Kevin.Wall at qwest.com> wrote:
On Thu, 28 Jan 2010 10:34:30 -0500, Gary McGraw wrote:

Among other things, David [Rice] and I discussed the difference between
descriptive models like BSIMM and prescriptive models which purport to
tell you what you should do.  I just wrote an article about that for
informIT.  The title is

"Cargo Cult Computer Security: Why we need more description and less
prescription."
http://www.informit.com/articles/article.aspx?p=1562220

First, let me say that I have been the team lead of a small Software
Security Group (specifically, an Application Security team) at a
large telecom company for the past 11 years, so I am writing this from
an SSG practitioner's perspective.

Second, let me say that I appreciate descriptive holistic approaches to
security such as BSIMM and OWASP's OpenSAMM. I think they are much
needed, though seldom heeded.

Which brings me to my third point. In my 11 years of experience working
on this SSG, it is very rare that application development teams are
looking for a _descriptive_ approach. Almost always, they are
looking for a _prescriptive_ one. They want specific solutions
to specific problems, not some general formula for an approach that will
make them more secure. To those application development teams, something
like OWASP's ESAPI is much more valuable than something like BSIMM or
OpenSAMM. In fact, I believe your own BSIMM research confirms that
many companies' SSGs have developed their own proprietary security APIs
for use by their application development teams. Therefore,
I would not say we need less _prescriptive_ and more _descriptive_
approaches. Both are useful and ideally should go together like hand and
glove. (To that end, I also ask that you overlook some of my somewhat
overzealous ESAPI developer colleagues who in the past made claims that
ESAPI was the greatest thing since sliced bread. While I am an ardent
ESAPI supporter and contributor, I proclaim it will *NOT* solve our pandemic
security issues alone, nor, for the record, will it solve world hunger.) ;-)
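
To illustrate the level of prescription I mean, here is a minimal
sketch of the kind of drop-in help ESAPI gives a development team
(assuming a standard ESAPI.properties configuration is on the
classpath; the class name and error handling are simplified for this
example):

import org.owasp.esapi.ESAPI;

/**
 * Sketch of ESAPI's prescriptive value: one encoder call per output
 * context, instead of a process framework.
 */
public final class EsapiEncodingExample {

    /** Renders untrusted input safely into an HTML body. */
    public static String renderComment(String untrusted) {
        return "<p>" + ESAPI.encoder().encodeForHTML(untrusted) + "</p>";
    }

    /** Renders untrusted input safely into a JavaScript string literal. */
    public static String renderJsValue(String untrusted) {
        return "var name = '" + ESAPI.encoder().encodeForJavaScript(untrusted) + "';";
    }
}

That is the granularity at which an application team with an open XSS
finding actually wants an answer.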

I suspect that this apparent dichotomy in our perception of the
usefulness of the prescriptive vs. descriptive approaches is explained
in part by the different audiences with whom we associate. Hang out with
VPs, CSOs, and executive directors, and they are likely looking for advice on
an SSDLC or broad direction to cover their specifically identified
security gaps. However, in the trenches--where my team works--they want
specifics. They ask us "How can you help us to eliminate our specific
XSS or CSRF issues?", "Can you provide us with a secure SSO solution
that is compliant with both corporate information security policies and
regulatory requirements?", etc. If our SSG were to hand them something like
BSIMM, they would come away telling their management that we didn't help
them at all.
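
To be concrete about the specificity those teams expect, here is a
hypothetical sketch of the classic synchronizer-token answer to the
CSRF question, using the standard servlet API (the class and parameter
names are mine, and a real deployment needs more care, e.g. applying
the check to every state-changing request):

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

/**
 * Hypothetical sketch of a synchronizer-token CSRF defense: issue a
 * random per-session token, embed it in forms, and reject state-changing
 * requests that do not echo it back.
 */
public final class CsrfGuard {

    private static final String TOKEN_ATTR = "csrfToken";
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Returns the session's CSRF token, creating one if needed. */
    public static String tokenFor(HttpSession session) {
        String token = (String) session.getAttribute(TOKEN_ATTR);
        if (token == null) {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);
            token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
            session.setAttribute(TOKEN_ATTR, token);
        }
        return token;
    }

    /** Returns true if the request echoes the session's token back. */
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) return false;
        String expected = (String) session.getAttribute(TOKEN_ATTR);
        String supplied = request.getParameter("csrfToken");
        return expected != null && expected.equals(supplied);
    }
}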

This brings me to my fourth, and likely most controversial point. Despite
the interesting historical story about Feynman, I question whether BSIMM
is really "scientific" as the BSIMM community claims. I would contend
that we are only fooling ourselves if we claim otherwise. And while
BSIMM is a refreshing approach compared to the traditional FUD modus
operandi taken by most security vendors hyping their security products,
I would argue that BSIMM is no more scientific than the efforts of those
who gather common quality metrics such as defects/KLOC counts. Certainly
there is some correlation there, but cause and effect relationships
are far from obvious and seem to have little predictive accuracy.

Sure, BSIMM _looks_ scientific on the outside, but simply collecting
specific quantifiable data alone does not make something a scientific
endeavor.  Yes, it is a start, but we've been collecting quantifiable
data for decades on things like software defects, and I would contend
BSIMM is no more scientific than those efforts. Is BSIMM moving in
the right direction? I think so. But BSIMM is no more scientific
than most of the other areas of computer "science".

To study something scientifically goes _beyond_ simply gathering
observable and measurable evidence. Not only does data need to be
collected, but it also needs to be tested against a hypothesis that offers
a tentative *explanation* of the observed phenomena;
i.e., the hypothesis should offer some predictive value. Furthermore,
the steps of the experiment must be _repeatable_, not just by
those currently involved in the attempted scientific endeavor, but by
*anyone* who would care to repeat the experiment. If the
steps are not repeatable, then any predictive value of the study is lost.

While I am certainly not privy to the exact method used to arrive at the
BSIMM data (I have read through the "BSIMM Begin" survey, but have not
been involved in a full BSIMM assessment), I would contend that the
process is not repeatable to the degree required by science.
In fact, I would claim that in most organizations, you could take any group
of BSIMM interviewers, have them question different people in the
organization using the exact same questions, and arrive at different results.
Indeed, I am willing to bet that the different members of my Application
Security team who have all worked together for about 8 years would
answer a significant number of the BSIMM Begin survey questions quite
differently. (My explanation for this phenomenon is the general observation
that if you ask a group of N engineers for their opinion on something,
they will almost certainly arrive at N+1 different opinions. ;-)

I commend the BSIMM sponsors and leadership for releasing BSIMM under
a Creative Commons license, but at the same time, I challenge them
to put forth additional information explaining their data collection
process and, in particular, describing how it
avoids unintentional bias. (E.g., are assessment participants chosen at
random? By whom?  How do you know you have a representative sample of
a company? Etc.)

I also challenge BSIMM to show that their data collection is repeatable by
others following their process. The published BSIMM Begin survey is a
good start, but information regarding the full assessment seems to be
lacking.

I challenge BSIMM to put forth their hypotheses, plainly stated.
In your InformIT article, you wrote:

   "Another distinct advantage that descriptive models have over
   prescriptive models is the ability to compare current observations
   with past observations."

In my opinion, comparison of observations from two companies is not
worth the paper it is printed on UNLESS we can extrapolate from
this data and make accurate predictions based on past findings. Therefore,
I also would challenge BSIMM to publicly make some specific predictions
using their model and collected data so that their hypotheses can be
tested independently by others.

Finally, while I would like, as you did, to blame the "Cargo Cult"
mentality of computer security / software security on "its relative youth as
a field", I believe there is something deeper going on here. For one,
computer science / IT / whatever you want to call this much broader
field has the same issue. And while computer "science" is young as
measured against most other scientific disciplines, it is by no means an
immature field. (As a discipline, it is much older than I, and trust me,
I am no spring chicken. :)

After observing this field for 30+ years (ouch!), I have concluded that we
have not matured into a science because as a discipline we *do NOT
really want to!*  We can't even decide if we want this study of computers
/ information processing / etc. to be a "science", an "engineering
discipline", or a "craft". (And some even would like it to be an "art".)
Most of us--myself included--are too lazy to do the disciplined work
that true science requires, and that includes having enough guts to
challenge the academic culture and *demand* funding to do well-designed
scientific experiments in our discipline at our leading universities. IMO,
if we fail to do this, CS is doomed to always remain a science wannabe.

Some would say that because our broader field of study--whether you
call it Computer Science, Computer Engineering, Information Science,
whatever--is part science, part engineering, part craft, and part
something unique that humanity has never before encountered, attempts
to treat it as a science will not succeed. However, surely this does
not mean that we should not attempt to add some scientific rigor to
it as a discipline. To that end, efforts such as BSIMM should be welcomed
by all. But it is also important for those who prefer _descriptive_ approaches
like BSIMM to acknowledge the importance of _prescriptive_ approaches
such as ESAPI, WAFs, anti-malware software, etc. We truly need *both*
approaches to be successful.

Regards,
-kevin
---
Kevin W. Wall           Qwest Information Technology, Inc.
Kevin.Wall at qwest.com    Phone: 614.215.4788
"It is practically impossible to teach good programming to students
 that have had a prior exposure to BASIC: as potential programmers
 they are mentally mutilated beyond hope of regeneration"
   - Edsger Dijkstra, How do we tell truths that matter?
     http://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498.html

This communication is the property of Qwest and may contain confidential or
privileged information. Unauthorized use of this communication is strictly
prohibited and may be unlawful.  If you have received this communication
in error, please immediately notify the sender by reply e-mail and destroy
all copies of the communication and any attachments.

_______________________________________________
Secure Coding mailing list (SC-L) SC-L at securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
_______________________________________________





-- 
Benjamin Tomhave, MS, CISSP
tomhave at secureconsulting.net
Blog: http://www.secureconsulting.net/
Twitter: http://twitter.com/falconsview
LI: http://www.linkedin.com/in/btomhave

[ Random Quote: ]
"Some of us will do our jobs well and some will not, but we will be
judged by only one thing-the result."
Vince Lombardi

