Secure Coding mailing list archives

Silver Bullet turns 2: Mary Ann Davidson


From: arian.evans at anachronic.com (Arian J. Evans)
Date: Fri, 4 Apr 2008 17:39:56 -0700

Mary -- Thank you for your reply and clarification.

I am 100% on board with you about folks inventing
taxonomies and then telling business owners and
developers what artifacts they need to look for,
measure, etc., without any real cost or business
justification -- no analysis of your cost vs. return
to track/measure in the first place, let alone *why*
to fix them.

I lumped a few things together which I shouldn't have...

Defect decision making: Most organizations involved
in the care and feeding of business software, whether or
not they produced it, are still stuck at the instance level of
"do I fix this vulnerabilitly/asset now or later"?

You are at an entirely different level as a large producer.

As one of the largest ISVs, you likely have very
different needs and perspectives. For example, you
might be able to bear the cost of deeper software
defect analysis than most software-consuming
organizations, given the huge cost of regression
testing and/or the support costs of post-hotfix
deployments if they break things.

The above is just a guess on my part, but we all know that
hotfixing running production database servers is not the
same game as implementing output encoding on a web
application search field.
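
For what it's worth, the web-side fix really can be that
small. A minimal sketch of output encoding for a reflected
search term (Python purely for illustration; the handler
name is hypothetical):

    import html

    def render_search_results(query):
        # Encode the user-supplied query before echoing it
        # into the page, so "<script>" renders as inert text.
        safe = html.escape(query, quote=True)
        return "<p>Results for: <b>%s</b></p>" % safe

The database-server hotfix, by contrast, drags in downtime
windows, regression runs, and support escalations.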

Anyway, long and short: like Andrew and others, those
of us in the pragmatic "good enough" & "daily increments"
security camps would love for you to share more of your
experiences with this -- Oracle's SDL journey and so on --
and add them to our growing pool of knowledge.

A lot of what I mentioned we need to gather/answer is
really more for the consumers of your software, who write
their custom code on top of it and then try to operate
in a reasonably safe manner while making money.

As an ISV you have a slightly different operating model
and bottom line, and you probably are afforded less
ability to use temporary band-aids or mitigation steps
versus flat-out fixing your software. Not everyone has
to "fix" all their software, or make it self-defending.

Which is ironic considering I used to present on the
notion of "self-defending web applications", and help
people implement SDLs. But I've become less of a
purist now that I'm trying to help people secure
dozens to hundreds of applications at once, with
limited time and budget.

It's hard not to feel like a charlatan when you can't
give the business folks hard metrics on defect
implications or failure costs, let alone just the cost
of measurement vs. potential return.

Thank you for doing Gary's interview, and the
stimulating thoughts.

Have a good weekend all. Come see me at
RSA if any of you SC-L'ers are around. I'll be
putting people to sleep with a talk on encoding,
transcoding, and canonicalization issues
in our software (and WAFs :).
(At RSA. Lol.)
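
(Teaser: the classic trap there is validating input before
it has been fully canonicalized. A toy sketch in Python,
path purely invented:

    from urllib.parse import unquote

    raw = "%252e%252e%252fetc%252fpasswd"  # "%25" encodes "%"
    once = unquote(raw)    # "%2e%2e%2fetc%2fpasswd" -- no "../" visible
    twice = unquote(once)  # "../etc/passwd" -- the canonical form

A filter that inspects only the raw or once-decoded string
waves that through; decode to one canonical form first,
then validate.)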

Ciao

-ae




On Fri, Apr 4, 2008 at 2:50 PM, mary.ann davidson
<mary.ann.davidson at oracle.com> wrote:

 Hi Arian

 Thanks for your comments.

 Just to clarify, I was not trying to look at this at the micro level of
should we "fix buffer overflows or fix SQL injections?" We (collectively)
now have pretty good tools that can find exploitable defects in software and
the answer is, if it is exploitable you fix it, and if it is just poor
coding practice (but not exploitable) you still probably should fix it,
albeit maybe not with same urgency as "exploitable defect" and you may only
fix it going forward.

 My issue is people who invent 50,000 ideologically pure development
practices, or artifacts or metrics that someone might want you to produce
(often, someone who is an academic or a think tank person), and never look
at "Ok, what does it cost me to capture that information? Will being able to
'measure' X or create a metric help me actually manage better? If I can't
have a theologically perfect development process (and who does?), then what
are the best things to do to actually improve?" The perfect is really the
enemy of the good, or the enemy of the "better."

 I like a lot of your suggestions after "We really need to know...". I do
realize that as you close one attack vector, persistent enemies will try
something new. But one of the reasons I do feel strongly about "get the
dreck out of the code base" is that, all things being equal, forcing
attackers to work harder is a good thing. Also, reducing customers' "cost of
security operations" (through more security-worthy, more resilient code)
is a good thing: resources are always constrained, and the money and time
people spend on patching and/or random third-party "add security"
appliances and software are scarce resources that might be put to better
uses.

 If the Army has tank crews of 12, and 10 of them are busy fixing the tank
treads because they keep slipping off, they aren't going to be too ready to
fight an actual battle.

 Regards -

 Mary Ann



 Arian J. Evans wrote:

 I'll second this, Gary. You've done nice work here.

I think Mary Ann's comments are some of the most
interesting concerning what our industry needs to
focus on in the near future. (and I'd love to see you
focus on this with your series)

Her comments reminded me of a discussion on this
list with Wysopal a year or so ago.

Wysopal described defect detection in manufacturing,
and gave software security analogues. This resonates
with me as I'm fond of using analogues to airplane
manufacturing, construction, and testing to explain
what a software SDL should look like, and where/why
whitebox, blackbox, and static analysis could and
should fit in a mature SDL model, and how you
evaluate/prioritize assets and costs.

Important to this discussion is the vastly different
degrees of defect detection we perform on human
transport planes versus, say, model airplanes,
where the threat profile is (obviously) reduced.

Yet in software, security folks at many organizations
have a very hard time deciding which planes have
more valuable payloads. What defect to address?

The buffer overflow on the Tandems (that no one
knows how to write shellcode for)? The SQL injection
on the public content DB? The non-persistent XSS
buried deep in the application? HTTP control-character
injection in the caching proxy? Prioritization difficulty...

Mary Ann is asking for metrics, including cost, and
some measure of threat profiles from which to
calculate or estimate the risk presented by defects.

Ultimately I think we all want some sort of priority
weighting by which to make better decisions about
which defects are important, what our executive
next-actions are, what to filter, etc.
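
Even something as crude as the sketch below would beat
gut feel, if only the inputs were measured rather than
invented. Every number and factor here is a hypothetical
placeholder (Python purely for illustration):

    # Rank defects by expected loss averted per unit of fix cost.
    # All inputs are stand-ins for data we don't yet collect.
    def priority(exploitability, exposure, asset_value, fix_cost):
        expected_loss = exploitability * exposure * asset_value
        return expected_loss / fix_cost

    # Tandem overflow: hard to exploit, internal, costly to fix.
    tandem = priority(0.05, 0.2, 9.0, 8.0)  # ~0.011
    # Public content-DB SQL injection: easy, exposed, cheap fix.
    sqli = priority(0.9, 1.0, 5.0, 2.0)     # 2.25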

But this can't come from software analysis. This
must come from organizational measurement data.

The problem is, nobody in our industry has good
data on what those metrics are. Many of us have
little slices, but there is no universal repository
for sharing all of that information (which is needed).

The software-security marketing industry has leaned,
as a crutch, on software-defect cost-analysis studies
done by large accounting and consulting firms that have
a vested interest in puffing up the "discovered" costs,
so that they can sell multi-million-dollar, multi-year
software defect-reduction projects to the clients
reading those reports.

I don't think those reports have much to do with
our industry. (We're all aware of the "we reduced
1 bug per 10 lines of code saving the customer
millions" games those consultancies play, right?)

We really need to know:

Threat:
+ What are the attackers going after?
+ What is being attacked and how often?
+ What is being lost?


Attack Surface:
+ What attacks are successful and how often?
+ What are the most common attacks?
+ What are the most common/successful targets?

Then we need to map this back into software
design patterns and implementation practices,
and generate some metrics around the costs to
(a rough sketch follows this list):

+ hotfix & support
+ release-fix and regression test
+ redesign and re-implement
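
As a toy illustration of that kind of cost model -- every
dollar figure below is an invented placeholder, Python
purely for illustration:

    # Expected cost of a fix = direct fix work + regression
    # testing + (probability the fix breaks something) * fallout.
    def expected_fix_cost(fix, regression, p_breakage, fallout):
        return fix + regression + p_breakage * fallout

    pre_release = expected_fix_cost(1000, 500, 0.01, 2000)     # 1520.0
    post_release = expected_fix_cost(1000, 5000, 0.10, 50000)  # 11000.0

Whether the post-release number really dwarfs the
pre-release one is exactly what we need real data
to confirm or refute.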

In the manufacturing world, the people who
analyze this data are the actuaries or the
government regulatory agencies. We have
neither insurance nor regulation to drive this.

We also need to see if it really costs more
to fix code after the fact versus avoiding
defects up front. I suspect up-front avoidance
is still cheaper for unmanaged-code products,
and maybe for all design issues.

In the web software world, though, I think this
is not always the case.

(I suspect that in the web world, especially
with modern frameworks, it is just as cheap
and easy to refactor/hotfix post-production
software as it is to catch defects before final
implementation or shipping code.)

Anyway -- there's a lot that has to happen
to provide Mary with the data she wants
(and that any smart CSO at an ISV
should be asking for).

In fact -- it's amazing we've improved so
much, so fast, without even knowing what
the tensile strength of our materials is,
or having a concrete notion of what the
cost of failure is in a final product.

So where will this come from?

Insurance?

Regulation?

Industry self-policing project?

Is someone here working on this today?

Cheers. Ciao




