Secure Coding mailing list archives

Software security definition(s)


From: arian.evans at anachronic.com (Arian J. Evans)
Date: Thu, 13 Mar 2008 02:03:29 -0700

I hate to start a random definition thread, but Ben asked me a good
question and I'm curious if anyone else sees this matter in the
same fashion that I do. Ben asked why I refer to software security
as similar to artifacts identified by emergent behaviors:

 >> Software security is an emergent behavior that changes over time
 >> with software use, evolution, extension, and the changing threat
 >> landscape. It's not an art you get people inspired in.
 >
 > You keep using that phrase - "emergent behavior" - and I'm not sure what
 > you mean.

So one of the biggest challenges for me was communicating
to any audience, and most importantly the business and
developers, what "secure software/applications" means.

Around 2000/2001 I was still fixated on artifacts in code
and in QA: secure software == strongly typed variables
with draconian input validation and character-set handling
(canonicalization and encoding types).
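
A rough sketch of what that fixation looked like in practice -- the
field names and whitelist are made up, not from any real codebase:

    import re
    import unicodedata

    # Hypothetical "draconian" input handlers: canonicalize first, then
    # reject anything outside a strict whitelist for the expected type.
    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

    def canonicalize(value):
        # Collapse alternate encodings to one canonical form (NFKC) so
        # look-alike characters can't slip past the whitelist below.
        return unicodedata.normalize("NFKC", value)

    def parse_username(raw):
        value = canonicalize(raw)
        if not USERNAME_RE.fullmatch(value):
            raise ValueError("username fails input validation")
        return value

    def parse_quantity(raw):
        # Strong typing at the boundary: callers only ever see an int,
        # never the raw string.
        value = canonicalize(raw).strip()
        if not value.isdigit():
            raise ValueError("quantity must be a non-negative integer")
        return int(value)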

Yet I continued to have problems with the word "security"
when talking to business owners and developers about software.

This is because you say "security" and business owners
see sandbags blocking their figurative river of profits. Corporate
and gov developers see sandbags stopping them from going
home and having dinner with the wife or playing WoW.
Startup developers just laugh.

I started using phrases like "predictable and dependable software"
instead of security, giving examples like "Rob's Report" -- it has all
these user requirements it must meet to pass on to UAT, and if it
fails, blah blah. SQL injection is a failure of degree, not of kind:
the same kind of failure as a type-mismatch error that stops the
report from running, but a huge difference in degree of impact.
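
Rough sketch of the degree-versus-kind point -- a made-up report query
against a throwaway SQLite table, not Rob's actual report. The same
string-pasting habit that just breaks the report on an apostrophe hands
over the whole table on a hostile string; the parameterized version
shrugs off both:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE reports (owner TEXT, body TEXT)")
    conn.execute("INSERT INTO reports VALUES ('rob', 'quarterly numbers')")

    def report_concat(owner):
        # Failure of kind: the query is assembled by pasting strings.
        # "O'Brien" just breaks the report (degree: annoyance);
        # "rob' OR '1'='1" returns every row (degree: breach).
        return conn.execute(
            "SELECT body FROM reports WHERE owner = '" + owner + "'"
        ).fetchall()

    def report_param(owner):
        # Same query, but the input is bound as data, never as code.
        return conn.execute(
            "SELECT body FROM reports WHERE owner = ?", (owner,)
        ).fetchall()

    print(report_concat("rob' OR '1'='1"))  # leaks the whole table
    print(report_param("rob' OR '1'='1"))   # returns nothing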

Finally it dawned on me that folks latch on to this secure software
stuff as features and requirements, and anyone using waterfall gets
drowned in insecure software due to forever-pushed-back security
"features".

My experience also was that never, ever, is a Production app
deployment identical to dev regions, let alone QA stages, UAT, etc.

From a security posture: prod might be better *or* worse than the
other environments.

Yet even worse -- sometimes I'd test app A and app B for a company,
and they would both fare well when tested independently.

I'd come back a year later and the company would have bolted
them together through, say, some API or web service, and now
apps A and B were really weak when glued together. Usually this
was because the interfaces were handling I/O they were never
intended to.
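
Rough sketch of how that glue failure tends to look -- hypothetical
function names, not from any real engagement. App A's internal lookup
only ever ran behind A's own form handler, which did the validating,
so once app B's bridge calls it directly, the check it silently
depended on is just gone:

    # App A as originally shipped: the form handler validates, then
    # calls an internal helper that assumes it only sees clean input.
    def form_handler_a(raw_id):
        if not raw_id.isdigit():
            raise ValueError("bad record id")
        return internal_lookup(int(raw_id))

    def internal_lookup(record_id):
        # Assumes record_id is a small integer; this was never designed
        # to be an externally reachable interface.
        return "record #%s" % record_id

    # A year later: app B's glue code calls the helper directly with
    # whatever arrived over the wire. The boundary A relied on is gone,
    # yet a source review of either app on its own still looks fine.
    def api_bridge_b(payload):
        return internal_lookup(payload["id"])  # no validation at all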

Long and short of it -- it struck me that security is a measure
of behaviors. It is one quality of an application, like speed/
performance, but this quality is measured by the observed
behaviors, regardless of what the source code, binary, or
blueprint tells you...

Note -- I am not saying any of these things are not valuable.
There are things I'd still far rather find in source than black box,
and things binary tracing is brilliant at. I'm simply saying that
at the end of the day, the proof of the pudding is at run-time.

It's the same way we take the final measure of the quality of
commercial jets: I don't care about tensile strength exceeding
tolerance standards if the wings are falling off at run-time.

If someone compromises 1,000 customer accounts, steals
their prescription data, and tells the Zoloft folks who is
buying Prozac so they can direct-market (a real-world example):
you have a defective application.

Those behaviors are always emergent -- meaning they can
only ultimately be measured at runtime in a given environment
with a given infra and extension (like plugging app B into app
A through some wonky API).

Sometimes it's the *caching* layer that allows you to insert
control characters that lead to compromising the rest of the
application, soup to nuts.
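
Toy model of what I mean -- not the actual incident, and the cache and
header are made up. A front-end cache stores whatever bytes the app
produced; if CR/LF control characters survive into a header the app
echoes, you can end the real response early, append your own, and the
cache serves it to everyone who asks for that page afterward:

    # Toy model of a naive cache sitting in front of a well-vetted app.
    cache = {}

    def app_response(lang):
        # The app echoes a request parameter into a response header. It
        # was vetted on its own and never expected a cache in front of it.
        return ("HTTP/1.1 200 OK\r\n"
                "Content-Language: " + lang + "\r\n"
                "\r\n"
                "real page body")

    def naive_cache_store(key, lang):
        # The weak layer: it stores whatever the app produced, byte for byte.
        cache[key] = app_response(lang)

    naive_cache_store("/home", "en")  # normal use

    # Hostile use: CR/LF control characters end the legitimate headers
    # early and smuggle in an attacker-controlled body, which the cache
    # now serves to every subsequent visitor of /home.
    evil = ("en\r\n"
            "\r\n"
            "<script>owned</script>")
    naive_cache_store("/home", evil)
    print(cache["/home"])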

You won't find any hint of the caching layer in source, in binary,
in blueprint, in conversation with the devs, in dev or QA regions,
in staging and UAT, or by running your desktop or network VA
webapp scanner unauthenticated on the entry portal.

You might find it in documentation, or during threat modeling.

You will find it when you start measuring the emergent
behavior of a piece of otherwise well-vetted software
now sitting behind a weak caching layer, start observing
wonky caching/response issues, and realize those behaviors
are weak and that you can attack them....

What is "secure" software?

It is one quality of an application that can be measured
by the emergent behaviors of the software while trying to
meet and enforce its use-case in a given run-time environment.

This now fits back to the whole ideology discussion
that is sorely needed and overdue.

-- 
Arian Evans
software security stuff

