Secure Coding mailing list archives

Re: The problem is that user management doesn't demand security


From: "George W. Capehart" <gwc () acm org>
Date: Wed, 10 Dec 2003 22:25:45 +0000

On Monday 08 December 2003 10:00 am, David A. Wheeler wrote:

<snip>

Sorry for the delayed response . . . The real world intervened . . . :->


I agree in general with your rant, but I think there's an important
distinction that isn't sufficiently clear in it.  There are at least
TWO KINDS of management: software project management, and end-user
management.  And that makes all the difference.

I'll see your two kinds of management and raise you one:  corporate 
governance (and as a result, corporate risk management).

I'll also suggest that there are a couple of varieties of end-users:  
those who are company employees or members of institutions *whose use 
is (supposed to be) governed by the organization's security policies*, 
and those who aren't (like Joe Homeuser).


I would argue that the problems you noted - insufficient management
attention to risk - are serious, but are fundamentally an _END-USER_
management issue.  Managers either don't ask if the products are
sufficiently secure for their needs, or are satisfied with superficial
answers (Common Criteria evaluated is good enough;

I'm not comfortable that this is an end-user issue.  This seems to me to 
be an example of lack of governance . . . and this is *exactly* the 
kind of thing that the certification and accreditation process is for.  
The *whole* purpose of the exercise is to:

 - understand the risks that would follow from the use of a system
 - understand the assumptions that were the basis for selection of controls
 - identify the residual risks (those that are left uncontrolled)
 - and, most importantly, have the business owner of the system */sign off on accepting those risks/*

That a manager hasn't determined the security of a product or system is 
a failure of corporate risk management.  Everyone in the organization 
from that manager all the way up to and including the Board is culpable 
in that situation.

please don't ask about the evaluation EAL level, or if the tested
functions or environment are related to my needs.  And certainly don't
look at its security history, or its development process, or any other
information that might give me a better understanding of my risks.  I
just want to know if everyone else runs it too, so I can do "management
by lemming").

*wince*  I hear you.  I stand by my last comment . . . :-)


The software development managers for at least most proprietary and
custom programs are doing exactly what they're supposed to do -
they're developing products that users will buy.  The ONLY CRITERION
that matters for a typical proprietary software product is, "will you
buy it?" If the answer is "yes", then doing anything beyond that is
fiduciarily irresponsible, and potentially dangerous to
their livelihood.

But if the answer is "no, because it is insecure"?

  A security problem might even be
a good thing: that means you might pay for the upgrade that includes
a patch for the known vulnerability (without fixing fundamental
issues - else how can I charge you for the NEXT upgrade?).

Yeah, well, we know no one would do that . . . :>


Since end-user management buys the products, software project
management is doing exactly the right thing for their customers.
There have been software development projects in the past that
worked hard to develop secure products.  End-user management didn't
buy those products.  Therefore, those software development managers
were failures (in the market sense), and other software development
managers have learned that lesson very well.  Many widely-used
products are known to be security nightmares.  But people keep buying
them. Therefore, a software project manager has learned to keep
ignoring security (for the most part)... and the market rewards this
behavior.

And we're getting right back to lack of governance.  The bottom line is 
that there is *very* little to no accountability for security failures 
in corporate America.  *Twice* this year, the ATMs of one of the 
largest banks in the US were brought down by Windows viruses.  Not 
once, but twice.  The first time was by SQL Slammer, the second by 
Nachia.  Security just isn't that important . . .


I want improved security in products.  I don't like this situation
at all.  But it's hard to get more security in an
environment where there is no reward (worse, an anti-reward) for
better security.  The history of security has shown that customers
repeatedly choose the product that is LESS secure.  At least so far;
I can hope that will change.

Is it a deliberate choice or the absence of the knowledge/motivation to 
exercise a choice at all?


I think examining the economic motivations for why things are
done the way they're done now is critical.  Making it clearer to
end-users how secure a product is would help, but we have some
ways of doing that and I think it could be easily argued they aren't
helping enough.

We know many techniques that can produce more secure software.
But until there's an economic reason to apply them,
it's unwise to assume that they'll be used often.

I'm with you there.


Also, don't place too much faith in the CMM etc. for security.
The CMM by itself doesn't have much to do with security.

I agree that there's not a "have achieved perfect security level" in the 
CMM, but I would suggest that it does have a direct impact on it . . . 
but in the inverse:  The absence of process guarantees the absence of 
security.  It results in slipshod coding practices, slipshod testing, 
slipshod source code control and version control, slipshod or no change 
control processes . . . in short, criminally negligent processes and 
swiss cheese for a delivery.

A good strong SDLC management process does have a direct impact on some 
aspects of operational security.  A change control process that 
formally distinguishes among development, QA and production environments 
and which enforces static separation of duties goes a very long way in 
protecting the integrity of code once it leaves the development 
environment . . . especially when coupled with a rigorous testing 
process.

I recently did an audit on a very simple Web shopping cart application.  
There was *no* process.  *No* management.  As a result, the code, while 
formally correct and while it more or less worked, was 
awful.  There were *no* field edits . . . either in the UI or in the 
business logic.  Wide open to SQL injection, fuzz attacks, any and 
everything.  Passwords were stored in the clear in the database . . . 
and so on.  The customer had expressed concern about system security 
and had had some specific requirements.  Problem was, there was no 
documentation.  There were no formal requirements documents.  The staff 
working on the project had turned over twice and the only 
"documentation" was what little that got transferred in conversations 
among the individuals.  I'm of the opinion that rigorous process *is* 
important to security, even though there might not be a formal 
"security component" to it . . .
  
The SSE-CMM augments the CMM for security
(of resulting products).  However, beware of looking only
at the process. Other factors (like people and the resulting
products) are critically important too.  I've done many SCEs, and
reviewed the CMM and SSE-CMM, and there are many important factors they
don't cover.

Absolutely.

In particular, meeting the training requirements of the SSE-CMM has
almost nothing to do with having good people.

Why do developers fail to apply security techniques/technology?
Because that's what users reward them to do.

I'm with you, but I'd rephrase the question slightly:  Why are security 
requirements not part of the requirements that are given to the 
developers?

Because security is not a requirement.

My $0.02.

/g
-- 
George Capehart

BOFH excuse #389:
/dev/clue was linked to /dev/null





