Secure Coding mailing list archives

Re: The problem is that user management doesn't demand security


From: "Dana Epp" <dana () vulscan com>
Date: Mon, 08 Dec 2003 23:08:39 +0000

Why do developers fail to apply security techniques/technology?
Because that's what users reward them to do.

David,

I am not sure it's so easy to paint with such a broad brush. The fundamental
problem here is that the foundation of software development has been riddled
with holes when it comes to secure coding practices. I want to separate the
use of security technologies in an application from that of secure coding
principles. In this day and age it should be unacceptable for us to see the
same attack vectors coming from vulnerabilities exposed by the likes of
over/underflows, which have been around for decades. It has been commonly
accepted (and rewarded, as you have pointed out), which gets us to the
predicament that we are now in as it relates to common operating systems and
application software that are too brittle to stand up for themselves. If the
principles of secure coding had been applied as a foundation of learning
years ago, we would only now be starting to see positive results. As an
example, I am going to bet that Microsoft's code freeze and application of
secure coding principles from here on out in their own code base won't have
any real impact on our computing environments till after Longhorn has been
released. (Although I have to admit the attack surface of Windows Server
2003 is much lower than in previous versions of their products because of
the application of least privilege and least use... but that's a far cry
from secure coding.)
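
To make the over/underflow point concrete, here is a minimal C sketch of my
own (not taken from any particular product) showing the decades-old bug
class next to a bounds-checked alternative:

    #include <stdio.h>
    #include <string.h>

    /* The classic bug: anything past 15 characters (plus the NUL)
       overruns buf and scribbles on adjacent stack memory,
       including, eventually, the return address. */
    void vulnerable(const char *input)
    {
        char buf[16];
        strcpy(buf, input);            /* no bounds check */
        printf("%s\n", buf);
    }

    /* The boring fix: honour the buffer size and truncate. */
    void safer(const char *input)
    {
        char buf[16];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';   /* strncpy may not terminate */
        printf("%s\n", buf);
    }

Nothing exotic there, which is exactly the point: this class of defect has
had a known, cheap fix for as long as it has existed.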

Why is this the case? Because developers and their managers haven't known
any better. During their education they were not taught about this. And they
don't think about this because they are required to get feature XYZ in place
immediately (to get those rewards you speak of). The fact that most
developers don't even know how to perform activities like threat modelling
as part of their initial design phase (hell, most don't even write proper
functional and technical design specs) shows that this isn't getting much
better. And I know I am preaching to the choir here. Your own Secure
Programming HOWTO talks about why insecure code is written. But that doesn't
make it right.
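
For those who haven't seen the output of that design-phase exercise, here is
a hypothetical sketch (the message format, field names and limits are all
invented for illustration) of how a single threat-model finding, say
"Tampering: attacker supplies a lying length field to force an over-read",
becomes a concrete check in C:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define MAX_PAYLOAD 1024   /* limit invented for this sketch */

    /* Returns the payload length on success, -1 on a malformed
       message.  payload_out is assumed to hold MAX_PAYLOAD bytes. */
    int parse_message(const uint8_t *msg, size_t msg_len,
                      uint8_t *payload_out)
    {
        uint32_t declared;

        if (msg_len < 4)
            return -1;                 /* too short for a header */
        memcpy(&declared, msg, sizeof declared);

        if (declared > MAX_PAYLOAD || declared > msg_len - 4)
            return -1;                 /* declared length lies */

        memcpy(payload_out, msg + 4, declared);
        return (int)declared;
    }

The point is that each identified threat maps to a mitigation you can point
at in the code. Developers who never enumerate the threats never write the
checks.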

Coming back from this tangent, I want to address the idea that fundamental
risk assessment is a USER management issue. I respectfully disagree. The
risks are far DIFFERENT between the two realms, but they both have their own
associated risks to deal with. (Which is why I think threat modelling is an
important part of the security management lifecycle of any software
project.) We need to assume that the end user does not understand the risks,
and our products must therefore be even stronger to mitigate threats to the
computing platform. Not only do we need to make our products more resilient
in the face of hostile environments such as the Internet, we need to assume
we are going to be the last defence. We can no longer ship simply because it
works and a customer will pay for it... we need to ship when we know it
works and doesn't expose our customers to risk that cannot be mitigated.
From a financial analysis point of view that is extremely difficult
(especially for start-ups), as the vendor needs to make money to survive.
Yet I think it is wrong to assume that simply because end-user management
buys the products, "software project management is doing exactly the right
thing for their customers". If we know better, and still ship the product,
we are only compounding the problem further.

There is no real solution here until vendors and users start becoming
accountable for their actions. Bad implementation and sloppy administration
will always trump secure design, but that doesn't mean we shouldn't still
make it secure. I think it's irresponsible to ship products that have KNOWN
vulnerabilities without first associating that risk with appropriate
safeguards for the end user. We wouldn't accept buying an unsafe car from
someone like Ford, so why should we accept it in the field of software
development? Economics favouring the vendor is no longer a good enough
reason. Studies are showing that the cost of a bug grows significantly
through the design, development and testing stages, and becomes astronomical
once it reaches the customer. It is much too costly to both the vendor and
the customer to routinely patch weak designs in the field. We need to fix it
while still in the design and development phase. And that MUST include
secure programming as a fundamental component of design. It is far too
inefficient to bolt on a security solution after the fact... as the product
will be too brittle, and may actually be weakened by poorly hacked-together
solutions.

Why do developers fail to apply security techniques/technology?
Because they don't know any better. Their environment hasn't demanded that
they do. And we need to change that.

---
Regards,
Dana M. Epp
[Blog: http://silverstr.ufies.org/blog/]


----- Original Message ----- 
From: "David A. Wheeler" <[EMAIL PROTECTED]>
To: "George Capehart" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, December 08, 2003 7:00 AM
Subject: [SC-L] The problem is that user management doesn't demand security


George Capehart wrote:

<rant>
Developers are not the people to be making the risk management decisions
to which I *think* you are referring.  Decisions about how to manage
risks that affect the confidentiality, integrity and availability of
systems and data are *business* decisions.  The organization's risk
management process is responsible for providing the information that
the organization's leadership needs in order to make these decisions.
(Information Security) Policies and procedures and decisions about what
controls to put into place and whether to accept, react to, or prevent
threats from occurring are management (business) decisions.  This is
what the risk management process [0] and the certification and
accreditation process are all about. [1]  The outcomes of those decisions
become *requirements* for the SDLC.  The developer doesn't have any say
in that.  Management should be held accountable for lack of risk
management, but *rarely* is . . .


I agree in general with your rant, but I think there's an important
distinction that isn't sufficiently clear in it.  There are at least
TWO KINDS of management: software project management, and end-user
management.  And that makes all the difference.

I would argue that the problems you noted - insufficient management
attention to risk - are serious, but are fundamentally an _END-USER_
management issue.  Managers either don't ask if the products are
sufficiently secure for their needs, or are satisfied with superficial
answers ("Common Criteria evaluated" is good enough; please don't ask
about the evaluation EAL level, or whether the tested functions or
environment are related to my needs.  And certainly don't look at its
security history, or its development process, or any other information
that might give me a better understanding of my risks.  I just want to
know if everyone else runs it too, so I can do "management by lemming").

The software development managers for at least most proprietary and
custom programs are doing exactly what they're supposed to do - they're
developing products that users will buy.  The ONLY CRITERION that
matters for a typical proprietary software product is, "will you buy it?"
If the answer is "yes", then doing anything beyond that is
fiduciarily irresponsible, and potentially dangerous to
their livelihood.  A security problem might even be
a good thing: it means you might pay for the upgrade that includes
a patch for the known vulnerability (without fixing fundamental issues -
else how can I charge you for the NEXT upgrade?).

Since end-user management buys the products, software project management
is doing exactly the right thing for their customers.
There have been software development projects in the past that
worked hard to develop secure products.  End-user management didn't
buy those products.  Therefore, those software development managers
were failures (in the market sense), and other software development
managers have learned that lesson very well.  Many widely-used products are
known to be security nightmares.  But people keep buying them.
Therefore, a software project manager has learned to keep ignoring
security (for the most part)... and the market rewards this behavior.

I want improved security in products.  I don't like this situation
at all.  But it's hard to get more security in an
environment where there is no reward (worse, an anti-reward) for
better security.  The history of security has shown that customers
repeatedly choose the product that is LESS secure.  At least so far;
I can hope that will change.

I think examining the economic motivations for why things are
done the way they're done now is critical.  Making it clearer to
end-users how secure a product is would help, but we have some
ways of doing that and I think it could be easily argued they aren't
helping enough.

We know many techniques that can produce more secure software.
But until there's an economic reason to apply them,
it's unwise to assume that they'll be used often.

Also, don't place too much faith in the CMM etc. for security.
The CMM by itself doesn't have much to do with security.
The SSE-CMM augments the CMM for security
(of resulting products).  However, beware of looking only
at the process. Other factors (like people and the resulting products)
are critically important too.  I've done many SCEs, and reviewed the CMM
and SSE-CMM, and there are many important factors they don't cover.
In particular, meeting the training requirements of the SSE-CMM has almost
nothing to do with having good people.

Why do developers fail to apply security techniques/technology?
Because that's what users reward them to do.

--- David A. Wheeler <[EMAIL PROTECTED]>