Secure Coding mailing list archives

Re: Secure Coding


From: Chris Wysopal <weld () vulnwatch org>
Date: Thu, 12 Feb 2004 20:08:02 +0000


Greenarrow 1 left out the author and source of this commentary:

GUEST COMMENTARY
Secure software: The source of the problem is the solution
Chris Wysopal
05 Feb 2004

http://searchsecurity.techtarget.com/tip/1,289483,sid14_gci948847,00.html

I wish I could have written more, because there is so much more to
this topic, but guest commentaries are limited in length.

There is another commentary that responds to Andrew Briney's "Secure
coding? Bah!" article:

GUEST COMMENTARY
Secure coding? Absolutely!
Mary Ann Davidson, CSO, Oracle

http://searchsecurity.techtarget.com/tip/1,289483,sid14_gci948304,00.html

Cheers,

Chris

On Tue, 10 Feb 2004, Greenarrow 1 wrote:

"The security products industry has created some great defenses for
protecting technology that can be walled off from untrusted outsiders.
Firewalls, VPNs and strong authentication are mature technologies that work
well to wall off vulnerable software where possible.

But security product defenses fall short when protecting technology that
needs to be exposed to untrusted (or less-trusted) outsiders. These are
your potential customers, current customers, partners and suppliers. Web
applications and e-mail are examples of this type of software and are a
major source of security vulnerabilities.

The class of software that can't rely on network defenses needs to take care
of its own security. The source of the problem needs to be the source of the
solution: the software itself.

Currently, the software industry is creating secure software in reactive
mode. Every time you download a patch and update your computer to make it
more secure, you are downloading a correction to a piece of software your
computer runs. The timeline leading up to the correction usually goes like
this:

1. The vendor ships software with a latent security flaw.
2. A vulnerability researcher discovers the flaw through manual testing
   and reports it to the vendor.
3. A maintenance engineer at the vendor reproduces the flaw and tracks
   down the place in the source code where the original programmer made
   a coding error.
4. The engineer fixes the problem in the source code, builds a patch and
   runs a regression test suite to make sure the fix didn't break
   anything else.
5. The vendor issues the patch and notifies customers.
6. Attackers develop exploits and compromise vulnerable computers.
7. Customers download the patch, potentially run their own test suites
   and then deploy the patch on each vulnerable computer.

If there were a way to identify the problem in the source code before the
software shipped to customers, both vendors and customers would save large
expenses. A NIST study, "The Economic Impacts of Inadequate Infrastructure
for Software Testing" (2002), put the cost of fixing a bug in the field at
$30,000, versus $5,000 during coding. That study only takes into account
the vendor's cost. A much larger cost is borne by software users: the cost
of cleaning up worms, viruses and other intrusions, and of keeping systems
patched. For minor vulnerabilities, customer costs run into the millions;
for major worm outbreaks, they can range into the billions.

Luckily, we are not doomed to a costly reactive approach. There is a way to
prevent most security flaws during the original production of the software.
It's called secure coding. There are well-known classes of coding flaws that
any programmer can easily learn to identify and avoid. Most of it is just
good programming practice, such as correctly sizing buffers, checking
function return codes, and using platform security and crypto APIs
properly. Most insecure code is simply sloppy code.
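
As a concrete illustration of those practices, here is a minimal C sketch
(a hypothetical example, not part of the original commentary) contrasting
a classic unsafe copy with a correctly sized copy whose return code is
checked:

    #include <stdio.h>
    #include <string.h>

    /* Sloppy code: no bounds check, so any name longer than 15 bytes
     * overflows buf -- the classic latent security flaw. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);          /* buffer overflow waiting to happen */
        printf("Hello, %s\n", buf);
    }

    /* Secure coding: size the copy to the buffer and check the return
     * code, rejecting oversized input instead of corrupting memory. */
    int greet_safe(const char *name) {
        char buf[16];
        int n = snprintf(buf, sizeof buf, "%s", name);
        if (n < 0 || (size_t)n >= sizeof buf)
            return -1;              /* truncation or error: tell the caller */
        printf("Hello, %s\n", buf);
        return 0;
    }

    int main(void) {
        if (greet_safe("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA") != 0)
            fprintf(stderr, "input too long, rejected\n");
        return 0;
    }

The same habit applies to the other practices mentioned: check what your
platform security and crypto calls return, and treat an unexpected result
as a failure rather than pressing on.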

Software customers can save time and money by demanding that their vendors
fix flaws up front with secure coding, rather than subjecting them to costly
and seemingly endless worm and virus remediation and patching regimens."


Regards,

Greenarrow1
InNetInvestigations-Forensics
