Secure Coding mailing list archives

FW: How Can You Tell It Is Written Securely?


From: herman.stevens at astyran.be (Herman Stevens)
Date: Mon, 1 Dec 2008 19:54:51 +0100

Hello Marcin,

I agree with your statement that many companies have some requirements in their SLAs with outsourced development 
firms. However, if these are really F-100 businesses, they usually have all non-core processes outsourced (because a 
Big4 firm told them that would reduce costs), and the relationship management with the outsourced companies is also 
outsourced (probably to the same Big4). This means that after a few years all knowledge has left the company, and if a 
Request For Proposal needs to be written (e.g. for a new application supporting their core business functions), this is 
outsourced again to the same Big4, since the company itself no longer has the knowledge required to write its own 
RFPs ...

I really doubt that anything that goes in that RFP (and ultimately in the contracts) will have any _real_ security 
value. 

Penetration tests and vulnerability assessments might be part of the acceptance process, but I would not call these 
tests _security_ requirements. They are acceptance requirements ...

The original request asked how someone can determine whether an application is written in a secure manner. My reasoning 
is that this is the wrong question (the application must _be_ secure, and for that there is no direct link with coding 
practices). And even if one can prove the application is written in a secure manner, that will not be enough for it to 
be secure (e.g. about 99% of all security-relevant features are nowadays in the configuration; the customer will never 
issue a change request for a new Java library or JavaScript library, yet in many of my penetration tests I 'break' the 
application because of old libs, ...).
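
As an illustration only, here is a minimal sketch of the kind of stale-library check I mean, i.e. flagging deployed 
jars that were never upgraded. The library names, version cut-offs, and the naive lexicographic version comparison are 
all placeholders, not real advisory data:

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    public class StaleLibCheck {

        public static void main(String[] args) {
            // Hypothetical "known vulnerable up to and including" versions;
            // a real check would use an advisory feed, not a hard-coded map.
            Map<String, String> vulnerableUpTo = new HashMap<String, String>();
            vulnerableUpTo.put("commons-fileupload", "1.2");
            vulnerableUpTo.put("prototype", "1.6.0");

            File libDir = new File(args.length > 0 ? args[0] : "WEB-INF/lib");
            File[] jars = libDir.listFiles();
            if (jars == null) {
                System.err.println("No such directory: " + libDir);
                return;
            }
            for (File jar : jars) {
                String name = jar.getName();
                if (!name.endsWith(".jar")) {
                    continue;
                }
                // Expect file names like "commons-fileupload-1.2.jar"
                String base = name.substring(0, name.length() - ".jar".length());
                int split = base.lastIndexOf('-');
                if (split < 0) {
                    continue;
                }
                String artifact = base.substring(0, split);
                String version = base.substring(split + 1);
                String bad = vulnerableUpTo.get(artifact);
                // Lexicographic comparison is a simplification; a real check
                // needs a proper version comparator.
                if (bad != null && version.compareTo(bad) <= 0) {
                    System.out.println("Stale library: " + name
                            + " (known issues up to " + bad + ")");
                }
            }
        }
    }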

I do not think that penetration tests and vulnerability assessments are 'proof' that an application is written 
securely. I've seen many applications that were written horrendously but were very secure (in the sense that they 
abided by all security-relevant business requirements), and I have seen many applications written using coding 'best 
practices' and developed with very mature processes that could be hacked in minutes.

So, are there any studies that prove that a company that performs some tests (e.g. pen-tests) or includes security 
requirements in its contracts is ultimately better off than a company that does not do what we consider 'best 
practices'? And if we don't have that proof, shouldn't we be very prudent in what we advise our customers? 

Please note that my company sells security-related software and performs vulnerability assessments, so I'm not saying 
that these are useless (:)), but maybe there are better methods than penetrate & patch or enforcing very heavy 
processes on innocent development teams... So, this is my question to this list: Are we on the right track? Is 
application security really improving? Do we measure the correct things, and in the correct way? My point of view is 
that only certain vulnerabilities are less common than in the early days, and only because of more mature frameworks, 
not because of better processes or after-the-fact testing. Does this mean all efforts were in vain? Or did the threat 
landscape change? And yes, there are many vendor-driven statistics floating around, but they really cannot be 
considered unbiased ... Lots of questions, maybe not all relevant for the Secure Coding list, but Secure Coding should 
have a final objective. Or not?

Herman 
herman.stevens at astyran.be
-----Original Message-----
From: Marcin Wielgoszewski [mailto:marcinw86 at gmail.com] 
Sent: Monday, 1 December 2008 17:06
To: Herman Stevens
Cc: SC-L at securecoding.org
Subject: Re: [SC-L] FW: How Can You Tell It Is Written Securely?

Steven,

There are more than a few managers of application security programs
at F-100 companies who have written security requirements into their
SLAs with outsourced development firms.  One example uses application
penetration testing and vulnerability assessment findings to enforce
SLA requirements.  Some companies employ an entire team of people to
perform both whitebox and blackbox testing in addition to
external/third-party assessments.

And as you later state, security requirements should be written into
the functional requirements, not handed off into their own category or
relegated to some appendix document.

-Marcin
tssci-security.com



