Secure Coding mailing list archives

[WEB SECURITY] Re: What do you like better Web penetration testing or static code analysis?


From: arian.evans at anachronic.com (Arian J. Evans)
Date: Tue, 27 Apr 2010 14:02:22 -0700

So - just to make sure I understand - you are saying that you don't
actually perform all of these activities for clients to help them
secure their web software today?

I think that will be a relief to some on the list. I know a few people
who called me, concerned that they were never going to have time to
sleep again with all of that to do!

Overall, you do make some interesting points. I definitely agree with
your assertion that automation alone has significant limitations.

This is definitely the right forum to bounce around your ideas: what
types of "security/secure/coding/analysis" activities may work, which
ones we might want to try out, and what the best books to read are, all
to help us figure out how to secure the bazillions of web applications
that exist today.

Ciao,

---
Arian Evans



On Tue, Apr 27, 2010 at 12:52 PM, Andre Gironda <andreg at gmail.com> wrote:
On Tue, Apr 27, 2010 at 11:52 AM, Arian J. Evans
<arian.evans at anachronic.com> wrote:
So to be clear -

You are saying that you do all of the below when you are analyzing
hundreds to thousands of websites to help your customers identify
weaknesses that hackers could exploit?

How do you find the time?

Not me personally, but the industry as a whole does provide most of
these types of coverage. Everyone sees it differently, and probably
everybody is right.

What I do find wrong is the assumption that you can automate
everything -- especially risk and vulnerability decisions: is this a
vuln? Is this a risk?

What I also find wrong is that the tools that attempt to automate
finding vulnerabilities and assigning risk (but can't deliver on
either) cost $60k per assessor for a code scan or $2k per app for a
runtime scan.

A "team" (doesn't have to be security people, but should probably
include at least one) should instead use a free tool such as
Netsparker Community Edition, crawl a target app through a few proxies
(a few crawl runs) such as Burp Suite Professional, Casaba Watcher,
and Google Ratproxy -- do a few other things such as track actions a
browser would take (in XPath expression format) and plot a graph of
dynamic-page/HTML/CSS/JS/SWF/form/parameter/etc objects (to show the
complexity of the target app) -- and provide a data corpus (not just a
database or list of findings) to allow the reviewers to make more
informed decisions about what has been testing, what should be tested,
and what will provide the most value. Combine the results with
FindBugs, CAT.NET, VS PREfix/PREfast, cppcheck, or other static
analysis-like reports in order to generate more value in making
informed decisions. Perhaps cross-correlate and maps URLS to source
code.
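
To make the graph step concrete, here is a rough Python sketch of one
way it could look. Everything in it is hypothetical: the record shape,
the page names, and the function are illustrative, not the output
format of any of the tools named above.

    # Rough sketch: turn crawl output into a per-page object graph.
    # Assumes each crawl record is (page, object_type, identifier),
    # e.g. ("/login", "form", "//form[@id='signin']") -- the XPath
    # strings are whatever the proxy logged for the browser actions.
    from collections import Counter, defaultdict

    def build_object_graph(crawl_records):
        """Group discovered objects by page and tally object types."""
        graph = defaultdict(set)   # page -> set of (type, identifier)
        totals = Counter()         # object type -> app-wide count
        for page, obj_type, identifier in crawl_records:
            graph[page].add((obj_type, identifier))
            totals[obj_type] += 1
        return graph, totals

    records = [
        ("/login",  "form",      "//form[@id='signin']"),
        ("/login",  "parameter", "username"),
        ("/login",  "parameter", "password"),
        ("/search", "form",      "//form[@action='/search']"),
        ("/search", "js",        "search-suggest.js"),
    ]
    graph, totals = build_object_graph(records)
    for page, objects in sorted(graph.items()):
        print(page, "->", len(objects), "objects")
    print("totals:", dict(totals))

Plotted per page, those counts give the quick picture of target-app
complexity described above, and the identifiers stay attached so a
reviewer can go back to the exact object later.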

I call this "Peripheral Security Testing". Then, if time allows (or
the risk to the target app is seen as great enough), add in a
threat-model and allow the "team" to perform penetration-testing based
on those informed decisions. I call this latter part, "Adversarial
Security Testing".

Does manual testing take more time, or does it instead find the right
things and allow the "team" to make almost fully informed decisions,
thus saving time? I will leave that as a straw man argument that you
can all debate.

dre


