Dailydave mailing list archives

Re: [fuzzing] Coverage and a recent paper by L. Suto


From: "Andre Gironda" <andreg () gmail com>
Date: Mon, 29 Oct 2007 15:24:36 -0400

On 10/28/07, Alexander Sotirov <alex () sotirov net> wrote:
> The only way to compare static analysis tools is to use a large sample set of
> real vulnerabilities and measure the false positive and false negative rates.
> Everything else is a waste of time.

While I agree that binary classifiers (measured by sensitivity and
specificity) are the best way to compare static analysis tools (*),
I can think of at least one other method for measuring improvement
over time.
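
To make the measurement concrete, here is a rough sketch with made-up
numbers (not real data): run each tool against a labeled sample set,
count hits and misses, and the rates fall out directly.

    # Sketch only: hypothetical counts for one tool against a labeled
    # sample set of real vulnerabilities (numbers are invented).
    tp, fp = 40, 15     # correctly flagged vulns / false alarms
    fn, tn = 10, 935    # missed vulns / correctly unflagged code

    sensitivity = tp / float(tp + fn)        # true positive rate = 0.80
    specificity = tn / float(tn + fp)        # true negative rate ~ 0.98
    false_negative_rate = 1.0 - sensitivity  # what the tool misses
    false_positive_rate = 1.0 - specificity  # noise the analyst wades through

    print("sensitivity=%.3f specificity=%.3f" % (sensitivity, specificity))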

Ryan Gerard and Ramya Venkataramu (both originally from Symantec)
spoke about using community-based reputation systems for measuring
quality aspects of people, tests, testing tools, and even components
of software.  Ryan mentioned attending the GTAC conference on his blog
here - http://searchforquality.blogspot.com/2007/08/gtac.html - and you
can read more about it in their presentation
(http://ltac.googlegroups.com/web/2007-08-23_Ryan_Ramya.pdf) or see it
on YouTube at http://www.youtube.com/watch?v=YCatiB8d100

(*) or fuzzers, web application security scanners, etc.

On 10/29/07, J.M. Seitz <lists () bughunter ca> wrote:
> I agree wholeheartedly. However, I do think that you can get your automation
> inside a QA cycle to the point where you are deep-diving and finding the
> not-so-low-hanging fruit. This does require skill, budget, and time to
> achieve. Internally, I have used some tools under trial as "first pass"
> scanners, to see what they found, and to be honest I wasn't overly
> impressed. In the automation cycles I have helped develop, you first test to
> the spec of the software (does it do what it's supposed to) and secondly you
> test the corner cases (robustness). From personal experience, this has
> worked wonders and has measurably reduced the number of bugs shipped.

Developers can also improve automation through continuous integration,
automated static code analysis security scanners, automated
model-checkers, and build-integrated system tests (e.g. using Canoo
WebTest, Jameleon, staf.sf.net, etc.).
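
To make the "build-integrated" part concrete, here is a rough sketch
(parse_record is a hypothetical function under test, and this is plain
unittest rather than any particular framework) of a spec-plus-robustness
test that could run on every build:

    import unittest
    # parse_record is a hypothetical function under test, not a real API.
    from myapp import parse_record

    class RecordParserTests(unittest.TestCase):
        def test_spec(self):
            # First pass: does it do what it's supposed to?
            self.assertEqual(parse_record("id=1;name=bob"),
                             {"id": "1", "name": "bob"})

        def test_corner_cases(self):
            # Second pass: robustness - malformed input must be rejected
            # cleanly; any other exception (or a crash/hang) fails the build.
            for bad in ["", ";;;", "id=" + "A" * 65536, "\x00\xff="]:
                try:
                    parse_record(bad)
                except ValueError:
                    pass  # a documented, clean rejection is acceptable

    if __name__ == "__main__":
        unittest.main()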

Design and code review are difficult to automate, but we can "automate"
the process through workflow tools.  Blueinfy has some free tools that
help automate aspects of code review, and Atlassian has a product
called Crucible that improves code review workflow.  Atlassian and
ThoughtWorks/OpenQA are great resources for improving continuous
integration and developer testing.

> I also think that we are going to start seeing some integration consulting,
> where software dev firms are going to begin hiring out not only for
> assessments of their products, but for integrators and tool builders who can
> develop highly specific tools (fuzzers, scanners, etc.) that will integrate
> directly into their QA/automation cycles. For the shops I have done this
> for, I can tell you that there is no way that an out-of-the-box solution
> would have found as many bugs as a custom-built one.

Speaking directly to fuzzers in the dev/QA automation cycle: I think
there is real benefit here, though demonstrating the cost benefit is
often difficult.  What I'd like to see is tools such as Microsoft
FuzzGuru, CUTE, and Compuware DevPartner SecurityChecker integrated
into as many builds as possible.  CUTE is open-source and works for
Java and C, so this seems like a no-brainer.  More advanced
operations/maintenance security testing can be done using EFS and/or
WI/AppScan/Hailstorm against areas of weak code coverage (or areas of
higher complexity) before adversaries do the same.  I wouldn't say
these solutions are "custom-built" in the same way as complete protocol
dissection, reverse engineering, or other generated test scenarios, but
each of these tools does require some "configuration", usually by an
expert.
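
To give a feel for what that configuration amounts to (a minimal
sketch, nothing like what FuzzGuru or CUTE actually do internally):
even a dumb mutation fuzzer, pointed by an expert at a weakly covered
parser entry point and run from a nightly build, looks roughly like
this:

    import random

    def mutate(seed, flips=8):
        # Flip a handful of random bytes in a known-good input.
        data = bytearray(seed)
        for _ in range(flips):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    def fuzz(target, seed, iterations=10000):
        # 'target' is a hypothetical entry point an expert has flagged
        # as weakly covered; log every input that triggers a fault.
        failures = []
        for _ in range(iterations):
            sample = mutate(seed)
            try:
                target(sample)
            except Exception as e:
                failures.append((sample, e))
        return failures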

Unfortunately, the largest problem facing the above strategies is that
the amount of code is growing faster than the pool of qualified
experts.  Make sure to check out Felix Lindner (FX)'s talk from
HITB 2007, available here -
http://conference.hackinthebox.org/hitbsecconf2007kl/materials/D2T1%20-%20Felix%20Lindner%20-%20%20%09%20Attack%20Surface%20of%20Modern%20Applications.pdf

Cheers,
Andre
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

