Dailydave mailing list archives

Re: Static Analysis part 5


From: Andy Steingruebl <steingra () gmail com>
Date: Wed, 29 Jul 2009 10:54:04 -0700

On Tue, Jul 28, 2009 at 7:07 AM, Rafal M. Los<rafal () ishackingyou com> wrote:


   I think it's even more complex than that.  The whole problem with static
analysis is the impact of the false positive.  We take for granted that
static analysis generally produces piles of false positives (either you're
too strict or too loose... either way is bad), and the damage goes deeper
than just making that particular set of results questionable.  InfoSec
generally has a hard enough time convincing developers (this is from
experience, not anything else) that they have issues... but then dropping a
300-page report containing thousands of "possible" issues (whether you use
a probability scale or not) degrades even further the *reputation* that the
InfoSec folks have with the rest of the development organization.  Vetting
false positives often requires a developer (or at least a compiled app to
test with), meaning that there is a much deeper involvement to "get at the
truth".

Isn't this the problem with a lot of security or general testing?
When you do a whole bunch of fuzzing and get a whole lot of program
crashes, don't you still have to go and figure out whether they are
exploitable?  And don't you still end up missing things because
your data isn't making it all the way to the vulnerable code?  Witness
Michael Howard's explanation of why they didn't catch the ATL issues -
their fuzzer wasn't specific enough to get to that code path.  I don't
see a lot of the results of static analysis being that different.

The difference is that some people seem to expect or believe that
static analysis tells you more than it does.
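
The ATL case is the classic illustration of the coverage problem, so here's
a toy C sketch of "the data never reaches the vulnerable code".  The format,
names, and magic value are all invented -- this is not the actual ATL bug,
just the shape of the problem.

#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define MAGIC 0x4C544141u   /* invented magic value */

struct header {
    uint32_t magic;
    uint8_t  type;
    uint8_t  len;           /* attacker-controlled length */
};

/* The overflow sits behind a magic check and a record-type check, so a
 * blind fuzzer that doesn't satisfy both almost never reaches it, and
 * the crash never shows up in the run. */
void parse_record(const uint8_t *buf, size_t buflen)
{
    char name[16];
    struct header h;

    if (buflen < sizeof(h))
        return;
    memcpy(&h, buf, sizeof(h));

    if (h.magic != MAGIC)    /* random input fails here almost every time */
        return;
    if (h.type != 7)         /* ...and here */
        return;

    /* h.len can exceed sizeof(name), but a fuzzer only finds the crash
     * if it first guesses the magic and the type. */
    memcpy(name, buf + sizeof(h), h.len);
    (void)name;
}

int main(void)
{
    const uint8_t benign[8] = { 0 };   /* fails the magic check, returns early */
    parse_record(benign, sizeof(benign));
    return 0;
}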

 Then there's the issue of customer-supplied sanitization and validation
routines.  Tainted data is one thing, but how does a static analysis engine
determine whether data that passes through Method_X(), for example, has
received the right level of sanitization or validation?  You simply can't
account for all the creative ways that developers can come up with to scrub
variables, and it gets worse...
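
(Concretely, the kind of Method_X() routine being described might look like
this toy C version -- the function names and the gap in the filter are
invented for illustration.  A taint engine that doesn't model scrub() keeps
flagging every use of the value; one that is told scrub() sanitizes misses
the gap.)

#include <stdio.h>
#include <stdlib.h>

/* Toy home-grown sanitizer: strips ';' and '&', which looks like
 * command-injection scrubbing to a human skimming the code. */
static void scrub(char *s)
{
    char *dst = s;
    for (char *src = s; *src; src++)
        if (*src != ';' && *src != '&')
            *dst++ = *src;
    *dst = '\0';
}

void archive_user_file(const char *untrusted_name)
{
    char name[128];
    char cmd[256];

    snprintf(name, sizeof(name), "%s", untrusted_name);
    scrub(name);   /* ...but backticks, $(...) and '|' pass straight through */

    snprintf(cmd, sizeof(cmd), "tar czf backup.tgz %s", name);
    system(cmd);   /* tainted sink: the whole question is whether scrub()
                      really made `name` safe, and the tool can't tell */
}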

I don't think anyone claims that static analysis actually does this
perfectly.  Neither does code review or dynamic testing.  Dynamic
testing tells you whether the filtering or sanitization worked for the
inputs you chose, not that it works for all inputs, right?
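
(A toy C example of that gap -- the filter and the test inputs are invented.
Everything the tester chose behaves as expected, so the checks pass, and
they still say nothing about the inputs nobody picked.)

#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Invented filter: rejects exactly the characters the tester thought of. */
static bool looks_safe(const char *s)
{
    return strpbrk(s, ";&|") == NULL;
}

int main(void)
{
    /* Dynamic testing with the inputs we chose: all green. */
    assert(looks_safe("report.txt"));
    assert(!looks_safe("report.txt; rm -rf /"));
    assert(!looks_safe("a & b"));

    /* Nothing here exercises `...` or $(...), so the filter's gap for
     * those inputs is never observed by the test run. */
    return 0;
}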

- Andy
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

