Secure Coding mailing list archives

Re: Comparing Scanning Tools


From: brian at fortifysoftware.com (Brian Chess)
Date: Thu, 08 Jun 2006 23:28:50 -0700

Hi Jerry, as one of the creators of the tool you evaluated, I have to admit
I have the urge to comment on your message one line at a time and explain
each way in which the presentation you attended did not adequately explain
what Fortify does or how we do it.  But I don't think the rest of the people
on this list would find that to be a very interesting posting, so instead
I'm going to try to stick to general comments about a few of the subjects
you brought up.


False positives:
Nobody likes dealing with a pile of false positives, and we work hard to
reduce false positives without giving up potentially exploitable
vulnerabilities.

In some sense, this is where security tools get the raw end of the deal.  If
you're performing static analysis in order to find general quality problems,
you can get away with dropping a potential issue on the floor as soon as you
get a hint that your analysis might be off.  You can't do that if you are
really focused on security.  To make matters worse for security tools, when
a quality-focused tool can detect just some small subset of a security
issue, its creator labels it a "quality and security" tool.  Ugh.  This
rarely flies with a security team, but sometimes it works on non-security
folks.
 
Compounding the problem is that, when the static analysis tool does point
you at an exploitable vulnerability, it's often not a very memorable
occasion.  It's just a little goof-up in the code, and often the problem is
obvious once the tool points it out.  So you fix it, and life goes on.  If
you aren't acutely aware of how problematic those little goof-ups can be
once some "researcher" announces one of them, it can almost seem like a
non-event.  All of this can make the hour you spent going through reams of
uninteresting results seem more important than the 5 minutes you spent
solving what could have become a major problem, even though exactly the
opposite is true.
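
For what it's worth, the kind of little goof-up I mean usually looks something
like this.  It's a made-up Java fragment, not from any particular codebase,
and the class and method names are invented for the example:

import java.sql.*;
import javax.servlet.http.HttpServletRequest;

class AccountLookup {
    // Before: user-controlled input concatenated straight into the query.
    // A taint-tracking tool flags this as SQL injection.
    ResultSet findUnsafe(Connection conn, HttpServletRequest req)
            throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
            "SELECT * FROM accounts WHERE owner = '"
            + req.getParameter("user") + "'");
    }

    // After: the five-minute fix is a parameterized query.
    ResultSet findSafe(Connection conn, HttpServletRequest req)
            throws SQLException {
        PreparedStatement ps =
            conn.prepareStatement("SELECT * FROM accounts WHERE owner = ?");
        ps.setString(1, req.getParameter("user"));
        return ps.executeQuery();
    }
}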


Suppression:
A suppression system that relies on line numbers wouldn't work very well.
When it comes to suppression, the biggest choice you've got to make is
whether or not you're going to rely on code annotation.  Code annotation can
work well if you're reviewing your own code, but if you're reviewing someone
else's code and you can't just go adding annotation goo wherever you like,
you can't use it, at least not exclusively.
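
By annotation I mean something in the spirit of Java's built-in
@SuppressWarnings: a marker the tool can read straight out of the source.  The
@SuppressSecurityFinding annotation below is invented purely for illustration,
not the syntax of any particular product:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Invented marker an analyzer could honor when it revisits this code.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
@interface SuppressSecurityFinding {
    String category();
    String reason();
}

class ReportWriter {
    private final java.io.File baseDir = new java.io.File("/var/reports");

    // The reviewer's decision travels with the code, not with a line number.
    @SuppressSecurityFinding(category = "Path Manipulation",
                             reason = "name is validated by the only caller")
    java.io.File open(String name) {
        return new java.io.File(baseDir, name);
    }
}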

Instead, the suppression system needs to be able to match up the salient
features of the suppressed issue against the code it is now evaluating.
Salient features should include factors like the names of variables and
functions, the path or paths required to activate the problem, etc.
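
To make "salient features" a little more concrete, here is a simplified sketch
of what a fingerprint might carry instead of a bare file/line pair.  The class
and field names are mine, invented for the example, not a description of how
any particular product stores its data:

import java.util.List;

// Identify a suppressed issue by what it is and where it flows, not by the
// line it happened to be reported on.
class IssueFingerprint {
    final String category;        // e.g. "SQL Injection"
    final String sinkFunction;    // function containing the dangerous call
    final String taintedVariable; // variable carrying the untrusted data
    final List<String> callPath;  // functions on the path that activates it

    IssueFingerprint(String category, String sinkFunction,
                     String taintedVariable, List<String> callPath) {
        this.category = category;
        this.sinkFunction = sinkFunction;
        this.taintedVariable = taintedVariable;
        this.callPath = callPath;
    }

    // A freshly reported issue matches a stored suppression when the salient
    // features line up, even if the code has shifted by dozens of lines.
    boolean matches(IssueFingerprint reported) {
        return category.equals(reported.category)
            && sinkFunction.equals(reported.sinkFunction)
            && taintedVariable.equals(reported.taintedVariable)
            && callPath.equals(reported.callPath);
    }
}

A real matcher would be fuzzier than straight equality, but the point stands:
none of those fields move when somebody adds a blank line at the top of the
file.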


Customization:
Of course the more knowledge you provide the tool, the better a job it can
do at telling you things you'd like to know.  But in the great majority of
cases I've seen, little or no customization is required to derive benefit
from any of the commercial static analysis tools.

In the most successful static analysis deployments, the customization
process never ends--people keep coming up with additional properties they'd
like to check.  The static analysis tool becomes a way to share standards
and best practices.
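
To give a flavor of the properties teams add, here is a toy rule that reports
every direct call to Runtime.exec(), on the theory that the team wants all
command execution routed through a vetted wrapper.  The Call class, the rule
class, and the CommandWrapper it mentions are all invented for this sketch;
they stand in for whatever interface a real tool exposes to custom rules:

import java.util.ArrayList;
import java.util.List;

// Toy team-specific rule: report every direct call to Runtime.exec().
class BannedCallRule {
    // Stand-in for whatever call-site view a real tool provides.
    static class Call {
        final String callee;  // fully qualified name of the called method
        final String file;
        final int line;
        Call(String callee, String file, int line) {
            this.callee = callee;
            this.file = file;
            this.line = line;
        }
    }

    List<String> check(List<Call> calls) {
        List<String> findings = new ArrayList<String>();
        for (Call c : calls) {
            if (c.callee.equals("java.lang.Runtime.exec")) {
                findings.add(c.file + ":" + c.line
                    + ": call CommandWrapper instead of Runtime.exec()");
            }
        }
        return findings;
    }
}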

Regards,
Brian




