Secure Coding mailing list archives

SATE


From: ceng at veracode.com (Chris Eng)
Date: Fri, 28 May 2010 18:45:56 -0400

That should have been "made an identical post".


-----Original Message-----
From: Chris Eng 
Sent: Friday, May 28, 2010 6:51 PM
To: 'Jim Manico'; SC-L at securecoding.org
Subject: RE: [SC-L] SATE

Jim,

You made an identical to the WASC list, and Vadim Okun from NIST posted a detailed reply addressing many of your 
statements/accusations.  What is the point of cross-posting the same thing here, without at least incorporating his 
feedback into the discussion?

-chris


-----Original Message-----
From: sc-l-bounces at securecoding.org [mailto:sc-l-bounces at securecoding.org] On Behalf Of Jim Manico
Sent: Thursday, May 27, 2010 5:34 PM
To: SC-L at securecoding.org
Subject: [SC-L] SATE

I feel that NIST made a few errors in the first two SATE studies.

After the second round of SATE, the results were never fully released to 
the public - even though NIST agreed to do exactly that at the inception of 
the contest. I do not understand why SATE censored the final results - I 
feel such censorship hurts the industry.

And even worse, I felt that vendor pressure encouraged NIST not to 
release the final results. If the results (the real deep data, not the 
executive summary that NIST released) had been favorable to the tool 
vendors, I bet they would have welcomed the release of the real data. But 
instead, vendor pressure caused NIST to block the release of the final 
data set.

The problems that the data would have revealed are:

1) the false positive rates from these tools are overwhelming
2) the workload to triage the results from even ONE of these tools ran to man-years (see the rough arithmetic below)
3) by every possible measurement, manual review was more cost-effective
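
To see how triage reaches man-years, here is a back-of-the-envelope 
calculation - the numbers are hypothetical, chosen only for illustration, 
not taken from the SATE data:

    50,000 raw findings from one tool
    x 10 minutes of analyst time to triage each finding
    = 500,000 minutes ~= 8,333 hours ~= 4 person-years (at ~2,000 hours/year)

Even cutting both numbers in half still leaves a year of full-time triage 
work for a single tool's output.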

Even worse were the methods behind the process of this "study". For 
example, all of the Java apps in this "study" contained poor hash 
implementations. But because none of the tools could detect this, 
that "finding" was completely ignored. The coverage was limited ONLY to 
the injection and data flow problems that tools have a chance of finding. In 
fact, the NIST team chose only a small percentage of the automated 
findings to review, since reviewing everything would have taken years 
due to the massive number of false positives. Get the problem here?
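
For anyone unfamiliar with the kind of flaw I mean, here is a minimal 
sketch - this code is hypothetical, not taken from the SATE test 
applications - of a weak password-hashing routine that a human reviewer 
flags immediately but that injection- and dataflow-oriented tools 
typically ignore:

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    public class WeakPasswordHash {
        // Weak: MD5 is broken and fast to brute-force, and no salt is
        // used, so identical passwords produce identical hashes.
        static String hashPassword(String password)
                throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, digest).toString(16);
        }
    }

A manual reviewer would insist on a salted, deliberately slow 
construction such as PBKDF2 instead. Nothing here is an injection or 
dataflow defect, which is exactly why it fell outside the study's coverage.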

I'm discouraged by SATE. I hope some of these problems are addressed in 
the third study.

- Jim
_______________________________________________
Secure Coding mailing list (SC-L) SC-L at securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
_______________________________________________


