Secure Coding mailing list archives

SATE


From: jim at manico.net (Jim Manico)
Date: Thu, 27 May 2010 14:33:35 -0700

I feel that NIST made a few errors in the first two SATE studies.

After the second round of SATE, the results were never fully released to 
the public - even though NIST agreed to do just that at the inception of 
the contest. I do not understand why SATE censored the final results - I 
feel such censorship hurts the industry.

And even worse, I felt that vendor pressure encouraged NIST not to 
release the final results. If the results (the real deep data, not the 
executive summary that NIST released) were favorable to the tool vendors, 
I bet they would have welcomed the release of the real data. But 
instead, vendor pressure caused NIST to block the release of the final 
data set.

The problems that the data would have revealed are:

1) false positive rates from these tools are overwhelming
2) the workload to triage the results from just ONE of these tools ran into man-years
3) by every possible measurement, manual review was more cost-effective

Even worse was the methodology behind this "study". For 
example, all of the Java apps in this "study" contained poor hash 
implementations. But because none of the tools could see this, 
that "finding" was completely ignored. The coverage was limited ONLY to 
injection and data-flow problems that tools have a chance of finding. In 
fact, the NIST team chose only a small percentage of the automated 
findings to review, since it would have taken years to review everything 
due to the massive number of false positives. Get the problem here?
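
To give a rough idea of what I mean by a poor hash implementation (this 
is just an illustrative sketch I wrote, not code from the actual SATE 
test applications), something like the following unsalted MD5 password 
hash is the kind of flaw a manual reviewer spots in seconds, while a 
dataflow-oriented scanner often reports nothing because there is no 
tainted input flowing anywhere:

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Illustrative sketch only -- not taken from the SATE test apps.
    // A "poor hash implementation": passwords hashed with unsalted MD5.
    public class WeakPasswordHash {

        // MD5 with no salt and no iteration count -- trivially
        // crackable with rainbow tables or brute force.
        static String hashPassword(String password) throws NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(password.getBytes());
            return String.format("%032x", new BigInteger(1, digest));
        }

        public static void main(String[] args) throws Exception {
            System.out.println(hashPassword("s3cret"));
        }
    }

No injection, no tainted data flow - so it falls outside what the tools 
were being graded on, and outside what the study counted as a finding.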

I'm discouraged by SATE. I hope some of these problems are addressed in 
the third study.

- Jim

