funsec mailing list archives

RE: Consumer Reports Slammed for Creating 'Test' Viruses


From: Larry Seltzer <Larry () larryseltzer com>
Date: Wed, 23 Aug 2006 06:41:18 -0400

http://www.eweek.com/article2/0,1759,2005814,00.asp

(Hi there, I wrote the column. Please click on the ads in it.)

The article seems to suggest that the people who disapprove of
creating test viruses are the same people who admire exploit
development. 

I don't think I made that argument specifically; I argued that exploit
development is widespread and relatively uncontroversial, at least when
done by "responsible" organizations like eEye. Many of these exploits
become public quickly. CR was criticized because viruses (<yawn!> and,
yes, variants of viruses) that they created for internal testing might
somehow magically jump outside their storage to the real world. So the
point was that concern for the artificial viruses is exaggerated,
especially when compared to real-world dangers.

I don't get the impression that 5,500 viruses were created. What does
seem to have been generated is 5,500 instances of presumed viruses from
one or more kits. That immediately casts doubt on the competence of
the test. (There are plenty of other reasons for doubting it, but I've
already gone over them elsewhere.)

We really don't have the data, do we? That is a problem, although how
valid the testing is depends somewhat on how well the products did in
detecting the variants. This whole conversation is somewhat hypothetical
on that account. Assume they actually created 5,500 viruses.

-One- person suggested retrospective testing?? It's hardly a new
suggestion. There have already been several competent tests using that
method. Rob and I wrote about it in VR in 2000, and I doubt if we
invented it.
I've already admitted to being warmer to it now than I was then. ;-)

Obviously it wasn't new to me either, since I then immediately wrote
about how I had done it myself in the past.

you can't assume that with a test using new viruses - or non-viruses -
either. You can only say that on such a date a product detected a
certain proportion of your viruses. That may not actually tell you
-anything- about its performance against malware you didn't write. At
least with retrospective testing, you can hypothesise that some future
malware will be along the same lines as what already exists (that, after
all, is the assumption behind heuristic analysis!)

I agree entirely and basically said so in the column: there are
trade-offs involved. 

Larry Seltzer
eWEEK.com Security Center Editor
http://security.eweek.com/
http://blog.eweek.com/blogs/larry%5Fseltzer/
Contributing Editor, PC Magazine
larryseltzer () ziffdavis com 

_______________________________________________
Fun and Misc security discussion for OT posts.
https://linuxbox.org/cgi-bin/mailman/listinfo/funsec
Note: funsec is a public and open mailing list.
