funsec mailing list archives

RE: Consumer Reports Slammed for Creating 'Test' Viruses


From: "David Harley" <david.a.harley () gmail com>
Date: Wed, 23 Aug 2006 11:08:17 +0100

http://www.eweek.com/article2/0,1759,2005814,00.asp

Hmm.

* Actually, I always assume that a body that screws up an AV test
is also unreliable on everything else unless proved otherwise.
* The article seems to suggest that the people who disapprove
of creating test viruses are the same people who admire
exploit development. I don't believe this is so. It does seem
to be the case that it's mostly AV researchers who disapprove
of virus creation. You could argue that that's because it exposes
weaknesses in the technology, but some of us who aren't selling
that technology are more concerned about the misleading results from 
weak methodologies. There is also a question as to whether a tester who
doesn't understand the technical issues might misunderstand the safety
issues as well.
* The issue of deontological disapproval ("virus creation
is -always- wrong") is a different one. Whether you hold to that
view or a more utilitarian/pragmatic/situational view doesn't necessarily
say anything about your competence to test (1) validly (2) safely.
So there are actually three arguments against, not two.
* I don't get the impression that 5,500 viruses were created. What
does seem to have been generated is 5,500 instances of presumed viruses
from one or more kits. That immediately casts doubt on the competence
of the test. (There are plenty of other reasons for doubting it, but
I've already gone over them elsewhere.)
* -One- person suggested retrospective testing?? It's hardly a new
suggestion. There have already been several competent tests using that
method. Rob and I wrote about it in VR in 2000, and I doubt if we invented
it.
I've already admitted to being warmer to it now than I was then. ;-) 
* Retrospective testing works at least as well as virus creation. Actually,
creating a -valid- test set of new viruses is a costlier, tougher job than
sound retrospective testing (and that's tough enough).
* I don't think retrospective testing is necessarily invalidated by the time
lag. Testing is always a snapshot of a product's abilities at the time of
the test, and accuracy has already slipped by the time of publication.
That's why comparatives -have- to be cyclic to be even mildly useful.
* Agreed, you can't assume that because detection performance was X% Y
months ago, the same will apply next month. But you can't assume that with
a test using new viruses - or non-viruses - either. You can only say that
on such a date a product detected a certain proportion of your viruses.
That may not actually tell you -anything- about its performance against
malware you didn't write. At least with retrospective testing, you can
hypothesise that some future malware will be along the same lines as what
already exists (that, after all, is the assumption behind heuristic
analysis!).

-- 
David Harley
Security Author & Consultant
Small Blue-Green World
dharley () smallblue-greenworld co uk




