On 19 July I took part in an interesting panel discussion about testing anti-virus software. This took place at a media event set up by Kaspersky Lab at The Fairmont hotel in San Francisco.
The panel discussion, entitled "Examining Test Methodologies for Today's AV", included me, Jonathan Penn (Vice President, Forrester Research), Lysa Myers (Director of Research, West Coast Labs) and Roel Schouwenberg (Senior Anti-Virus Researcher, Kaspersky Lab). The discussion was moderated by Ryan Naraine.
We talked about testing methodologies and what makes a good test (and a bad one!), and asked whether or not journalists should actually run anti-malware tests themselves. My view, then and now, is that they should, but that they need to be aware of the inherent limitations of running such tests. This applies to any tester, of course, journalist or otherwise.
If an anti-virus product is reviewed in a tiny test and fails to protect against three threats out of ten, the only safe conclusion one can reach is that it failed to protect against three out of ten. It is not safe to make wider claims such as, "This product is useless" or "This product only protects against 70 per cent of internet threats."
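To see why a ten-threat test is too small to support sweeping claims, consider a hypothetical product that genuinely blocks 90 per cent of threats. A quick binomial calculation (the rates and sample size here are my own illustrative assumptions, not figures from any real test) shows that even this strong product would miss three or more of ten threats surprisingly often:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k 'misses' in n trials, each with miss probability p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

miss_rate = 0.10  # hypothetical: product truly blocks 90% of threats
n = 10            # threats in the tiny test

# Probability the product misses 3 or more of the 10 threats
p_three_or_more = 1 - sum(binom_pmf(k, n, miss_rate) for k in range(3))
print(f"{p_three_or_more:.3f}")  # prints 0.070
```

In other words, roughly one tiny test in fourteen would make a 90-per-cent-effective product look like a "70 per cent" product, which is exactly why the narrow conclusion is the only safe one.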
Another point I made was that, while I would love everyone to care only about the tests that I perform, in truth it is best if there are more tests rather than fewer. Say 100 people perform tests of varying quality. As long as the testers are not biased or completely incompetent, you would expect a pattern of effectiveness to form around specific products, and possibly vendors. The best products should generally appear near the top and the worst near the bottom.
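This averaging effect can be sketched with a small Monte Carlo simulation. The products, their true protection rates, and the test sizes below are all invented for illustration; the point is only that many imperfect tests, taken together, recover the true ordering:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical true protection rates for three imaginary products
true_rates = {"Product A": 0.95, "Product B": 0.85, "Product C": 0.70}
samples_per_test = 50  # threats used by each individual (noisy) test
num_tests = 100        # independent testers of varying quality

averages = {}
for name, rate in true_rates.items():
    scores = []
    for _ in range(num_tests):
        # Each test counts how many of its threats the product blocks
        blocked = sum(random.random() < rate for _ in range(samples_per_test))
        scores.append(blocked / samples_per_test)
    averages[name] = sum(scores) / num_tests

# Ranking by average score across all 100 tests matches the true ordering
ranking = sorted(averages, key=averages.get, reverse=True)
print(ranking)  # prints ['Product A', 'Product B', 'Product C']
```

Any single 50-threat test here can misrank two similar products, but the aggregate of 100 tests pins each product's average to within about half a percentage point of its true rate.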
There will, as in any comparison of tests, be anomalies, but looking at lots of tests should provide an overall sense of product effectiveness.
Those wanting to run tests that will stand up to scrutiny should at least read AMTSO's testing guidelines, which are available for free from the documents section of AMTSO's website. If you want to run realistic 'dynamic' tests (as I do), you'll want to digest the document entitled AMTSO Best Practices for Dynamic Testing.