Hoaxes and social media paper

Welcome to the Web 2.0 incarnation of the Misinformation Superhighway. Did you really think that hoaxing had died out?

Origin of the Specious: the Evolution of Misinformation is a white paper for ESET that’s just come out.

David Harley CITP FBCS CISSP
Small Blue-Green World
ESET Senior Research Fellow


Spamfighting and Hamfighting

This is an article on hamfighting from the July 2006 issue of Virus Bulletin, made available here by kind permission of Virus Bulletin, which holds the copyright. (You can also read it in HTML on the Virus Bulletin site, but for that you need to be a subscriber – registration is free, though.)

It addresses the problem of legitimate mail (‘ham’) misdiagnosed as spam, with particular reference to aggressive filtering by Verizon. I’m posting it here now because it has particular relevance to a post I’m putting together on Mac Virus. Here’s a brief extract from the introduction to the paper:

Complaints in various forums of poor email delivery service from the ISP seemed to be confirmed by claims from Verizon ‘insiders’ that a policy of rejecting mail by IP block resulted in the loss of all mail from large portions of Europe and Asia. This led to a much-publicized class action, resulting in a settlement offer from Verizon to compensate customers who lost legitimate mail between October 2004 and May 2005.

In the near future I’ll probably be putting up some more papers and articles that aren’t available on my own sites, or external links where appropriate.

David Harley CITP FBCS CISSP
ESET Senior Research Fellow

After AMTSO: a paper for EICAR 2012

This is a paper presented at the 2012 EICAR conference in Lisbon. (I actually presented two: the other one will be posted here in a day or two, or maybe a little longer, as I’m travelling right now.) It’s posted here rather than on the ESET resources page for conference papers, in accordance with EICAR’s copyright stipulation that conference papers may only be posted on personal web sites.

After AMTSO: a funny thing happened on the way to the forum

Here’s the abstract: 

Imagine a world where security product testing is really, really useful.

  • Testers have to prove that they know what they’re doing before anyone is allowed to draw conclusions from their results in a published review.
  • Vendors are not able to game the system by submitting samples that their competitors are unlikely to have seen, or to buy their way to the top of the rankings by heavy investment in advertising with the reviewing publication, or by engaging the testing organization for consultancy. 
  • Publishers acknowledge that their responsibility to their readers means that the claims they make for tests they sponsor should be realistic, relative to the resources they are able to put into them. 
  • Vendors don’t try to pressure testers into improving their results by threatening to report them to AMTSO.
  • Testers have found a balance between avoiding being unduly influenced by vendors on one hand and ignoring informed and informative input from vendors on the other. 
  • Vendors don’t waste time they could be spending on enhancing their functionality, on tweaking their engines to perform optimally in unrealistic tests.
  • Reviewers don’t magnify insignificant differences in test performance between products by camouflaging a tiny sample set with percentages, suggesting that a product that detects ten out of ten samples is 10% better than a product that only detects nine (there’s a short illustrative sketch after the abstract).
  • Vendors don’t use tests they know to be unsound to market their products because they happened to score highly.
  • Testers don’t encourage their audiences to think that they know more about validating and classifying malware than vendors.
  • Vendors and testers actually respect each other’s work.

When I snap my fingers, you will wake out of your trance, and we will consider how we could actually bring about this happy state of affairs.

For a while, it looked as if AMTSO, the Anti-Malware Testing Standards Organization, might be the key (or at any rate one of the keys), and we will summarize the not inconsiderable difference that AMTSO has made to the testing landscape. However, it’s clear that the organization has no magic wand and a serious credibility problem, so it isn’t going to save the world (or the internet) all on its own. So where do we (the testing and anti-malware communities) go from here? Can we identify the other players in this arena and engage with them usefully and appropriately?
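
A footnote to the percentages point in the abstract: here’s a minimal Python sketch of my own (it isn’t part of the paper, and the product names and figures are invented) showing why a 10/10 versus 9/10 result on a ten-sample test says very little. It computes 95% Wilson score intervals for the two detection rates:

    import math

    def wilson_interval(detected, n, z=1.96):
        # 95% Wilson score confidence interval for a detection rate
        p = detected / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return max(0.0, centre - half), min(1.0, centre + half)

    # Two hypothetical products tested against the same ten samples
    for name, detected in [("Product A", 10), ("Product B", 9)]:
        low, high = wilson_interval(detected, 10)
        print(f"{name}: {detected}/10 detected, plausible detection rate {low:.0%} to {high:.0%}")

The intervals come out at roughly 72% to 100% and 60% to 98% respectively: they overlap almost entirely, so the apparent gap between the two products is statistical noise at this sample size.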

David Harley CITP FBCS CISSP
Small Blue-Green World
ESET Senior Research Fellow

EICAR 2011 Paper

And a big hand, please, for my EICAR 2011 paper!

This is “Security Software & Rogue Economics: New Technology or New Marketing?”, the paper I presented last week at the EICAR conference in Krems, Austria. Here’s the abstract:

A highlight of the 2009 Virus Bulletin Conference was a panel session on “Free AV vs paid-for AV; Rogue AVs”, chaired by Paul Ducklin. As the title indicates, the discussion was clearly divided into two loosely related topics, but it was perhaps the first indication of a dawning awareness that the security industry has a problem that is only now being acknowledged.

Why is it so hard for the general public to distinguish between the legitimate AV marketing model and the rogue marketing approach used by rogue (fake) security software? Is it because the purveyors of rogue services are so fiendishly clever? Is it simply because the public is dumb? Is it, as many journalists would claim, the difficulty of discriminating between “legitimate” and criminal flavours of FUD (Fear, Uncertainty, Doubt)? Is the AV marketing model fundamentally flawed? In any case, the security industry needs to do a better job of explaining its business models in a way that clarifies the differences between real and fake anti-malware, and the way in which marketing models follow product architecture.

This doesn’t just mean declining to mimic rogue AV marketing techniques, bad though they are for the industry and for the consumer: it’s an educational initiative, and it involves educating the business user, the end-user, and the people who market and sell products. A security solution is far more than a scanner: it’s a whole process that ranges from technical research and development, through marketing and sales, to post-sales support. But so is a security threat, and rogue applications involve a wide range of skills: not just the technical range associated with a Stuxnet-like, multi-disciplinary tiger team, but broad skills ranging from development to search engine optimization, to the psychologies of evaluation and ergonomics, to identity and brand theft, to call centre operations that, for the technically unsophisticated customer, are hard to tell apart from legitimate support schemes. A complex problem requires a complex and comprehensive solution, incorporating techniques and technologies that take into account the vulnerabilities inherent in the behaviour of criminals, end-users and even prospective customers, rather than focusing entirely on technologies for the detection of malicious binaries.

This paper contrasts existing malicious and legitimate technology and marketing, but also looks at ways in which holistic integration of multi-layered security packages might truly reduce the impact of the current wave of fake applications and services.

David Harley CITP FBCS CISSP
ESET Senior Research Fellow

Malware Naming, Shape Shifters & Sympathetic Magic

This is "The Game of the Name: Malware Naming, Shape Shifters and Sympathetic Magic", the paper on malware naming that I presented at the 3rd Cybercrime Forensics Education & Training (CFET 2009) Conference in Canterbury.

Here’s the abstract:

Once upon a time, one infection by specific malware looked much like another infection, to an antivirus scanner if not to the naked eye. Even back then, virus naming wasn’t very consistent between vendors, but at least virus encyclopaedias and third-party resources like vgrep made it generally straightforward to map one vendor’s name for a virus to another vendor’s name for the same malware.

In 2009, though, the threat landscape looks very different. Viruses and other replicative malware, while far from extinct, pose a comparatively manageable problem compared to other threats with the single common characteristic of malicious intent. Proof-of-Concept code with sophisticated self-replicating mechanisms is of less interest to today’s malware authors than shape-shifting Trojans that change their appearance frequently to evade detection and are intended to make money for criminals rather than to gain adolescent admiration and bragging rights.

Sheer sample glut makes it impossible to categorize and standardize on naming for each and every unique sample out of tens of thousands processed each day.

Detection techniques such as generic signatures, heuristics and sandboxing have also changed the ways in which malware is detected and therefore how it is classified, confounding the old assumption of a simple one-to-one relationship between a detection label and a malicious program. This presentation will explain how one-to-many, many-to-one or many-to-many models are at least as likely as the old one-detection-per-variant model, why “Do you detect Win32/UnpleasantVirus.EG?” is such a difficult question to answer, and why exact identification is not a pre-requisite for detection and remediation of malware and actually militates against the most effective use of analysis and development time and resources. But what is the information that the end-user or end-site really needs to know about an incoming threat?
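
As a rough illustration of those relationship models (this sketch is my own, not from the paper, and apart from Win32/UnpleasantVirus.EG the detection labels and sample IDs are invented): a generic or heuristic detection can cover many samples, and a single sample can be hit by more than one detection.

    # Hypothetical mapping from detection labels to the samples they fire on
    detections = {
        "Win32/UnpleasantVirus.EG": ["sample_0001"],                             # classic one-to-one label
        "Win32/Generic.Packed": ["sample_0001", "sample_0042", "sample_0099"],   # generic signature: one-to-many
        "Heuristic.Suspicious": ["sample_0042", "sample_0123"],                  # behavioural/heuristic hit
    }

    # Invert the mapping: which labels can apply to a given sample?
    labels_for_sample = {}
    for label, samples in detections.items():
        for sample in samples:
            labels_for_sample.setdefault(sample, []).append(label)

    # sample_0001 is covered by two different labels, so "Do you detect
    # Win32/UnpleasantVirus.EG?" has no single, simple answer.
    print(labels_for_sample["sample_0001"])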

David Harley
Small Blue-Green World
ESET Senior Research Fellow