EICAR 2011 Paper

And a big hand, please, for my EICAR 2011 paper!

This is a paper I presented last week at the EICAR conference in Krems, Austria, on “Security Software & Rogue Economics: New Technology or New Marketing?” Here’s the abstract:

A highlight of the 2009 Virus Bulletin Conference was a panel session on “Free AV vs paid-for AV; Rogue AVs”, chaired by Paul Ducklin. As the title indicates, the discussion was clearly divided into two loosely related topics, but it was perhaps the first indication of a dawning awareness that the security industry has a problem that is only now being acknowledged.

Why is it so hard for the general public to distinguish between the legitimate AV marketing model and the rogue marketing approach used by rogue (fake) security software? Is it because the purveyors of rogue services are so fiendishly clever? Is it simply because the public is dumb? Is it, as many journalists would claim, the difficulty of discriminating between “legitimate” and criminal flavours of FUD (Fear, Uncertainty, Doubt)? Is the AV marketing model fundamentally flawed? In any case, the security industry needs to do a better job of explaining its business models in a way that clarifies the differences between real and fake anti-malware, and the way in which marketing models follow product architecture.

This doesn’t just mean declining to mimic rogue AV marketing techniques, bad though they are for the industry and for the consumer: it’s an educational initiative, and it involves educating the business user, the end-user, and the people who market and sell products. A security solution is far more than a scanner: it’s a whole process that ranges from technical research and development, through marketing and sales, to post-sales support. But so is a security threat, and rogue applications involve a wide range of skills: not just the technical range associated with a Stuxnet-like, multi-disciplinary tiger team, but the broad skills ranging from development to search engine optimization, to the psychologies of evaluation and ergonomics, to identity and brand theft, to call centre operations that, for the technically unsophisticated customer, are hard to tell apart from legitimate support schemes. A complex problem requires a complex and comprehensive solution, incorporating techniques and technologies that take into account the vulnerabilities inherent in the behaviour of criminals, end-users and even prospective customers, rather than focusing entirely on technologies for the detection of malicious binaries.

This paper contrasts existing malicious and legitimate technology and marketing, but also looks at ways in which holistic integration of multi-layered security packages might truly reduce the impact of the current wave of fake applications and services.

David Harley CITP FBCS CISSP
ESET Senior Research Fellow


Re-Floating the Titanic: Social Engineering Paper


Re-Floating the Titanic: Dealing with Social Engineering Attacks is a paper I presented at EICAR in 1998. It hasn’t been available on the web for a while, but as I’ve had occasion to refer to it several times (for instance, http://www.eset.com/blog/2010/04/16/good-password-practice-not-the-golden-globe-award and http://avien.net/blog/?p=484) in the last few days, I figured it was time it went up again. To my surprise, it doesn’t seem to have dated particularly (less so than the abstract suggests).

Abstract follows: 

“Social Engineering” as a concept has moved from the social sciences and into the armouries of cyber-vandals, who have pretty much set the agenda and, arguably, the definitions. Most of the available literature focuses on social engineering in the limited context of password stealing by psychological subversion. This paper re-examines some common assumptions about what constitutes social engineering, widening the definition of the problem to include other forms of applied psychological manipulation, so as to work towards a holistic solution to a problem that is not generally explicitly recognised as a problem. Classic social engineering techniques and countermeasures are considered, but where previous literature offers piecemeal solutions to a limited range of problems, this paper attempts to extrapolate general principles from particular examples.

It does this by attempting a comprehensive definition of what constitutes social engineering as a security threat, including taxonomies of social engineering techniques and user vulnerabilities. Having formalized the problem, it then moves on to consider how to work towards an effective solution, making use of realistic, pragmatic policies, and examines ways of implementing them effectively through education and management buy-in.

The inclusion of spam, hoaxes (especially hoax virus alerts) and distribution of some real viruses and Trojan Horses in the context of social engineering is somewhat innovative, and derives from the recognition among some security practitioners of an increase in the range of threats based on psychological manipulation. What’s important here is that educational solutions to these problems not only have a bearing on solutions to other social engineering issues, but also equip computer users to make better and more appropriate use of their systems in terms of general security and safety.

David Harley FBCS CITP CISSP
Security Author/Consultant at Small Blue-Green World
Chief Operations Officer, AVIEN
Chief Cook & Bottle Washer, Mac Virus
ESET Research Fellow & Director of Malware Intelligence

Also blogging at:
http://avien.net/blog
http://www.eset.com/blog
https://smallbluegreenblog.wordpress.com/
http://blogs.securiteam.com
http://blog.isc2.org/
http://dharley.wordpress.com
http://macvirus.com

Malware Naming, Shape Shifters & Sympathetic Magic


This is the paper on malware naming that I presented at the 3rd Cybercrime Forensics Education & Training Conference (CFET 2009) in Canterbury: “The Game of the Name: Malware Naming, Shape Shifters and Sympathetic Magic”.

Here’s the abstract:

Once upon a time, one infection by specific malware looked much like another infection, to an antivirus scanner if not to the naked eye. Even back then, virus naming wasn’t very consistent between vendors, but at least virus encyclopaedias and third-party resources like vgrep made it generally straightforward to map one vendor’s name for a virus to another vendor’s name for the same malware.

In 2009, though, the threat landscape looks very different. Viruses and other replicative malware, while far from extinct, pose a comparatively manageable problem next to other threats whose single common characteristic is malicious intent. Proof-of-concept code with sophisticated self-replicating mechanisms is of less interest to today’s malware authors than shape-shifting Trojans that change their appearance frequently to evade detection and are intended to make money for criminals rather than to earn adolescent admiration and bragging rights.

Sheer sample glut makes it impossible to categorize and standardize on naming for each and every unique sample out of tens of thousands processed each day.

Detection techniques such as generic signatures, heuristics and sandboxing have also changed the ways in which malware is detected, and therefore how it is classified, confounding the old assumption of a simple one-to-one relationship between a detection label and a malicious program. This presentation will explain how one-to-many, many-to-one, or many-to-many models are at least as likely as the old one-detection-per-variant model, why “Do you detect Win32/UnpleasantVirus.EG?” is such a difficult question to answer, and why exact identification is not a prerequisite for the detection and remediation of malware and in fact militates against the most effective use of analysis and development time and resources. But what information does the end-user or end-site really need to know about an incoming threat?
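
The relationships involved are easier to see in miniature. The sketch below is illustrative only and not taken from the paper: the detection names and sample identifiers are invented, but even a toy mapping shows why a question framed around a single label has no clean answer once one generic detection covers many samples and one sample can match several detections.

```python
# Toy model of the naming relationships described above. All detection names
# and sample identifiers are invented for illustration.

# One generic detection name can cover many distinct samples (one-to-many)...
detection_to_samples = {
    "Win32/UnpleasantVirus.EG": {"sample_001", "sample_002", "sample_003"},
    "Win32/Agent.Generic":      {"sample_002", "sample_004"},
}
# ...and, as sample_002 shows, one sample can fall under more than one label
# (many-to-many), depending on which signature or heuristic fires.

def samples_covered_by(label: str) -> set:
    """Which samples does a given detection label cover?"""
    return detection_to_samples.get(label, set())

def labels_covering(sample: str) -> set:
    """Which detection labels match a given sample? Often more than one."""
    return {label for label, samples in detection_to_samples.items()
            if sample in samples}

# "Do you detect Win32/UnpleasantVirus.EG?" is really a question about a
# shifting set of samples, not about a single binary:
print(samples_covered_by("Win32/UnpleasantVirus.EG"))
print(labels_covering("sample_002"))  # two labels, one sample
```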

David Harley
Small Blue-Green World
ESET Senior Research Fellow

Execution Context in Anti-Malware Testing

This is one of my 2009 papers, presented by Randy Abrams and me on behalf of ESET at the EICAR 2009 Conference in Berlin.

Abstract

Anti-malware testing methodology remains a contentious area because many testers are insufficiently aware of the complexities of malware and anti-malware technology. This results in the frequent publication of comparative test results that are misleading and often totally invalid because they don’t accurately reflect the detection capability of the products under test. Because many tests are based purely on static testing, where products are tested by using them to scan presumed infected objects passively, those products that use more proactive techniques such as active heuristics, emulation and sandboxing are frequently disadvantaged in such tests, even assuming that sample sets are correctly validated.
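
As an illustrative aside (not from the paper itself), the sketch below uses a hypothetical product model; the Product class and its methods are invented stand-ins, not any real vendor’s API. A static test exercises only the on-demand scanner, so a product whose protection fires when a sample actually executes, via emulation, sandboxing or behaviour blocking, scores lower than its real-world capability warrants.

```python
# Hypothetical product model and test harnesses: a sketch of why static-only
# testing under-reports products that rely on runtime (dynamic) detection.

class Product:
    def scan_file(self, sample: bytes) -> bool:
        """On-demand scan of a file at rest: all that a static test exercises."""
        return b"KNOWN_PATTERN" in sample

    def execute_and_monitor(self, sample: bytes) -> bool:
        """Runtime layers (emulation, sandboxing, behaviour blocking), crudely
        modelled here as spotting suspicious behaviour at execution time."""
        return self.scan_file(sample) or b"SUSPICIOUS_RUNTIME_BEHAVIOUR" in sample

def static_test(product: Product, samples: list) -> float:
    """Detection rate when samples are only scanned passively."""
    return sum(product.scan_file(s) for s in samples) / len(samples)

def dynamic_test(product: Product, samples: list) -> float:
    """Detection rate when samples are executed and monitored."""
    return sum(product.execute_and_monitor(s) for s in samples) / len(samples)

samples = [
    b"header KNOWN_PATTERN payload",              # caught either way
    b"packed blob SUSPICIOUS_RUNTIME_BEHAVIOUR",  # caught only at run time
]
product = Product()
print(static_test(product, samples))   # 0.5 -- runtime detection is invisible
print(dynamic_test(product, samples))  # 1.0 -- same product, tested dynamically
```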

Recent examples of misleading published statistical data include the ranking of anti-malware products according to reports returned by multi-scanner sample submission sites (even though the better examples of such sites make it clear that this is not an appropriate use of their services), and the use of similar reports to generate other statistics, such as the assumed prevalence of specific malware. These problems, especially when combined with other testing problem areas such as accurate sample validation and classification, introduce major statistical anomalies.

This paper reviews the most common mainstream anti-malware detection techniques (search strings and simple signatures, generic signatures, passive heuristics, active heuristics and behaviour analysis) in the context of anti-malware testing, for the purposes of single product testing, comparative detection testing, and the generation of prevalence and global detection data. Specifically, issues around static and dynamic testing are examined. Issues with additional impact, such as sample classification and false positives, are also considered: not only the false identification of innocent applications as malware, but also contentious classification issues such as (1) the trapping of samples, especially corrupted or truncated honeypot and honeynet samples that are intended maliciously but unable to pose a direct threat to target systems, and (2) the use of criteria such as packing and obfuscation status as a primary heuristic for the identification of malware.
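
To make point (2) concrete, here is a deliberately simplistic, invented example (the file records and the packing flag are not real data): a heuristic that treats packing or obfuscation as the primary indicator of malice flags legitimately packed software as readily as real malware, while missing malware that isn’t packed at all.

```python
# Toy illustration of point (2): "packed => malicious" as a primary heuristic.
# The file records below are invented for illustration only.

FILES = [
    {"name": "legit_installer.exe", "packed": True,  "malicious": False},
    {"name": "trojan_dropper.exe",  "packed": True,  "malicious": True},
    {"name": "plain_malware.exe",   "packed": False, "malicious": True},
]

def naive_verdict(record: dict) -> bool:
    """Primary heuristic: packed or obfuscated means malware. Too blunt alone."""
    return record["packed"]

false_positives = [f["name"] for f in FILES
                   if naive_verdict(f) and not f["malicious"]]
false_negatives = [f["name"] for f in FILES
                   if not naive_verdict(f) and f["malicious"]]

print(false_positives)  # ['legit_installer.exe']: innocent application flagged
print(false_negatives)  # ['plain_malware.exe']: unpacked malware missed
```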

EICAR execution context paper

Phish Phodder: Is User Education Helping or Hindering?


David Harley & Andrew Lee, “Phish Phodder: Is User Education Helping or Hindering?” (davidharleyandrewleevb2007), September 2007, Virus Bulletin. Copyright is held by Virus Bulletin Ltd, but the document is made available on this site for personal use free of charge by permission of Virus Bulletin.

ABSTRACT
Mostly, security professionals can spot a phish a mile off. If they do err, it’s usually on the side of caution, for instance when real organizations fail to observe best practice and generate phish-like marketing messages. Many sites are now addressing the problem with phishing quizzes, intended to teach the everyday user to distinguish phish from phowl (sorry). Academic papers on why people fall for phishing mails and sites are something of a growth industry. Yet phishing attacks continue to increase, and while accurate and up-to-date figures for financial loss are hard to come by, indications are that losses from phishing and other forms of identity theft continue to climb.

This paper:
1. Evaluates current research on how end users are susceptible to phishing attacks and ID theft.
2. Evaluates a range of web-based educational and informational resources in general and summarizes the pros and cons of the quiz approach in particular.
3. Reviews the shared responsibility of phished institutions and phishing mail targets for reducing the impact of phishing scams. What constitutes best practice for finance-related mail-outs and e-commerce transactions? How far can we rely on detection technology?