Malicious Android: why the Birds are Angry

It’s no secret that trojans that misuse premium-rate SMS services are among the most prevalent problems in the mobile malware arena. The latest instance is a flood of so-called free knock-offs of real games from “Lagostrod” and “Miriada”, peppered with code that sends text messages to premium-rate services. Mikko Hypponen retweeted an estimate, based on comments on reddit, that the attackers could have made around $12,000,000.

According to Sophos’ Vanja Svajcer:

After more than a day on the market, the applications were pulled off by the Android Market security team. Google’s reaction has been quick, but not quick enough – at least ten thousand users downloaded one of the malicious apps from the list.

There’s much more information on the incident in Vanja’s blog and in Sean’s blog for F-Secure.

I hope to see an apology from Chris DiBona for suggesting that anyone working for an AV company should be ashamed of themselves if they have a product for Android, BlackBerry or iOS, but I won’t be holding my breath.

(Yes, this is the sort of stuff I usually post to Mac Virus, but it’s not really Apple-related, I guess, so I think I’ll probably do more of it here.)

David Harley CITP FBCS CISSP
Small Blue-Green World/AVIEN/Mac Virus
ESET Senior Research Fellow


Before you get to the blogs further down…

Welcome! Check out the links on the menu above to find out about Small Blue-Green World. This is the gateway to the various blogs and bits and bobs that constitute the SBGW presence on the web.

Essentially, this is a consultancy offering services to the security industry, launched by David Harley in 2006 and with one main customer (ESET). This particular page isn’t maintained very regularly: it currently has no commercial or advertising function, but it includes some papers and resources that may not be available elsewhere. The blogs linked here, however, especially those to which I contribute on ESET’s behalf, are maintained regularly.

The services I provide to ESET are quite wide-ranging, but they include blogging on the ESET blog page. I stopped contributing to SC Magazine’s Cybercrime Corner some time ago, and that page seems to have been removed. I’ll be looking back over my articles for that venue to see which might usefully be republished. Sometime…

I did write fairly regularly for Infosecurity Magazine, primarily on Mac issues, but haven’t done so for a while. Other authoring and editing includes conference papers, white papers and so on.

The ESET Threat Center and We Live Security pages include links to a range of resources. More specifically, the ESET resources page includes white papers written specifically for ESET, papers submitted on ESET’s behalf to external conferences and workshops, links to articles written for outside publications and sites (again on ESET’s behalf), and ESET’s monthly threat reports, for which I often provide articles and editing. Some of my conference presentations are also available there as slide decks.

Some articles and conference papers can’t be posted on a commercial site for copyright-related reasons, so I tend to post them on this site instead. When I remember. Specifically, most of that stuff is now posted to Geek Peninsula.

AVIEN (formerly the Anti-Virus Information Exchange Network), which was run as an independent organization by Andrew Lee and me (and before that by Robert Vibert), is still hosted on its own web site and has its own blog page there, but I’m no longer heavily associated with the organization except as an occasional blogger. I do maintain (intermittently) a phone scam resources page there.

I run several other specialist security blogs completely independently of ESET. These include a blog focused on hoaxes, spam, scams and similar nuisances (thanks to ESET N. America CEO and long-time friend and colleague Andrew Lee, you can also access this as http://www.virushoax.co.uk), and another that focuses (mostly) on Apple malware: essentially, it’s the current incarnation of the old Mac Virus web site originally founded by Susan Lesch, and it sometimes includes contributions from Old Mac Bloggit, the well-known pseudonym.

We no longer host the AMTSO blog, and I don’t do any administration on the main AMTSO site any more. I do, however, maintain an independent AV-testing blog/resource called, imaginatively, Anti-Malware Testing, which archives most of the articles I originally posted on the old AMTSO blog – of course, they don’t represent AMTSO’s official views. I also blog occasionally at other sites, including Infosecurity Magazine, (ISC)2 and Securiteam. I used to flag current articles, papers, blogs and media coverage at The Geek Peninsula (most of this is also tweeted via http://twitter.com/DavidHarleyBlog/), but I was having trouble remembering to update it. I’m now using it as a repository for (most of) my papers, some of my articles, pointers to my current and past blogs, and so on.

If you find any broken links on this site, please use the contact page to let us know so we can fix them. Thank you.

David Harley
Small Blue-Green World
ESET Senior Research Fellow

EICAR 2011 Paper

And a big hand, please, for my EICAR 2011 paper!

This is a paper I presented last week at the EICAR conference in Krems, Austria, on “Security Software & Rogue Economics: New Technology or New Marketing?” Here’s the abstract:

A highlight of the 2009 Virus Bulletin Conference was a panel session on “Free AV vs paid-for AV; Rogue AVs”, chaired by Paul Ducklin. As the title indicates, the discussion was clearly divided into two loosely related topics, but it was perhaps the first sign of a dawning awareness of a problem that the security industry is only now acknowledging.

Why is it so hard for the general public to distinguish between the legitimate AV marketing model and the rogue marketing approach used by rogue (fake) security software? Is it because the purveyors of rogue services are so fiendishly clever? Is it simply because the public is dumb? Is it, as many journalists would claim, the difficulty of discriminating between “legitimate” and criminal flavours of FUD (Fear, Uncertainty, Doubt)? Is the AV marketing model fundamentally flawed? In any case, the security industry needs to do a better job of explaining its business models in a way that clarifies the differences between real and fake anti-malware, and the way in which marketing models follow product architecture.

This doesn’t just mean declining to mimic rogue AV marketing techniques, bad though they are for the industry and for the consumer: it’s an educational initiative, and it involves educating the business user, the end-user, and the people who market and sell products. A security solution is far more than a scanner: it’s a whole process that ranges from technical research and development, through marketing and sales, to post-sales support. But so is a security threat, and rogue applications involve a wide range of skills: not just the technical range associated with a Stuxnet-like, multi-disciplinary tiger team, but skills ranging from development to search engine optimization, to the psychologies of evaluation and ergonomics, to identity and brand theft, to call centre operations that, for the technically unsophisticated customer, are hard to tell apart from legitimate support schemes. A complex problem requires a complex and comprehensive solution, incorporating techniques and technologies that take into account the vulnerabilities inherent in the behaviour of criminals, end-users and even prospective customers, rather than focusing entirely on technologies for the detection of malicious binaries.

This paper contrasts existing malicious and legitimate technology and marketing, but also looks at ways in which holistic integration of multi-layered security packages might truly reduce the impact of the current wave of fake applications and services.

David Harley CITP FBCS CISSP
ESET Senior Research Fellow

EICAR Performance Testing Paper


This is a paper called “Real Performance?” written by Ján Vrabec and myself and presented at the 2010 EICAR Conference in Paris in May, available by kind permission of EICAR.

Abstract:

The methodology and categories used in testing the performance of anti-malware products, and their impact on the computer, remain a contentious area. While there’s plenty of information, some of it actually useful, on detection testing, there is very little on performance testing. Yet, while the issues are different, sound performance testing is at least as challenging, in its own way, as detection testing. Performance testing based on the assumption that ‘one size [or methodology] fits all’, or that reflects an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly-implemented detection test. There are now several sources of guidelines on how to test detection, but no authoritative information on how to test performance in the context of anti-malware evaluation. Independent bodies are working on these right now, but the current absence of such standards often results in the publication of inaccurate comparative test results. This is because such tests do not accurately reflect the real needs of the end-user and dwell on irrelevant indicators, resulting in potentially skewed product rankings and conclusions. Thus, the “winner” of these tests is not always the best choice for the user. For example, a testing scenario created to evaluate the performance of a consumer product should not be used for benchmarking server products.

There are, of course, examples of questionable published results where the testing body or tester seems to be unduly influenced by the functionality of a particular vendor. However, there is also scope, as with other forms of testing, to introduce inadvertent bias into a product performance test. There are several benchmarking tools intended to evaluate hardware performance, but for testing software as complex as antivirus solutions, and their impact on the usability of a system, these simply aren’t precise enough. This is especially likely to cause problems when a single benchmark is used in isolation and looks at aspects of performance that may unfairly advantage or disadvantage specific products.

This paper aims to objectively evaluate the most common performance testing models used in anti-malware testing, such as scanning speed, memory consumption and boot speed, and to help highlight the main potential pitfalls of these testing procedures. We present recommendations on how to test objectively and how to spot a potential bias. In addition, we propose some “best-fit” testing scenarios for determining the most suitable anti-malware product according to the specific type of end user and target audience.
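One of the measurement pitfalls the abstract alludes to – a single cold run conflating cache state and other transient effects with the cost of the product itself – can be sketched in a few lines. This is a toy illustration, not code from the paper; `fake_scan` is a hypothetical stand-in for an on-demand scan workload:

```python
import statistics
import time

def timed(fn, runs=5, warmup=1):
    """Time fn over several runs, discarding warm-up iterations.

    A single cold measurement (runs=1, warmup=0) mixes one-off costs
    (e.g. cold caches) into the result -- the kind of imprecision that
    can unfairly advantage or disadvantage a product under test.
    """
    for _ in range(warmup):
        fn()  # prime caches; result deliberately discarded
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    # Median is less sensitive to outlier runs than a single sample.
    return statistics.median(samples)

def fake_scan():
    # Hypothetical stand-in for scanning a fixed file set.
    sum(i * i for i in range(200_000))

median_s = timed(fake_scan)
print(f"median scan time: {median_s:.4f}s")
```

Even a harness this simple makes the methodological point: what you report depends on warm-up policy, repetition count and the statistic chosen, before any question of which workload is representative of the end user.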

Download – EICAR: Real Performance paper

David Harley 
Security Author/Consultant at Small Blue-Green World
ESET Senior Research Fellow
Mac Virus Administrator


Re-Floating the Titanic: Social Engineering Paper


Re-Floating the Titanic: Dealing with Social Engineering Attacks is a paper I presented at EICAR in 1998. It hasn’t been available on the web for a while, but as I’ve had occasion to refer to it several times (for instance, http://www.eset.com/blog/2010/04/16/good-password-practice-not-the-golden-globe-award and http://avien.net/blog/?p=484) in the last few days, I figured it was time it went up again. To my surprise, it doesn’t seem to have dated much (less, certainly, than the abstract suggests).

Abstract follows: 

“Social Engineering” as a concept has moved from the social sciences and into the armouries of cyber-vandals, who have pretty much set the agenda and, arguably, the definitions. Most of the available literature focuses on social engineering in the limited context of password stealing by psychological subversion. This paper re-examines some common assumptions about what constitutes social engineering, widening the definition of the problem to include other forms of applied psychological manipulation, so as to work towards a holistic solution to a problem that is not generally explicitly recognised as a problem. Classic social engineering techniques and countermeasures are considered, but where previous literature offers piecemeal solutions to a limited range of problems, this paper attempts to extrapolate general principles from particular examples.

It does this by attempting a comprehensive definition of what constitutes social engineering as a security threat, including taxonomies of social engineering techniques and user vulnerabilities. Having formalized the problem, it then moves on to consider how to work towards an effective solution, making use of realistic, pragmatic policies, and examines ways of implementing them effectively through education and management buy-in.

The inclusion of spam, hoaxes (especially hoax virus alerts) and distribution of some real viruses and Trojan Horses in the context of social engineering is somewhat innovative, and derives from the recognition among some security practitioners of an increase in the range of threats based on psychological manipulation. What’s important here is that educational solutions to these problems not only have a bearing on solutions to other social engineering issues, but also equip computer users to make better and more appropriate use of their systems in terms of general security and safety.

David Harley FBCS CITP CISSP
Security Author/Consultant at Small Blue-Green World
Chief Operations Officer, AVIEN
Chief Cook & Bottle Washer, Mac Virus
ESET Research Fellow & Director of Malware Intelligence

Also blogging at:
http://avien.net/blog
http://www.eset.com/blog
https://smallbluegreenblog.wordpress.com/
http://blogs.securiteam.com
http://blog.isc2.org/
http://dharley.wordpress.com
http://macvirus.com