Before you get to the blogs further down…

Welcome! Check out the links on the menu above to find out about Small Blue-Green World. This is the gateway to the various blogs and bits and bobs that constitute the SBGW presence on the web.

Essentially, this is a consultancy offering services to the security industry, launched by David Harley in 2006 and with one main customer (ESET). As a result, this particular page isn't maintained very regularly: it currently has no commercial or advertising function, but it does include some papers and resources that may not be available elsewhere. The blogs linked here, however, especially those to which I contribute on ESET's behalf, are maintained regularly.

The services I provide to ESET are quite wide-ranging, but they include blogging on the ESET blog page. I stopped contributing to SC Magazine’s Cybercrime Corner some time ago, and that page seems to have been removed. I’ll be looking back over my articles for that venue to see which might usefully be republished. Sometime…

I did write fairly regularly for Infosecurity Magazine, primarily on Mac issues, but haven’t done so for a while. Other authoring and editing includes conference papers, white papers and so on.

The ESET Threat Center and We Live Security pages include links to a range of resources. More specifically, the ESET resources page includes white papers written specifically for ESET, papers submitted on ESET's behalf to external conferences and workshops, links to articles written for outside publications and sites (again on ESET's behalf), and ESET's monthly threat reports, for which I often provide articles and editing. Some of my conference presentations are also available here as slide decks.

Some articles and conference papers can’t be posted on a commercial site for copyright-related reasons, so I tend to post them on this site instead. When I remember. Specifically, most of that stuff is now posted to Geek Peninsula.

AVIEN (formerly the Anti-Virus Information Exchange Network), which was run as an independent organization by myself and Andrew Lee (and before that by Robert Vibert), is still hosted on its own web site and has its own blog page hosted there, but I’m no longer heavily associated with the organization except as an occasional blogger there. I do maintain (intermittently) a phone scam resources page there.

I run several other specialist security blogs completely independently of ESET. These include a blog focused on hoaxes, spam, scams and similar nuisances (thanks to ESET N. America CEO and long-time friend and colleague Andrew Lee, you can also access it as http://www.virushoax.co.uk), and another that focuses (mostly) on Apple malware: essentially, it's the current incarnation of the old Mac Virus web site originally founded by Susan Lesch, and it sometimes includes contributions from Old Mac Bloggit, the well-known pseudonym.

We no longer host the AMTSO blog, and I don't do any administration on the main AMTSO site any more. I do, however, maintain an independent AV-testing blog/resource called, imaginatively, Anti-Malware Testing, which archives most of the articles I originally posted on the old AMTSO blog – of course, they do not represent AMTSO's official views. I also blog occasionally at other sites, including Infosecurity Magazine, (ISC)2 and Securiteam. I used to flag current articles, papers, blogs and media coverage at The Geek Peninsula (most of this is also tweeted via http://twitter.com/DavidHarleyBlog/), but I was having trouble remembering to update it. I'm now using it as a repository for (most of) my papers, some of my articles, pointers to my current and past blogs, and so on.

If you find any broken links on this site, please use the contact page to let us know so that we can fix them. Thank you.

David Harley
Small Blue-Green World
ESET Senior Research Fellow


EICAR Performance Testing Paper

Back to ESET White Papers

This is a paper called “Real Performance?”, written by Ján Vrabec and myself and presented at the EICAR 2010 Conference in Paris in May; it is made available here by kind permission of EICAR.

Abstract:

The methodology and categories used in performance testing of anti-malware products, and their impact on the computer, remain a contentious area. While there’s plenty of information, some of it actually useful, on detection testing, there is very little on performance testing. Yet, while the issues are different, sound performance testing is at least as challenging, in its own way, as detection testing. Performance testing based on assumptions that ‘one size [or methodology] fits all’, or that reflects an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly-implemented detection test. There are now several sources of guidelines on how to test detection, but no authoritative information on how to test performance in the context of anti-malware evaluation. Independent bodies are working on these right now, but the current absence of such standards often results in the publication of inaccurate comparative test results. This is because they do not accurately reflect the real needs of the end user and dwell on irrelevant indicators, resulting in potentially skewed product rankings and conclusions. Thus, the “winner” of these tests is not always the best choice for the user. For example, a testing scenario created to evaluate the performance of a consumer product should not be used for benchmarking server products.

There are, of course, examples of questionable results that have been published where the testing body or tester seems to be unduly influenced by the functionality of a particular vendor. However, there is also scope, as with other forms of testing, to introduce inadvertent bias into a product performance test. There are several benchmarking tools intended to evaluate the performance of hardware, but for testing software as complex as antivirus solutions, and their impact on the usability of a system, these simply aren’t precise enough. This is especially likely to cause problems when a single benchmark is used in isolation and looks at aspects of performance that may give specific products an unfair advantage or disadvantage.

This paper aims to objectively evaluate the most common performance testing models used in anti-malware testing, such as scanning speed, memory consumption and boot speed, and to help highlight the main potential pitfalls of these testing procedures. We present recommendations on how to test objectively and how to spot a potential bias. In addition, we propose some “best-fit” testing scenarios for determining the most suitable anti-malware product according to the specific type of end user and target audience.
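Purely for illustration (this isn’t from the paper, and it isn’t ESET code), here’s a minimal Python sketch of the kind of raw measurement a simple scan-speed and memory test might take. The “scancli” command and the corpus path are hypothetical placeholders, and a serious test would also control for warm versus cold caches, run counts and workload-appropriate file sets – which is part of the paper’s point about single benchmarks used in isolation.

```python
# Minimal sketch of a scan-speed/memory measurement harness.
# "scancli" and the corpus path are hypothetical placeholders.
import subprocess
import time
import resource  # POSIX-only; used for the peak RSS of child processes

def time_scan(scanner_cmd, target_dir, runs=3):
    """Run an on-demand scan several times; return mean wall-clock seconds
    and the peak resident set size of child processes (KB on Linux)."""
    timings = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(scanner_cmd + [target_dir], check=True)
        timings.append(time.monotonic() - start)
    peak_rss = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    return sum(timings) / len(timings), peak_rss

if __name__ == "__main__":
    mean_s, peak_kb = time_scan(["scancli", "--scan"], "/srv/test-corpus")
    print(f"mean scan time: {mean_s:.1f}s, peak child RSS: {peak_kb} KB")
```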

Download – EICAR: Real Performance paper

David Harley 
Security Author/Consultant at Small Blue-Green World
ESET Senior Research Fellow
Mac Virus Administrator

 

Making Sense of Anti-Malware Comparative Testing

[To return to ESET white papers page click here: http://www.eset.com/threat-center/blog.]

This is an Elsevier preprint of an article on the main issues around comparative testing of antivirus/anti-malware products, made available here by permission of Elsevier.

The fully formatted, proofed and reviewed version is available at http://dx.doi.org/10.1016/j.istr.2009.03.002.

Abstract:

If there’s a single problem illustrating the gulf between the anti-malware industry and the rest of the online world, it revolves around the difficulties and misunderstandings that plague product testing and evaluation. This article considers these issues and the initiatives taken by the anti-malware and testing sectors to resolve some of them.

Execution Context in Anti-Malware Testing

[Go back to ESET White Papers page.]
[Go back to ESET blog.]

This is one of my 2009 papers, presented by Randy Abrams and myself on behalf of ESET at the EICAR 2009 Conference in Berlin.

Abstract

Anti-malware testing methodology remains a contentious area because many testers are insufficiently aware of the complexities of malware and anti-malware technology. This results in the frequent publication of comparative test results that are misleading and often totally invalid because they don’t accurately reflect the detection capability of the products under test. Because many tests are based purely on static testing, where products are tested by using them to scan presumed infected objects passively, those products that use more proactive techniques such as active heuristics, emulation and sandboxing are frequently disadvantaged in such tests, even assuming that sample sets are correctly validated.

Recent examples of misleading published statistical data include the ranking of anti-malware products according to reports returned by multi-scanner sample submission sites, even though the better examples of such sites are clear that this is not an appropriate use of their services, and the use of similar reports to generate other statistical data such as the assumed prevalence of specific malware. These problems, especially when combined with other testing problem areas such as accurate sample validation and classification, introduce major statistical anomalies.

In this paper, it is proposed to review the most common mainstream anti-malware detection techniques (search strings and simple signatures, generic signatures, passive heuristics, active heuristics and behaviour analysis) in the context of anti-malware testing for the purposes of single product testing, comparative detection testing, and generation of prevalence and global detection data. Specifically, issues around static and dynamic testing will be examined. Issues with additional impact, such as sample classification and false positives, will be considered – not only false identification of innocent applications as malware, but also contentious classification issues such as (1) the trapping of samples, especially corrupted or truncated honeypot and honeynet samples intended maliciously but unable to pose a direct threat to target systems, and (2) the use of such criteria as packing and obfuscation status as a primary heuristic for the identification of malware.
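As a purely illustrative aside (again, not from the paper and not ESET’s methodology), the distinction between static and dynamic testing described above can be sketched roughly like this in Python; the “scancli” scanner and the sandbox runner are hypothetical placeholders.

```python
# Rough sketch of static vs. dynamic test passes over a sample set.
# The scanner and sandbox commands are hypothetical placeholders.
import subprocess
from pathlib import Path

def static_test(scanner_cmd, sample_dir):
    """On-demand scan of inert samples: favours signatures and passive
    heuristics, since nothing is executed."""
    result = subprocess.run(scanner_cmd + [str(sample_dir)],
                            capture_output=True, text=True)
    return result.stdout  # detection report without running any sample

def dynamic_test(sandbox_cmd, sample_dir):
    """Execute each sample in an isolated sandbox/VM so that active
    heuristics, emulation and behaviour analysis can contribute."""
    reports = {}
    for sample in sorted(Path(sample_dir).iterdir()):
        run = subprocess.run(sandbox_cmd + [str(sample)],
                             capture_output=True, text=True)
        reports[sample.name] = run.stdout
    return reports
```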

EICAR execution context paper