Before you get to the blogs further down…

[Updated 30th December 2018]

Welcome! Check out the links on the menu above to find out about Small Blue Green World. This is the gateway to the various blogs and bits and bobs that have constituted the SBGW presence on the web.

Essentially, this is a consultancy offering services to the security industry, launched by David Harley in 2006 and with (until 31st December 2018) one main customer. This page therefore hasn’t been maintained very regularly: it has no pressing commercial or advertising function, but it does include some papers and resources that may not be available elsewhere.

I’m no longer working with ESET, and am not looking for another permanent role in the security industry, but might be tempted by the occasional editing/reviewing job.

Some articles and conference papers can’t be posted on a commercial site for copyright-related reasons, so I tend to post them on this site instead. When I remember. Specifically, most of that stuff is now posted to Geek Peninsula.

AVIEN (formerly the Anti-Virus Information Exchange Network), which was run as an independent organization by myself and Andrew Lee (and before that by founder Robert Vibert), has its own blog page hosted there, but I’m no longer heavily associated with the organization except as an occasional blogger there. As I’m no longer working within the security industry, I don’t plan to continue blogging there in the foreseeable future, but there are several years’ worth of resource pages that might be useful to someone.

I ran several other specialist security blogs completely independently of ESET, and these included a blog focused on hoaxes, spam, scams and similar nuisances, and another that focused (mostly) on Apple malware: essentially, it was the reincarnation of the old Mac Virus web site originally founded by Susan Lesch, and sometimes included contributions from Old Mac Bloggit, the well-known pseudonym. Again, it’s not currently maintained.

We stopped hosting the AMTSO blog. I did, however, maintain an independent AV-testing blog/resource called, imaginatively, Anti-Malware Testing, which archives most of the articles I originally posted on the old AMTSO blog – of course, they do not represent AMTSO’s official views. I no longer blog at Infosecurity Magazine, (ISC)2 or Securiteam.

I used to flag current articles, papers, blogs and media coverage at The Geek Peninsula (most of this is also tweeted via DavidHarleyBlog/) but I was having trouble remembering to update it. I’m now using it as a repository for (most of) my papers, some of my articles, pointers to my current and past blogs, and so on.

If you find any broken links on this site, please use the contact page to let us know so we can fix them. Thank you.

David Harley
Small Blue-Green World


EICAR Performance Testing Paper

Back to ESET White Papers

This is a paper called “Real Performance?”, written by Ján Vrabec and myself and presented at the EICAR 2010 Conference in Paris in May 2010, made available by kind permission of EICAR.


The methodology and categories used in performance testing of anti-malware products and their impact on the computer remain a contentious area. While there’s plenty of information on detection testing, some of it actually useful, there is very little on performance testing. Yet, while the issues are different, sound performance testing is at least as challenging, in its own way, as detection testing. Performance testing based on assumptions that ‘one size [or methodology] fits all’, or that reflects an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly implemented detection test. There are now several sources of guidelines on how to test detection, but no authoritative information on how to test performance in the context of anti-malware evaluation. Independent bodies are working on such standards right now, but their current absence often results in the publication of inaccurate comparative test results, which do not accurately reflect the real needs of the end user and dwell on irrelevant indicators, producing potentially skewed product rankings and conclusions. Thus, the “winner” of these tests is not always the best choice for the user. For example, a testing scenario created to evaluate the performance of a consumer product should not be used for benchmarking server products.

There are, of course, examples of questionable published results where the testing body or tester seems to have been unduly influenced by the functionality of a particular vendor. However, there is also scope, as with other forms of testing, to introduce inadvertent bias into a product performance test. There are several benchmarking tools intended to evaluate the performance of hardware, but for testing software as complex as antivirus solutions, and their impact on the usability of a system, these simply aren’t precise enough. This is especially likely to cause problems when a single benchmark is used in isolation and looks at aspects of performance that may give an unfair advantage or disadvantage to specific products.

This paper aims to objectively evaluate the most common performance testing models used in anti-malware testing, such as scanning speed, memory consumption and boot speed, and to help highlight the main potential pitfalls of these testing procedures. We present recommendations on how to test objectively and how to spot a potential bias. In addition, we propose some “best-fit” testing scenarios for determining the most suitable anti-malware product according to the specific type of end user and target audience.
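As a minimal illustration of one pitfall noted above, relying on a single measurement run, the sketch below times a workload several times and reports the median and spread rather than a lone figure. This is a generic harness, not the methodology from the paper; `benchmark` and `simulated_scan` are hypothetical names, and the workload merely stands in for an on-demand scan of a file set.

```python
import statistics
import time

def benchmark(task, runs=5, warmup=1):
    """Time a callable over several runs; return (median, spread) in seconds.

    A single timing run can be skewed by caching and background activity,
    so we discard warm-up runs and report the median plus the max-min spread.
    """
    for _ in range(warmup):
        task()  # warm caches so the first measured run isn't an outlier
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings), max(timings) - min(timings)

def simulated_scan():
    # Stand-in for an on-demand scan workload (hypothetical).
    sum(i * i for i in range(100_000))

median_s, spread_s = benchmark(simulated_scan)
print(f"median {median_s:.4f}s, spread {spread_s:.4f}s")
```

A large spread relative to the median is itself a warning that the test environment, not the product, may be driving the numbers.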

Download – EICAR: Real Performance paper

David Harley 
Security Author/Consultant at Small Blue-Green World
ESET Senior Research Fellow
Mac Virus Administrator


Making Sense of Anti-Malware Comparative Testing

Back to ESET White Papers

This is an Elsevier article preprint of an article on the main issues around comparative testing of antivirus/antimalware products, made available here by permission of Elsevier.

The fully formatted, proofed and reviewed version is available from Elsevier.


If there’s a single problem illustrating the gulf between the anti-malware industry and the rest of the online world, it revolves around the difficulties and misunderstandings that plague product testing and evaluation. This article considers these issues and the initiatives taken by the anti-malware and testing sectors to resolve some of them.

Execution Context in Anti-Malware Testing

This is one of my 2009 papers, presented by Randy Abrams and myself on behalf of ESET at the EICAR 2009 Conference in Berlin.


Anti-malware testing methodology remains a contentious area because many testers are insufficiently aware of the complexities of malware and anti-malware technology. This results in the frequent publication of comparative test results that are misleading and often totally invalid because they don’t accurately reflect the detection capability of the products under test. Because many tests are based purely on static testing, where products are tested by using them to scan presumed infected objects passively, those products that use more proactive techniques such as active heuristics, emulation and sandboxing are frequently disadvantaged in such tests, even assuming that sample sets are correctly validated.
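To make the static-versus-dynamic distinction above concrete, here is a deliberately toy sketch (assumed names and data, not any real product’s engine): a static scan that passively matches byte signatures misses a sample whose maliciousness only shows in its runtime behaviour, which a dynamic (execution-based) test would catch.

```python
# Toy model of the two testing approaches; all names and data are invented.
SAMPLE = {"bytes": b"\x90\x90payload", "behaviour": ["writes_autorun_key"]}

def static_signature_scan(sample, signatures=(b"known_sig",)):
    # Static test: passively match byte patterns in the file on disk.
    return any(sig in sample["bytes"] for sig in signatures)

def dynamic_behaviour_scan(sample, suspicious=("writes_autorun_key",)):
    # Dynamic test: execute (or emulate) the sample and judge its behaviour.
    return any(b in suspicious for b in sample["behaviour"])

# A purely static test scores this sample as a miss, even though the
# product's behaviour-based layer would detect it on execution.
print(static_signature_scan(SAMPLE), dynamic_behaviour_scan(SAMPLE))
```

In a static-only comparative test, a product relying on the second technique is penalised for a detection it would in fact have made, which is the bias the paper describes.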

Recent examples of misleading published statistical data include the ranking of anti-malware products according to reports returned by multi-scanner sample submission sites, even though the better examples of such sites are clear that this is not an appropriate use of their services, and the use of similar reports to generate other statistical data such as the assumed prevalence of specific malware. These problems, especially when combined with other testing problem areas such as accurate sample validation and classification, introduce major statistical anomalies.

In this paper, it is proposed to review the most common mainstream anti-malware detection techniques (search strings and simple signatures, generic signatures, passive heuristics, active heuristics and behaviour analysis) in the context of anti-malware testing for purposes of single product testing, comparative detection testing, and generation of prevalence and global detection data. Specifically, issues around static and dynamic testing will be examined. Issues with additional impact, such as sample classification and false positives, will be considered – not only false identification of innocent applications as malware, but also contentious classification issues such as (1) the trapping of samples, especially corrupted or truncated honeypot and honeynet samples intended maliciously but unable to pose a direct threat to target systems, and (2) the use of such criteria as packing and obfuscation status as a primary heuristic for the identification of malware.

Download – EICAR execution context paper