
Abstract

Wednesday 30 September 16:00 - 16:30, Red room

Holly Stewart (Microsoft)
Peter Stelzhammer (AV-Comparatives)
Philippe Rödlach (AV-Comparatives)
Andreas Clementi (AV-Comparatives)

Most anti-malware tests count each 'miss' equally. If one sample out of 100 is missed, the score for that set is 99 percent, regardless of which sample was missed. But should all samples be treated equally? Should vendors receive a lower test score when they miss samples that have victimized more people? Should vendors receive an equal score if they miss the same number of low-prevalence samples rather than high-prevalence ones? Even if you agree with the principle that not all misses are the same, how would you factor in polymorphism, where a particular sample may impact only one victim but the malware family impacts millions? How is a sample measured if there is no record of the sample or its family in the wild at all?

In this paper, one of the leading comparative testers and other anti-malware industry leaders will take you through several prevalence-weighted models using real-world data from hundreds of millions of computers. We will show how the prevalence-weighted models compare to the standard method of scoring sample detection. In the session, we'll discuss each model's benefits, deficits, and the lessons learned along the way.
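The difference between the two scoring approaches can be sketched in a few lines of code. This is a minimal illustration of the general principle only, not one of the models from the paper; the sample names and prevalence counts are hypothetical.

```python
# Plain vs. prevalence-weighted detection scoring (illustrative sketch).
# Sample names and prevalence figures below are hypothetical, not from the paper.

def plain_score(results):
    """Every miss counts equally: detected samples / total samples."""
    return sum(1 for _, detected in results if detected) / len(results)

def weighted_score(results, prevalence):
    """Each sample is weighted by the number of machines it was observed on."""
    total = sum(prevalence[name] for name, _ in results)
    hit = sum(prevalence[name] for name, detected in results if detected)
    return hit / total

# Hypothetical test set: one widespread sample, two rare ones.
prevalence = {"widespread": 1_000_000, "rare1": 10, "rare2": 10}
results = [("widespread", False),  # the high-prevalence sample was missed
           ("rare1", True),
           ("rare2", True)]

print(f"plain:    {plain_score(results):.4f}")
print(f"weighted: {weighted_score(results, prevalence):.6f}")
```

Under the plain model the vendor scores about 67 percent for detecting two of three samples; under the weighted model the score collapses to a fraction of a percent, because the one miss accounts for nearly all of the real-world victims.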
