VX Heaven


A Radical New Approach to Virus Scanning

Joe Wells
CyberSoft, Inc.
September 1999


Note to the Reader

Two things are obvious. Your time is precious to you, and this is an awfully long paper.

So I had better justify its length here and now.

Just over six years ago I started the WildList. [1] Before it existed, nobody knew exactly which viruses were actual threats. From its beginning, the WildList was a cooperative effort. It was the work of many hands - many volunteer hands. The WildList evolved. It's had problems. It's gone through changes. It's become widely known and respected. It's clearly identified the virus threat. But it's done a lot more than that.

The WildList changed the entire course of the antivirus industry.

I started the WildList for a reason. Before the WildList, developers and reviewers dealt with all the viruses they received. There was no differentiation between those that were actually spreading and those that weren't. I saw cases where products claimed to detect more viruses than their competitors, but were missing viruses that were actually spreading. I created the WildList to show developers what the real threat was, so users - so you - would be protected.

That goal has been attained. I can confidently state that nearly all antivirus products on the market today will successfully protect you from the virus threat. But that goal is only the halfway marker. Antivirus products today do protect you, but that protection is far from optimal. There is a major problem inherent to virtually all products on the market. That problem must be confronted and solved.

That means there's still another goal we need to reach. When that goal is reached you will benefit greatly. Then, and only then, will you have optimal virus protection. But, to be honest about it, I don't want to wait another six years.

So I've prepared this paper. Its purpose is to prove thoroughly, exhaustively, and irrefutably the existence and nature of that problem. It also presents a practical real-world solution. In the process, it provides substantial proof of my claims about both the problem and the solution. I've tried to foresee potential objections to my solution, so I can handle them now. And on top of it all, I have backed up the claims made in this paper by developing a product for CyberSoft that implements that solution. That too is discussed, but in less detail.

My purpose here is simple. I plan to change the entire course of the antivirus industry - again. That's why this paper is so long.

So please bear with me. Understanding this information is critical in dealing with the virus threat.

My hope is that this paper will clearly bring these issues to light - to enlighten you, the user - and to enlighten developers and reviewers as well. As you explore the depths of this paper, it might help you along the way if you keep in mind the motto of the WildList Organization International (which well expresses the motivation behind both the WildList and this paper).

Out of darkness light will shine. [2]

You Have a Problem

Reality Check

The problem has an obvious, negative impact on you. You may be aware of it, but you probably don't recognize its true nature. This paper will shed light on both the problem and its true nature.

For the sake of clarity, I must point out that we are not going to discuss the virus problem - the problem you recognize. Don't expect this paper to be about a virus problem. To the contrary, it's actually about your having an antivirus problem.

Back in 1996, at the annual NCSA virus-related conference, I presented a paper entitled "Reality Check: Stalking the Wild Virus."[3] Substantial sections of the paper focussed on the problem we will discuss here. It was not the first paper on the problem, and I am neither the only nor the first person to point out this problem.

The Reality Check paper did more than simply identify the problem. It accused the entire antivirus industry of creating and perpetuating the problem by doing you, the end user, a major disservice. The disservice involved the way in which antivirus product developers and distributors had exaggerated the actual virus threat through product marketing. The paper demonstrated how the problem began and showed why it had a major, negative impact on the public. Furthermore, the paper extended that accusation by laying part of the blame on organizations that tested, recommended, and certified antivirus products.

I must admit that both the antivirus industry and testing bodies have made some progress in the right direction since 1996. But that progress is perfunctory at best. For the most part the antivirus industry's misrepresentation has continued. Both the problem and its negative impact on you have grown steadily worse.

My 1996 paper also presented a practical solution to the problem. But, until recently, nobody had implemented the solution: no antivirus company, no antivirus test body. Recently, however, one antivirus company has started implementation of the solution.

Note. Before we continue, I feel it's only fair to tell you that the company I refer to is CyberSoft, Inc. and that the implementation involves an antivirus engine I designed and developed for them. That engine has been implemented in two CyberSoft products: VFind and Wave Antivirus. My informing you of this fact up front should help maintain a fair perspective as you weigh the evidence I present in the remainder of this paper - especially where I cite those products to illustrate implementation.

Therefore, I ask you to bear in mind that this paper is not intended to sell you on VFind or Wave. Rather, my intent here is to provide evidence to support two claims - that the problem exists, and that there is a solution. To this end I will be talking about how the solution is implemented in the CyberSoft products. These are the only products available that substantiate my reasoning.

Still, the fact that only one antivirus company has implemented the solution does raise a couple of questions. Why would only one antivirus company take steps to solve this problem? If antivirus companies have been accused of creating a problem for users and have been presented with a solution, why haven't more acted?

There are two reasons they have not acted.

Totally Radical

The first reason is that my proposal was radical.

Thus, implementing it would also be radical. Innovating radical changes in an industry means developing and releasing a transition product. Companies know that developing and marketing a transition product can be quite dangerous. Why is it dangerous?

The history of software demonstrates that transition products often don't do well in the market. Moreover, if the changes made are something the competitors' marketing departments can exploit, an entire industry may follow suit. Competitors may suddenly announce that they have been developing and are about to release something similar (which is often just a lie made to save face). In fact, there have even been cases where a secondary company falsely claimed the innovation as its own. Innovations are often introduced by smaller companies, which are more willing to take risks. So amid this marketing maelstrom, the original transition product (and even the entire company, if it's smaller) may well drown. That's why radical change is dangerous for a company.

Still, in spite of these potential problems, and even in the face of possible disaster, transition products do come out. Radical change happens and end users benefit. Companies actually do take risks for the users' sake.

It all boils down to priorities. Every company must set one top priority, and that priority represents someone's interests: either the users' or the company's. A company can't set both as equal priorities, because there will be conflicts. Where the two conflict, every company must make hard decisions. Keep in mind also that larger companies must answer to stockholders.

While antivirus companies usually claim their top priority is protecting users, is that true? If a company actually does set user interests as its top priority, development decisions will reflect this fact, especially if they're radical.

You should now understand the first reason my solution has not been implemented until just recently (and even now by only a single company).

To summarize: my solution demanded a transition product, transition products are dangerous to a company's interests, and a company's priorities determine whether it will take that risk for the users' sake.

Welcome to Ground Zero

The second reason my solution has not been implemented is that some professionals in the antivirus industry disagreed with my solution.

Right after my presentation in 1996, one colleague came up and asked if my proposal was really serious - bear in mind that this solution is indeed radical.

In response to Reality Check, a seemingly rational argument was cited to refute and thereby invalidate my solution. Indeed, it was claimed the argument showed my solution would itself be a disservice to users. On top of that, it was even predicted that implementation of my solution would prove disastrous for users.

Here, we will refer to this argument as the "zoo escape" argument.

The argument cited was nothing new and had already been shown to be fallacious. The argument was purely theoretical - there was no evidence to substantiate it. At this time, the argument still has no supportive evidence, while evidence against it has steadily increased. The probability of the Reality Check solution being wrong was miniscule to the point of irrelevance in 1996. Today, there is no doubt about its validity.

At this point, you probably don't understand this second reason entirely. Fear not. The "zoo escape" argument will be thoroughly dissected and examined below.

Road Map

To give you an overview of what follows, here's a synopsis of each section.

In the Lies, Damn Lies, and Marketing section we'll look at how the problem began and grew all out of proportion - and how both the antivirus industry and product reviewers fueled it. We'll see that evidence of the problem's existence and nature was available from the start - and how that evidence was collected, reported on, and largely ignored.

Zooicidal Tendencies will show how both antivirus developers and product reviewers are trapped in a vicious cycle that is still driving the problem. I will then outline the solution and present the case for it by presenting logical arguments backed by solid evidence.

We'll explore Bringing Balance to the Force by exhaustively analyzing the cornerstone argument presented by opponents of the solution. By applying the principles of sound reasoning, we'll crack and crumble that cornerstone.

In the section, Who Does Your Thinking, we prepare for the future by reasoning on the opposition's objections before they can voice them.

In the section, Quoth the Raven, we'll talk about how the solution might be implemented, and how it actually has been implemented.

Finally, the Out of Darkness section summarizes the evidence and conclusions presented, and demonstrates how you benefit from the information.

Lies, Damn Lies, and Marketing

Perfidious Priorities

You've been betrayed. Your interests have been ignored. You've been done a disservice and now you have a major problem. Let's look at the origin and history of your problem.

The problem began in 1991. Norton AntiVirus (NAV) version 1.0 had been released late in the previous year. It had a database of 142 viruses (with variants). [4] What followed was a marketing war. In my paper I referred to this war for market share as a numbers game, because it involved claims about the number of viruses detected.

Be aware that I am not accusing any single antivirus developer of intentionally doing users a disservice or creating the problem. In Reality Check my charges were against the antivirus industry as a whole. As a result of this war, the industry as a whole set priorities that mislead users.

Reality Check traced the early history of the numbers war as follows.

Soon after [the release of NAV], Central Point Anti-Virus (CPAV) was first released. On the box was the claim that it "Recognizes and removes more than 400 viruses." While this numbers game of claiming to detect more viruses already existed, the arms race now began in earnest.

Soon a new CPAV appeared that detected "over 600" viruses. Then NAV 1.5 appeared that detected "over 700" viruses and the CPAV that had said "600" was re-released with a sticker claiming "800", then a NAV at 1000, a CPAV at 2000, a NAV at 2500... [5]

Do you get the point?

Well, users got the point. The media got the point. Reviewers got the point. And the point they got was the point developers were preaching: "Detection rate is critical. An antivirus product must detect all known viruses."[6]

But is that true? Must an antivirus product detect all known viruses?

Well, if it is true, then antivirus products today must detect the tens of thousands of known viruses. If it is true, then the higher a product's detection rate is, the better it is for you.

But what if this claim is false? What if high detection rate is not necessary? In fact, what if it's completely worthless, even detrimental? If that were the case, high detection rates might actually be undesirable. That would mean that the higher a product's detection rate is, the worse it may be for you.

My premise both in 1996 and now is that the claim of high detection rates is false. I am saying that you have been lied to.

Again. Please don't take me wrong. I'm not saying that antivirus developers have been purposefully, or even knowingly, dishonest. But even well meaning or unintentional deception can weave a tangled web. In like manner, my "Reality Check" paper asked the question, "Is the antivirus industry trapped in a mire of its own making?" My reply was:

Yes. Trapped. And having trouble staying afloat. That mire has swallowed whole companies down into oblivion. Moreover, it has trapped, not only antivirus vendors, but also users and product reviewers. The mire was created by the antivirus industry itself, just about five years ago [in 1991]. [7]

Honestly, I can't say whether or not antivirus companies were unknowing victims of their own marketing mantra "all known viruses." I simply don't know. But I do know that the evidence against this marketing claim, the evidence in favor of my solution, has been available for some time now.

Please, consider that evidence.

Reportive Definitions

To understand what follows you need to understand the difference between two distinct groups of computer viruses (zoo viruses and wild viruses) and two types of product testing (zoo testing and WildList testing). Here we give the reportive definitions. Later on we will add a stipulative definition for "zoo viruses."[8]

Zoo viruses
These are also called in-the-zoo viruses, occasionally abbreviated as ItZ viruses. This group of viruses exists only in virus collections. That is, they are viruses that exist and are known, but they are not in the wild. They are not spreading on people's computers; rather, they are stored in zoos kept by developers, researchers, testers, hobbyists, and the virus writers themselves. By definition, zoo viruses are not in the wild. Therefore, zoo viruses pose no proven threat to you.
Wild viruses
These are also called in-the-wild viruses, which is often abbreviated as ItW viruses. This group of viruses constitutes those viruses that have been reported and verified as actually infecting users' computers. A virus that actually infected someone's computer by spreading from someone else's computer is considered a wild virus. So wild viruses are the ones spreading in the real world. Wild viruses are, to some degree, a real threat to you.
Zoo testing
This involves testing a product's detection rate against zoo viruses. Most testing bodies today place less emphasis on zoo testing than on WildList testing since zoo viruses are not a threat to users.
WildList testing
This involves testing a product's detection rate against wild viruses. Most testing bodies today treat WildList testing as a critical measure of a product's worth. Obviously this is a good thing. Products that do poorly in WildList testing are a poor shield against the clear and present virus danger.

The "WildList" that lends its name to this form of testing is a monthly list of viruses that have been verified as being in the wild. It is also widely known as the "Joe Wells WildList." (That's because I was the person who created it back in 1993.) The WildList is now maintained by the WildList Organization International (WLO), of which I am CEO. The WildList exists through the cooperative effort of many volunteers. Over 50 reporters from over 40 countries work together to identify and report exactly which viruses are spreading. This group represents all the major antivirus companies and many smaller companies as well.

In the remainder of this paper, unless otherwise stated, virus scanning specifically refers to methods of detecting known viruses - as, for example, by using signatures.
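To make that definition concrete, here is a minimal sketch of what signature-based known-virus scanning amounts to: searching files for byte patterns known to belong to particular viruses. The signature bytes and virus names below are fabricated for illustration only; real scanners use far more sophisticated matching (wildcards, entry-point tracing, emulation for polymorphics).

```python
# Hypothetical sketch of signature-based known-virus scanning.
# The patterns and names are made up; they are not real virus signatures.

SIGNATURES = {
    b"\xde\xad\xbe\xef\x13\x37": "Example.Virus.A",  # fictitious pattern
    b"\x90\x90\xcd\x21\xb4\x4c": "Example.Virus.B",  # fictitious pattern
}

def scan_bytes(data: bytes) -> list:
    """Return the names of all known signatures found in the data."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

def scan_file(path: str) -> list:
    """Scan a single file on disk against the signature database."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())
```

The key property for this paper's argument: such a scanner can only ever find viruses whose signatures are already in its database, which is why each new zoo virus added to a reviewer's test set forces another database (or engine) update.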

With the above definitions in mind, consider the following:

The Odds are in Your Favor

Since early in the 1990's it has been recognized that zoo viruses greatly outnumber wild viruses. For example, in 1992 IBM researchers reported:

During the last two years, the number of viruses that we have seen in real incidents has consistently been in the range of 15% to 20% of the total number in our collection, and a majority of these have only been seen once or twice. [9]

In addition, the actual percentage of "all known viruses" that are wild viruses has been steadily decreasing. In "Reality Check" I reported the situation as follows:

On February 12, 1996 S&S International's web site reported that its researchers had identified 8056 viruses. That same day I released the February 1996 WildList, a cumulative report of viruses currently verified in the wild by top virus professionals. [10] That report showed 184 viruses as verified as currently in the wild by two or more participants.

To further illustrate this, note that S&S International reported for calendar 1995 that they had verified only 72 viruses in the wild and received unverified reports for 23 more. Moreover, only 25 of the verified viruses had been seen more than two times. [11]

Likewise, IBM's 1995 summary shows only 96 viruses reported, and only 48 of those more than twice. [12]

The WildList's 184 viruses represent only about two percent of all known viruses. S&S's 97 reported viruses and IBM's 96 each constitute only about one percent of all known viruses.

Such a small amount, a mere handful of viruses. Yet these few, in reality, constitute the entire virus threat. [13]

Today some antivirus products claim to detect close to 50,000 viruses. Yet the viruses reported as being even a minor threat still number under 250 - that is, under one half of one percent.
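The arithmetic behind those percentages is worth checking for yourself. This short sketch uses only the figures already quoted above (the 1999 numbers are the paper's approximations, not exact counts):

```python
# Wild viruses as a share of all known viruses, using the
# figures cited in the text (1999 values are approximate).

known_1996 = 8056     # S&S International's known-virus count, Feb 1996
wild_1996 = 184       # WildList viruses verified in the wild, Feb 1996

known_1999 = 50_000   # approximate detection claims in 1999
wild_1999 = 250       # upper bound on viruses reported as even a minor threat

pct_1996 = 100 * wild_1996 / known_1996
pct_1999 = 100 * wild_1999 / known_1999

print(f"1996: about {pct_1996:.1f}% of known viruses were in the wild")
print(f"1999: under {pct_1999:.1f}% of known viruses were in the wild")
```

The trend is the point: the wild share fell from roughly two percent to under half a percent in three years, even as the wild virus count itself grew.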

So historically, what percentage of all known viruses have been an actual threat?

Does this mean the virus problem is going away?

Of course not. The virus problem has been getting progressively worse. What it does mean is that the number of "all known viruses" is far outstripping the number of wild viruses. It means the increase is almost entirely in zoo viruses. But, even though the number of zoo viruses has increased, the threat from them has been reduced.

Jurassic Zoo Theory

During my time working in, and later running, the virus analysis lab at Symantec's Peter Norton Group, I noticed a trend in the way the prevalence of different virus types changed. By the time I had left Symantec and joined IBM, I had developed a theory as to why the changes had occurred.

One day, when I was working with the IBM antivirus team at Thomas J. Watson Research, we were discussing a chart of virus type prevalence over several years. At a point in the chart it was obvious that DOS file viruses began to drop off and boot viruses continued to rise. Since I was already aware of this trend and had a theory, I presented it. I pointed out that the time frame involved roughly coincided with the market growth of Windows 3.1. I suggested that file viruses didn't do well in the Windows environment, but that boot viruses did.

Not long after, Steve White and Jeff Kephart presented the paper "The Changing Ecology of Computer Viruses" at the 1996 Virus Bulletin Conference. Concerning the evidence in that chart they stated:

Then something curious happened. Boot viruses continued to rise as you would expect, but file viruses experienced a significant drop. Something, somewhere was decimating their population.

It took several years before we understood why this was happening. During the early 1990's Microsoft's Windows 3.1 became a common operating system. And Windows 3.1 has an interesting property: if your system is infected with a typical file virus, Windows 3.1 becomes fragile and won't start. The world's computing environment quickly became a very caustic place for file viruses. As their death rate skyrocketed, their population dwindled. [14]

Contrary to popular expectations, changes in the world's computing environment have had a huge effect on which viruses spread, and which go extinct. In the decade since computer viruses were first written, we have seen large swings in their population.

File viruses were decimated by Windows 3.1, an environment that was very hostile to these viruses. [15]

While it's true that this was originally my theory, that's all it was, a theory. Full credit for proving it goes to the IBM team. As that paper's section for acknowledgments indicates, "We also thank Joe Wells for his suggestion, later verified experimentally, that most boot viruses can spread from within Windows, while most file viruses cannot." [Italics mine.]

What I theorized and IBM proved about trends in DOS file virus extinction affects you directly. Members of an endangered species are becoming increasingly rare in the wild. No wonder nearly all of them are found only in zoos - they simply can't cut it anymore. Who's afraid of dinosaurs?

Stuck in the Scan Age

Another consideration has direct bearing on your problem. For several years now, most antivirus products have been shipped with technologies that greatly reduce the role played by known virus scanning.

In 1994, two years before the "Reality Check" paper, I was interviewed by Virus Bulletin. In that interview I pointed to the limited value of scanning for known viruses.

Most anti-virus reviewers today are "stuck in the scan age," according to Wells, and have no concept of how to test and review integrity systems. "So they keep feeding their readers the lie that detection rate is everything."

He believes that integrity systems will be the way forward into the next century, although an anti-virus product should at the very least know all currently in-the-wild viruses. It should be able to clean up a system, and then install a good integrity management system.

"After installation," he observed, "the combination real-time and interactive integrity systems can handle the new viruses that appear." He believes that anti-virus products will develop along both generic and specific lines, but with virus-specific detection being crucial only for installing a more intelligent system. [16]

When that article appeared, Richard Ford was the editor of Virus Bulletin. In that role he oversaw a number of comparative reviews that included zoo testing. In 1996 (at which time he was no longer editor) he presented a paper at the Virus Bulletin Conference. The paper was entitled "Certification of Anti-Virus Software: Real World Trends and Techniques." In that paper he addresses several issues, including two we are addressing here - that products are more than just scanners, and that including zoo viruses in testing is unnecessary.

Concerning the issue of products with integrity checking he points out:

The purpose of a review of anti-virus products is not to determine how well the scanner component of a product detects viruses... The purpose of a review of anti-virus products is to determine how well a product is suited to the job of preventing the damage caused by viruses, without interfering with the normal operation of the computer. Put simply, by concentrating on the detection qualities of scanners alone, we are not necessarily measuring the right thing. [17]

Then, concerning the issue of zoo virus detection, he focuses on the "myth" that a product is good because it is good at detecting obscure polymorphic viruses.

In a review of products, the reader often anxiously awaits the detection results on the "polymorphic" test-set. Here, we are told, is the test set which separates the men from the boys. Once again, I disagree strongly with this mindset - despite the fact that as editor of Virus Bulletin, I myself commissioned and edited several such reviews... The problem lies in what this test is supposed to measure in the real world. [18]

Concerning these two issues he draws the following conclusions:

In a well-formed certification process, the ability of the entire product to protect the machine from disruption by a virus should be measured. Thus, a review methodology must be suitably constructed to allow for the inclusion of tests which measure the effectiveness of alternate approaches to virus protection. This will prevent the current trend of scanner-centric reviews from excluding possibly new technologies from developing.

Zoo detection, in particular a large polymorphic library, is not required for a good certification scheme. Rather, a threat library which determines whether the product is capable of providing protection from all types of self-replicating code should be used. [19]

Today, most antivirus products have some form of integrity checking system. Such systems have a distinct advantage over virus-specific scanners. Integrity systems look for unknown viruses. Scanners look for known viruses. To the integrity system, all viruses are unknown, including those known to the scanner. Therefore, viruses handled by a scanner are just a subset of those handled by the integrity system.

When an integrity system is the main defense perimeter, the scanner's role is reduced. The scanner remains critical in only four areas.

  1. On installation, before the integrity defense perimeter is established.
  2. At entry points in the perimeter (wherever new files arrive).
  3. For objects that don't fall within the perimeter (e.g. floppies and macro virus targets).
  4. On updates, to perform a single check on objects within the perimeter.

Therefore, in more intelligent antivirus systems, the importance of virus-specific detection - and thus of known-virus scanning - is reduced even further. Since the viruses dealt with at these weak links in the perimeter will virtually always be known wild viruses, this approach to detection further reduces the nano-potential of falling victim to a zoo virus.
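The integrity-system idea described above can be sketched very simply: record a baseline of file fingerprints, then flag anything modified, new, or missing since the baseline was taken. This is a minimal illustration, not how any particular product works; real systems add secure baseline storage, real-time hooks, and repair logic.

```python
# Minimal sketch of the integrity-checking idea: a baseline of
# cryptographic hashes, checked against the current file set.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def take_baseline(files: dict) -> dict:
    """files maps path -> contents; returns path -> hash."""
    return {path: fingerprint(data) for path, data in files.items()}

def check_integrity(baseline: dict, files: dict) -> dict:
    """Compare the current state against the baseline."""
    report = {"modified": [], "new": [], "missing": []}
    for path, data in files.items():
        if path not in baseline:
            report["new"].append(path)          # arrived after the baseline
        elif fingerprint(data) != baseline[path]:
            report["modified"].append(path)     # changed since the baseline
    report["missing"] = [p for p in baseline if p not in files]
    return report
```

Note the property the paper relies on: the checker needs no virus database at all. Any infection that modifies a file inside the perimeter is flagged, whether the virus is known or not, which is why known-virus scanning matters mainly at the perimeter's entry points.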

So why are we discussing zoo viruses if they're not a threat? What's the point?

The point is this: There are tens of thousands of viruses you'll never get. There should be no reason for us to discuss them. But we must discuss them. Because, whether you realize it or not, those tens of thousands of viruses do affect you. They do have a direct impact on you. And their effect is detrimental.

Zooicidal Tendencies

Welcome to Jurassic Zoo

Important Note: Let's make one thing perfectly clear. In this context I will be using the term zoo in a specific way. This is the stipulative definition [20] of the term that limits its usage. Herein zoo, DOS zoo, and old zoo are specifically defined as "all the older DOS file infecting viruses that have never been reported in the wild." This definition excludes other zoo viruses - boot viruses, macro viruses, and viruses infecting Windows executables - which have never been reported in the wild. So, by extension, zoo testing here means testing against these older viruses.

The old zoo represents tens of thousands of viruses, many of which don't even work. The probability of your getting any one of these is microscopic, virtually zero. These are the viruses that researchers don't even bother with. The only time they get any attention is when they appear as a false positive. That is, when an innocent file is wrongly identified as one of these zoo viruses. In fact, most false positives come from these viruses, especially the older polymorphic-engine viruses and those written in high level languages (whose names start with HLL).

Most antivirus products detect these viruses.

Why? These viruses aren't a threat. They will never be found on your system or network. So why do antivirus products detect them?

Why do you think they detect them? To protect you?

From what?

Let me shed some light on this.

I recently exchanged email with a reviewer. Looking at his results it was obvious that he had some invalid samples in his test set (e.g. none of the products got Michelangelo!). He said some were indeed invalid but still wondered why one product got the lowest score. My reply went something like this:

There are tens of thousands of viruses that users have never and will never actually get. We call them zoo viruses, because they are not "in the Wild". They are not a threat to users. How should antivirus companies deal with these? Well, that depends on the priorities of the antivirus company.

All antivirus companies know that there are tens of thousands of viruses that no user will ever get, and their scanners really don't need to detect these. In fact, several products actually don't look for these in their real-time scanners. They know there is no need to. So why do they leave them in their main scanner?

It's simple logic:

Antivirus companies want to sell product. Reviewers test against zoo viruses. Users buy based on reviews. So scanners detect zoo viruses so they don't look bad and lose sales. [21]

Ok. I oversimplified the issue.

So let's be realistic. Antivirus companies are not purely mercenary. They care about users. But, whether making money is their topmost priority or not, they must do so. If they didn't make money, many of my friends would be unemployed. I'd be unemployed.

In truth, product developers are not entirely to blame. Developers are often at the mercy of product reviewers. It is true that most reviewers have long since dumped the "all known viruses" approach and focussed on wild viruses, but they are still doing old zoo testing. They are still citing high zoo rates to recommend products. This has not perceivably changed since 1996 when I said:

Product developers have often thrown away time and money to implement detection for zoo viruses (like DSCE.Demo, Cruncher and Uruguay) simply because reviewers have them in their collections. In fact, reviewers tend to force developers to respond by proclaiming that, if a product doesn't detect some new advanced polymorphic virus, then it must have an incompetent research department.

Interestingly, I was recently taken aback when one reviewer told me why he includes such zoo viruses. He said that this was the only way he could differentiate products. It was the only way he could rate them. I guess that, if he used only wild viruses, he'd have to give all the products the Editor's Pick Award. [22]

This vicious cycle has continued, perpetuated by both product developers and reviewers. It has even escalated. Reviewers still slam products for low zoo detection rates. So developers still scramble to get zoo viruses into their products. Every time a reviewer adds a new, obscure polymorphic virus to their test set, developers still scramble to get a copy of the virus, replicate it and get it into their product. This may even mean adding new code. If it does, then a product upgrade, instead of a signature update, is required.

In 1996 the Reality Check paper proposed a solution. This paper again proposes that solution and expands it. In 1996 I stated:

In the real world, zoo viruses are not a problem. Wild viruses are.

So, there's no compelling reason for antivirus products to detect zoo viruses. [23]

Now, I expand that statement.

In the real world, zoo viruses are not a problem. Wild viruses are.

Thus, there is no compelling reason for antivirus products to detect the former group. To the contrary, there are compelling reasons to remove zoo viruses from both products and testing. It can be demonstrated that high zoo detection rates are more than simply not beneficial; they are in fact detrimental.

Virus zoos are your problem. They are a very real and present problem.

Not the viruses themselves, but the unnecessary overhead they add to your virus protection. Therefore, removing old DOS zoo viruses from both products and from testing is the solution to your problem.

I hereby claim that removing old DOS zoo viruses is a reasonable, rational, logical solution to a very real and present problem that has a major, negative impact on you directly.

That said, let's look at the evidence that supports this claim.

Ladies and Gentlemen of the Jury

Sadly, there are virus "experts" who think you're a moron. They still chant the "all known viruses" mantra. They tell you to accept it because they're the experts and you're not. They feel no need to offer evidence, because you're not qualified to evaluate it anyway.

Do you agree?

I certainly don't! That's utter nonsense. Many professional people, outside the antivirus field, have been dealing with viruses for years. I know one manager who's been at it since 1988. That's a year longer than I have. Corporate professionals like Christine Orshesky [24] (aka Christine Trently), Mike Lambert [25], and others have presented brilliant, landmark papers on the virus problem. But even if you haven't dealt with viruses at all, you don't have to be an expert to intelligently evaluate evidence.

Let me illustrate. Suppose you served on a jury. The trial involves complex medical issues. Would you have to be a neurosurgeon to weigh the evidence presented? Of course not. You would listen to the evidence presented, and evaluate it fairly. You might have to ask for clarifications (I certainly would.), but that does not make you unqualified.

In addition, the evidence presented here is not technical in nature. It doesn't involve undocumented CPU instructions, winsock patching, or oligomorphism. It involves common sense and simple arguments.

Not arguments in the yelling and name-calling sense. Logical arguments, defined thus:

"Arguments are the instruments we use in rational persuasion. Whenever we want to convince someone to accept a position we consider correct, we present arguments in its favor." [26]

And again:

An argument is a pair of things:

A set of sentences, the premises.

A sentence, the conclusion. [27]

Arguments like this:

Premise 1: Zoo detection provides no substantial increase in virus protection.

Premise 2: Zoo detection has a substantial, negative impact on antivirus performance.

Premise 3: Users benefit from optimal protection and performance.

Conclusion: Therefore, removing zoo detection benefits users.

Now reason on this: Suppose a developer removed 30,000 or so old zoo viruses from their product. That is, they remove most of the viruses, but only those that aren't a threat.

They do this because each zoo virus unnecessarily increases the product's size. Each one slows the product down a mite. Each one is a false alarm waiting to happen. What would the result be?

Would the product be faster? Would the product be smaller? Would the product have fewer potential false alarms? Would the update downloads be quicker? Would the user still be fully protected from the "real" virus threat?

As you can see, there's no need for arcane knowledge about viruses or antivirus techniques. Common sense alone is enough to answer yes to each of these questions.
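The overhead reasoning above can be sketched with a toy model. All the numbers below (signature counts, bytes per signature) are hypothetical, and real scanners use far smarter matching than a linear pass; the point is only that database size and per-file work grow with every signature a product carries.

```python
# Toy model of signature-scanner overhead. The figures are illustrative
# assumptions, not measurements from any real product.

def scan_cost(num_signatures: int, bytes_per_sig: int = 24,
              checks_per_file: int = 1) -> dict:
    """Estimate database size and per-file comparisons for a naive scanner."""
    return {
        "db_bytes": num_signatures * bytes_per_sig,
        "comparisons_per_file": num_signatures * checks_per_file,
    }

full_zoo = scan_cost(45_000)    # wild viruses plus ~30,000 old zoo viruses
wild_only = scan_cost(15_000)   # wild threats only

# Bytes saved in every signature-update download by dropping the zoo.
print(full_zoo["db_bytes"] - wild_only["db_bytes"])  # → 720000
```

Whatever the real constants are, the savings in size, speed, and download time scale directly with the number of signatures removed, while detection of the actual threat is untouched.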

Now ask yourself. What antivirus expert would answer no to these questions? But, if they did answer yes, how would they justify zoo detection in their products? On the other hand, if they answered no, you may well conclude that they are the morons.

Now consider the issue from a different angle. What is the impact of high zoo detection rates on the antivirus product developer?

For developers, increasing zoo detection rate adds the following:

Zero additional real-world protection: it's a waste of time.

Detection for thousands of viruses that are no threat to you.

Lots of unnecessary overhead that reduces speed.

Lots of unnecessary overhead that increases size.

Lots of wasted man-hours for analysis of junk viruses.

Lots of wasted man-hours coding for esoteric viruses that don't even work.

Lots of wasted man-hours in repetitive testing and quality assurance.

Lots of potential false positives for HLL and old polymorphic viruses.

Lots of vestigial virus signatures that increase download times.

The claim made in antivirus marketing and product reviews is that a high zoo detection rate makes a product better. Exactly which item in that list makes it better? I can't find one.

When we shine the light of coherent reasoning on it, the idea that a high zoo detection rate makes a product better suddenly takes on a new appearance. The idea is bizarre. The unnecessary overhead of bloat and slowness makes software worse, not better. Neither a decrease in performance nor an increase in download times is something to brag about. Developers don't print "zero increase in protection" on their box. When a developer squanders all that time and money to make their product worse, who exactly are they benefiting? Not you. Not themselves. No one.

It is therefore quite reasonable to say that a high zoo detection rate makes a product worse, not better. To claim otherwise is ludicrous.

Now, imagine a magazine review. All the contenders score 100 percent on the wild test. The winner wins on its high zoo detection rate. Given this scenario, the winner may well be the worst product, and the product with the lowest zoo detection score may actually be your best choice.

Let's summarize all of this as a logical presentation of the evidence.

My premises are:

Premise 1: Zoo detection provides no substantial increase in virus protection.

Premise 2: Zoo detection has a substantial, negative impact on antivirus performance.

Premise 3: Users benefit from optimal protection and performance.

And my conclusion is:

Conclusion: Therefore, removing zoo detection benefits users.

Based on that conclusion, it follows that we can make the statement:

The solution to your problem is to remove old zoo viruses from antivirus products. By extension, it is also in your best interest to remove old zoo virus detection from product testing and certification.

"Extraordinary!" proclaimed Watson.

"No. Elementary," replied Holmes. [28]

Elementary indeed. Perhaps too elementary.

The above statement tends to oversimplify the real-world solution. I present it here as simply a foundation upon which to build. I say this because it sounds like I'm claiming that absolutely no zoo viruses should ever be detected. That's not what I'm saying. It's not that simple. Later on I'll point this out again when we discuss the important issue of threat-type detection.

The Solution to Your Problem

Simply stated, the solution to your problem is:

Remove unnecessary viruses from antivirus products and product testing. This, in turn, will remove unnecessary overhead without diminishing protection from the actual virus threat. Therefore, users will have optimal antivirus protection.

At this point, certain antivirus experts will leap up and loudly object. They will claim that this solution is ludicrous - that it would put end users in grave danger. To support their objection, they will cite a line of reasoning that is considered the strongest argument in favor of zoo detection. In the introduction I called it the "zoo escape" argument.

To be fair, I must address this counter argument. Indeed, my favorite book on argumentation, Attacking Faulty Reasoning, points out that rebuttal is an often ignored factor that makes an argument sound. The author calls it the "Rebuttal Criterion" and states:

A good argument should also provide an effective rebuttal to the strongest arguments against one's conclusions. [29]

With that in mind, we will now examine the "zoo escape" argument in detail.

Bringing Balance to the Force

Fallacy, noun, [L. fallere to deceive] any error in reasoning.

If you're familiar with traditional logic, you've probably noticed how heavily I rely on the use of logical terms and forms. Ok, I admit it. I've been enamored with the art and science of critical reasoning for some 25 years - both as a hobby and professionally in papers, as a public speaker, as an investigative writer, as a research editor, and as a senior editor. My main interest has been in critical presentation, but especially in the area of attacking fallacies. (This should help you understand why this section is rather long.)

I've been meaning to write a book on the use of argumentation and refutation in papers and public speaking. Like other books on logical presentation, it would have a chapter on fallacies. That chapter would be the easiest to write when it comes to examples. All I'd have to do is fill it with arguments in favor of zoo testing. The following may well become my chapter on fallacies.

To begin with, a fallacy is a line of reasoning that masquerades as a sound argument. So to recognize a fallacy, you must understand the nature of a sound argument.

A sound argument has two distinct characteristics. It must be valid, and its premises must be true.

Here a valid argument is defined:

An argument is valid if and only if it is necessary that if all its premises are true, its conclusion is true. [30]

In other words, in a valid argument, if the premises are true then the conclusion has to be true. So if an example can be given where the premises are true, but the conclusion is false, then the argument is not valid.

Even so, an argument that is valid may not be sound. It can be valid in form, but still have false premises. [31] So soundness requires more than just validity. It also requires truth.

An argument is sound if and only if it is valid and all its premises are true. [32]

So in a sound argument the conclusion cannot, under any circumstances, be false. For this reason, a sound argument constitutes evidence that is considered to be logical proof.
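For simple propositional arguments, the validity test just described (no case where all the premises are true and the conclusion is false) can be checked mechanically by enumerating every truth assignment. This is a minimal illustrative sketch, not anything from the paper itself; the helper name and the two example arguments are my own.

```python
from itertools import product

def is_valid(premises, conclusion, variables):
    """An argument is valid iff no assignment makes every premise
    true while the conclusion is false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample: true premises, false conclusion
    return True

# Modus ponens ("if p then q; p; therefore q") is valid.
valid = is_valid(
    premises=[lambda e: (not e["p"]) or e["q"], lambda e: e["p"]],
    conclusion=lambda e: e["q"],
    variables=["p", "q"],
)

# Affirming the consequent ("if p then q; q; therefore p") is invalid.
invalid = is_valid(
    premises=[lambda e: (not e["p"]) or e["q"], lambda e: e["q"]],
    conclusion=lambda e: e["p"],
    variables=["p", "q"],
)

print(valid, invalid)  # → True False
```

Note that the checker tests only validity of form; soundness additionally requires that the premises are actually true, which no truth table can decide.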

When presented to support a claim, a fallacious argument is unsound for one of the following reasons.

  1. The premises are not relevant to the conclusion. (invalid)
  2. The premises are relevant, but are inadequate to support the conclusion. (invalid)
  3. The premises make a wrong assumption. They simply are not true. (untrue) [33]

Now, armed with a knowledge of what constitutes sound reasoning, we will analyze the most common proof presented in support of zoo testing.

The Power of the Dark Side

The "zoo escape" argument is the most common argument I've heard from proponents of zoo detection. It goes something like this:

Thousands of old zoo viruses are available on the Internet.

Someone could download one and release it into the wild.

There is no way to predict exactly which virus might be released.

Therefore, scanners must detect all these viruses to protect users.

This appears to be a sound, logical argument. The premises (the first three lines) appear to be true. The conclusion (the last line) seems to follow.

Therefore, if it can be demonstrated that the above argument is valid, and if it can be demonstrated that the premises are true, then the conclusion has been proven true. It would then follow that this argument provides sound evidence supporting zoo virus detection.

On the other hand, if the argument can be demonstrated as invalid, untrue, or both, then the conclusion is not supported. In which case the argument provides absolutely no support for zoo virus detection.

The (not so) Great Escape

Let me begin by placing the truth directly before you.

The "zoo escape" argument is specious; that is, it is a falsehood that has a deceptive appearance of truth.

To substantiate this claim, please consider the following evidence.

Notice first that this argument is theoretical. It speculates that a virus could be downloaded and might be released. It provides no evidence that this has happened or will happen. Yet even though the premises are totally conjectural, the conclusion is not: "could" and "might" suddenly take on substance to declare what must be done.

Note that the argument is quite similar to what is called the slippery slope fallacy. This fallacy is defined thus:

The mistaken idea behind the slippery slope fallacy is that when there is little or no significant difference between adjacent points on a continuum, then there is no important difference between even widely separated points on the continuum. [34]

The effect is that minuscule pieces of evidence are used to obscure the overall claim. "One virus could be downloaded" takes a quantum leap to justify detecting all viruses. And in this case, the pieces of evidence are nonexistent.

Therefore the "zoo escape" argument is not valid because the premises do not support the conclusion, let alone guarantee it.

Next notice that the argument also represents the false dilemma fallacy. This fallacy is defined thus:

The false dilemma fallacy consists of giving arguments that present alternatives as exhaustive and exclusive when they are not. [35]

By saying "all" zoo viruses must be detected, the argument allows for no latitude. Of course, if I were to say that "only" wild viruses should be detected, I too would be presenting a false dilemma. The real-world solution is not that clear cut.

As evidence that "all zoo viruses must be detected" is a fallacy, I present the fact that many zoo viruses are completely non-functional. They cannot spread in the real world. Yet antivirus products detect these as viruses.

For example, many current antivirus products detect certain non-functional macro "viruses" with names that end with the term "intended." That means it was intended by the programmer to be a virus, but doesn't actually work; it doesn't actually infect anything.

Now ask yourself: can a virus that cannot infect anything ever escape and spread?

Bear in mind that these intended viruses actually are in zoos. They actually are in many products. They actually are used in product reviews.

How does this fact bear on the "zoo escape" argument? It demonstrates unsoundness. These "intended" viruses alone demonstrate the premises to be false, because they are in zoos on the Internet, but cannot escape and spread.

Consider also that this is an old argument, yet it remains conjectural. Why? Because over the years what it says could happen never has. I've been tracking viruses on the WildList since 1993. In all that time not one of those old DOS viruses has suddenly appeared on the WildList. It just hasn't happened. So there is still no actual evidence to support this argument.

So what does the actual, substantial, non-conjectural evidence show?

Wild DOS viruses are becoming increasingly rare.

Moreover, none of the tens of thousands of old zoo viruses have escaped.

So even the theoretical chance of an old zoo virus escaping is diminishing.

Once again, this argument is proven invalid because a case can be cited that invalidates it. Namely, the case in which no zoo virus ever escapes and spreads. And don't you think it's odd that the exception case is the only case that has been demonstrated?

What do we see in the light of this standard argument analysis? We see that the evidence shows the "zoo escape" argument to be invalid in form, lacking in soundness, and fallacious in nature.

But what if it's Gray?

At this point, some may object that I've overstated the claims made by proponents of the "zoo escape" argument. That I'm the one presenting it as a false dilemma. That I'm painting it black and white when it isn't really that clear cut.

When faced with refutation, a person may state that they've been misunderstood and that they need to clarify their actual position. In this case, some may back off from the "zoo escape" argument as I've presented it. Some developers may agree that intended viruses don't have to be in products and that these should be excluded from testing and certification.

In other words, they may redefine the "zoo escape" argument by emphasizing its theoretical nature. Backing off from presenting it as deductive proof, they instead present it as a non-deductive argument. That is, an argument not meant to "prove" a conclusion in the logical sense, but intended only to show that the conclusion is likely rather than inevitable. [36]

Fair enough. Let's assume the "zoo escape" argument is actually non-deductive. Would this change things? Would a non-deductive form of the argument cast doubt on the evidence?

Before answering, let's go one step further. Could the argument be redefined again and again until it finally does invalidate the evidence?

Fear not, brave soul. Even multi-headed, polymorphic myths may be slain by the skillful wielding of simple logic.

Point of fact: Even though non-deductive arguments are not intended to be valid, they can still be evaluated logically, and it can be determined whether or not they are successful. Note the following:

We say that an argument is nondeductively successful if it is nondeductive and its premises make its conclusion more likely than not. Of course, this is a minimal criterion for success. We prefer that premises make conclusions more than just "more likely than not." [37]

With that in mind I ask: Does redefining the "zoo escape" argument to be non-deductive meet the minimal criterion for success? Does rewording it make it at least "more likely than not" that a zoo virus will escape and spread?

No. It does not. Because there is still no evidence to support such a claim. No old zoo virus has ever escaped and the possibility that it even could is diminishing. Whether your prediction of doom is dogmatic or a dodge, dinosaurs don't eat people.

Thus, even non-deductive arguments need some degree of support. In fact, the most common form of non-deductive argument involves numbers: statistical data. That means conclusions must be based on some measurable probability. Consider this standard non-deductive argument:

Ninety percent of Americans have cable TV.

Joe Wells is an American.

Therefore, it is more likely than not that Joe Wells has cable TV.

This non-deductive argument provides statistical evidence for its conclusion. It does not however prove the conclusion. I don't have cable TV. I haven't had it for years. But the argument is still successful, because there is evidence that most people do have cable.
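One crude way to put a number on "more likely than not" from historical data is Laplace's rule of succession, which estimates a probability from observed successes and trials. This is my own illustration, not the paper's method, and treating each year of WildList tracking as one trial is purely an assumption for the sake of example.

```python
def laplace_estimate(successes: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability = (s + 1) / (n + 2)."""
    return (successes + 1) / (trials + 2)

# Zero observed zoo escapes over, say, six years of tracking
# (one trial per year -- an illustrative assumption only).
p = laplace_estimate(0, 6)
print(round(p, 3))  # → 0.125
```

Even this deliberately generous estimator puts the chance well below "more likely than not," and it only shrinks as escape-free years accumulate.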

Presenting the "zoo escape" argument in a similar fashion makes this point clearer.

Thousands of old zoo viruses are available on the Internet.

None of these viruses have escaped and spread.

Therefore, more likely than not, one will escape and spread.

Deductive proofs aside, is that reasonable? Does it make any sense at all? Of course it doesn't, any more than saying:

No American has ever been devoured by a raptor.

Joe Wells is an American.

Therefore, it is more likely than not that Joe Wells was devoured by a raptor.

Before we move on, I should point something out about this idea of redefining, or otherwise "clarifying" an argument. This kind of "clarification" is often done during argumentation to present a moving target. Such maneuvering constitutes another fallacy. It's called the definitional dodge fallacy, which is explained thus:

The definitional dodge consists of redefining a crucial term in a claim to avoid acknowledging a counterexample that would falsify the claim. [38]

So then, where has our analysis of the "zoo escape" argument led us? Speaking for myself, the evidence leads me to state the following conclusion.

Joe's Conclusion

To my knowledge, no old zoo virus has ever escaped and appeared in the wild. Also, the chances of it ever happening are shrinking. Therefore, none of these viruses can be considered a real-world threat. It follows, then, that anyone clinging to the "zoo escape" argument, or any variation of it, must fall into one of three categories:

  1. Those unaware of the substantial evidence against it.
  2. Those aware of the evidence, but unaware of its implications.
  3. Those aware of both the evidence and implications, who have chosen to ignore both.

Persons in the third category have made a choice. But for what possible reason? No one has demonstrated any "good" reason for such a choice. I would hope it's because they simply haven't been convinced and still believe there is some good reason. But if they don't have a "good" reason, what kind of reason do they have?

I must admit that, after dwelling within the antivirus industry for all these years, the words attributed to J. Pierpont Morgan come to mind.

"A man always has two reasons for what he does: a good one, and the real one."

Selective Use of Evidence and Hasty Generalization

Having reached "Joe's conclusion" brings us to the next fallacy. This fallacy involves selective use of evidence. This type of fallacious reasoning is common in pseudo-scientific research. The difference between valid research and pseudo-research may be shown this way:

The normal flow of valid, investigative research is to (a) collect evidence, then (b) evaluate the evidence, and then (c) formulate a theory that explains the information.

The normal flow of pseudo-research is to (a) formulate a theory, then (b) seek out (or possibly fabricate) bits of evidence that supports the theory, all the while (c) ignoring the mass of evidence that invalidates the theory.

Such selective use of evidence is done intentionally. When it is done unintentionally it represents the hasty generalization fallacy (which is explained next). [39] Amazingly, selective use of evidence (intentional or not) has often established a "fact" that can go unchallenged for years, and even be widely cited after being disproved by analysis of all the available evidence. [40]

Be Warned. This fallacy may well surface in favor of zoo detection as follows:

Somewhere out there, some antivirus researcher will read "Joe's conclusion" above and zealously start looking for an example of an old zoo virus that has appeared and spread. Who knows, they may find one, or even more, that have actually escaped. So, they may publish their results as proof that I'm wrong. They may put forth their evidence to invalidate my arguments and solution. They may even claim that their evidence proves the "zoo escape" argument to be sound.

Since this could conceivably happen, let us assume for a moment that it will happen. This way I can go ahead and respond to their proof here and now.

"Ok. I was wrong." I'll reply. "But that still doesn't prove your argument. If one or more did escape then common sense dictates that these are rare exceptions. If they weren't exceptions then zoo viruses would be escaping all the time. But that's not what history shows."

The point is this. Even if examples did exist, it's still a fallacy. Proponents of zoo testing would be pointing at isolated pebbles of proof, while ignoring the mass of evidence: the mountain of zoo viruses that never have and never will escape. My statements are still valid.

Recall that IBM has shown that old DOS file-infecting viruses have been steadily moving toward extinction since the early 1990s. Up until IBM's 1996 paper, and ever since, that same trend has also been reflected in the WildList.

Now think about the significance of this. The indication is clear. Even if an old zoo virus did escape, the chances of it surviving in the wild, let alone spreading, have been greatly reduced by the computer environment.

Let's put this in perspective. Say you built a cabin out in the wilds. You might be wise to take precautions against wild animals indigenous to that ecotone: mountain lions, grizzlies, or wolverines. But who in their right mind would set out raptor traps?

Consider this also. What if an old zoo virus did escape? These are old DOS file viruses. They don't spread like a Melissa or an ExploreZip. They can just barely survive in today's computing environment. If one did escape and survive, antivirus companies would spot it, deal with it, report it to the WildList, and the world wouldn't even notice.

By the way, there is also another variation of the "zoo escape" fallacy I should mention. It states that home users are more likely to surf the web and are thus more likely to get a virus; they are at higher risk than business users. Thus, to protect home users a scanner must detect zoo viruses. But, while it may be true that they are more likely to get a virus, it would simply be more likely that they would get a "wild" virus, like anyone else. Probability is still in their favor.

Finally, even if the argument is redefined to be a fuzzy, non-deductive line of reasoning, and then one or more zoo viruses did "escape," it would still represent a fallacy called the hasty generalization fallacy. This fallacy is similar to selective use of evidence, and usually applies to non-deductive arguments.

The fallacy of hasty generalization consists of a generalization on the basis of an inadequate set of cases. As a sample from a larger population, the cases are too few or too unrepresentative to constitute adequate evidence. [41]

So even if several zoo viruses made astonishing escapes and miraculously survived, they would still not represent adequate evidence to support the detection of all known zoo viruses.

Out of necessity this section has been rather long. My intent has been to exhaustively analyze the "zoo escape" argument from various angles. In it we've examined evidence that has cracked and crumbled the cornerstone argument in favor of zoo testing. We've also presupposed and replied to potential refutations of that evidence.

We'll now take some time to look at a few more fallacious attacks on my solution that could conceivably crop up. As in the case of the selective use of evidence fallacy, people who have theories, but no real evidence to point to, often use fallacious attacks. Here are some examples we might hear from proponents of zoo detection.

Who Does Your Thinking?


What's the difference between logic and propaganda?

Logic teaches you how to think. Propaganda tells you what to think.

Experts in any field may commit the ex cathedra (with authority) fallacy. (Remember these guys. They think you're a moron.) This fallacy is where you, the non-expert, are told by them, the experts, "all qualified experts agree that this is true." And that's all you're told. They don't need to give you evidence. You're not qualified. You wouldn't understand.

In a similar way, non-experts will cite the opinions of others, whom they consider to be experts. This may constitute the appeal to authority fallacy. As in the ex cathedra fallacy, something is true simply because an expert says it's true, with authority.

In either case, an expert is cited as an authority and, in common usage, the terms are used interchangeably. Therein lies the fallacy. How so? Note what the late Carl Sagan wrote about being skeptical about authorities:

What skeptical thinking boils down to is the means to construct, and to understand, a reasoned argument and, especially important, to recognize a fallacious or fraudulent argument... Arguments from authority carry little weight; "authorities" have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts. [42]

For example, in 1947, the speed of sound was recorded as 741.5 miles per hour. This figure appeared in authoritative texts until 1986, when the National Research Council of Canada accidentally discovered a calculation error made in 1947. The "authoritative" speed was off by nearly half a mile per hour. The speed is now officially 741.1 miles per hour.

Science once thought Newton's Principia Mathematica [43] to be authoritative, but by 1900, discoveries in the field of physics had shown it to be inaccurate. Soon after, Einstein revised Newton's concepts and added new ideas in his 1905 paper "On the Electrodynamics of Moving Bodies." Then later, in 1931, the entire course of mathematical logic took off in a radical new direction when Kurt Gödel showed that mathematical logic is provably incomplete. [44]

Of course, many experts think they are authorities. They actually believe that their expert opinion is more than just opinion-it is truth. Similarly, many people do view experts as being authorities.

Experts express expert opinions. Such opinions usually do merit people's trust. But when the opinion affects them in a major way, they may seek out a second opinion. Why? Because it is just an opinion. So, experts who are proponents of zoo detection are not authorities, and neither am I. It would be blind credulity on anyone's part to totally accept either their opinion or mine without knowing the evidence and reasoning upon which that opinion is based.

Note. This is a bit off the subject, but it may help clarify for you why there are no real "authorities" in science (including computer science). Most people don't realize that science is actually based on philosophy. The foundation of scientific method is logic. Logic falls within the field of philosophy. Actually many scientists don't even know this and those that do don't usually talk about it-I guess they don't like admitting that science and religion have the same roots. But like it or not, logic was described and popularized by the Greek philosopher Aristotle. And even today, if you want to get a book on logic, you'll have to go to the philosophy section of the library or bookstore.

Ad Populum

There are two different fallacies called ad populum (to the people). [45] Both apply here, so we'll look at them both.

One fallacy called ad populum appeals to popular opinion as being evidence. The fact is that many developers, reviewers, media people, and users still believe the "all known viruses" myth. So you may well run into it.

"Come on. Everyone knows scanners have to detect all those viruses."

"Ask anyone with a scanner. High detection rates are good."

"Nobody believes that detecting fewer viruses is good!"

"You're stupid not to come with us. How could 42,896 lemmings be wrong?"

The other fallacy called ad populum involves an appeal to emotion, such as fear. This is the scene where you play the role of the helpless victim about to be devoured by unspeakable special effects.

It could happen to you. Old zoo viruses really are out there. How hard would it be for a disgruntled contractor in your workplace to download one and release it in your company? Do you want to chance getting a destructive virus? Do you want to chance losing all your data? Do you want to chance giving it to your customers? Of course you don't. That's why you need a scanner that will protect you from that happening.

This and similar fallacious arguments appeal to emotion rather than to relevant reasons and evidence. Your best defense here, as with most fallacies, is common sense with a dash of cool skepticism. When the laws of probability are stretched by emotional arguments, the improbable becomes quite possible. Soon possibilities evolve into probabilities and certainties.

To recognize such fallacies, simply take things in their real-world context. When asked if you'd "take the chance," think about what the probability actually is. In the case of getting a zoo virus, the "chance" you're taking is far, far less than many other chances you take daily: less than the chances you take when taking a shower, when walking down the stairs, or when eating breakfast, and a hell of a lot less than all those chances you take on your drive to work.

We all take chances every day simply because probability is in our favor. Normal people don't worry about things that very probably will never happen to them.

We can weigh the evidence by asking, "Is protecting against an improbable event worth the cost?" In the current context we would ask, "Is protecting against the improbability of an escaped zoo virus worth dealing with a bloated, sluggish antivirus product?"

So in the real-world context the question would be, "Which do I deal with on a daily basis, escaped zoo viruses or running an antivirus product?"

Should we set out raptor traps?

Ad Hominem

The ad hominem (to the man) fallacy is where attention is shifted by attacking the integrity of the person presenting the evidence instead of the evidence itself. [46]

If there is insufficient evidence to support or to refute an argument, people (including experts) attempt to shift attention away from the evidence. This often occurs when someone is in danger of being proven wrong. Sad to say, this fallacy is used all too often in the computer world.

Usually this involves a verbal attack. The attack is meant to call into question the victim's credibility. It nearly always centers on something totally irrelevant to the argument. In many cases there is an appeal, not to reason, but to attitudes-especially prejudices. The onlooker's attention is shifted from the person's line of reasoning to the person's competence, past mistakes, quirks, social standing, education, race, religion, health, mental health, acquaintances, coworkers, hair style, astrological sign, anything and everything-except, of course, the facts.

(Note. This does not mean that a person's qualifications have no bearing on evidence being presented. If my dog's veterinarian expressed his expert opinion that I (not the dog) needed to have a lobotomy, I would definitely question his qualifications.)

Now then. What horrible things might people tell you about me? Ok. Try this one.

Many experts who support zoo testing have doctorate degrees.

Wells does not have any degrees at all.

Who should you believe, Wells or real experts?

That's true. I have no degrees. I did take one extension course in C programming, but besides that I'm entirely self-taught.

What then are my qualifications? Well, my experience. Please consider my standard bio:

Joe Wells analyzed his first computer virus in 1989. At the time, he was the senior partner of Wells Research Information Services. He was developing a set of security programs in x86 assembly language. Soon after, he began working as Research Editor at a medium-sized business magazine, working mainly with statistics and writing research-based articles.

In 1991 he joined Certus International and worked as a developer on three antivirus products-CertusVS, CScan (a prototype virus filter), and Novi. (Novi 2.0 later became Norton Antivirus 3.0).

In 1992 Symantec acquired Certus, and Wells joined Peter Norton Group. There he worked on Norton Antivirus 2.5, and designed the known-virus detection, repair, and information system in Norton Antivirus 3.0.

In 1994 Wells left Symantec and began work with the IBM AntiVirus team at IBM's Thomas J. Watson Research Center. There he worked on IBM AntiVirus and the automated analysis component of IBM's Immune System for CyberSpace. Later he was the founding senior editor of antivirus online, IBM's web-based magazine.

In 1997 Wells joined the CyberSoft team (where he is currently Director of Virus Research).

Wells has worked as a consultant to the antivirus industry, working with many different antivirus companies. He has also done product testing-including comparative reviews for PC World.

Wells is best known for his WildList, which is a list of viruses verified as being in the wild. The WildList, which Wells began in 1993, has helped focus the antivirus industry on the actual virus threat. In addition, it has contributed to standardizing virus naming and product testing. Today the WildList Organization is a multinational network of reporters.

Members of the press have come to recognize the WildList Organization as an independent, unbiased source of information. The press often consult Wells for independent verification (or contradiction) of claims made by the antivirus industry. (He is often heard referring to marketing people as the "masters of deception.") He has been widely quoted by the media (e.g. Newsweek, USA Today, CNN, MSNBC, Forbes, Wired, Reuters, ZDTV, CNet, and various major newspapers). His work has been profiled on the cover of the Los Angeles Times and by ABC Evening News.

So those are my qualifications. Years of hands-on experience. Being in the antivirus industry continually since 1989. Being involved in designing, redesigning, developing and/or independent consulting on over a dozen different antivirus products and the testing of others. Creating the WildList and thereby focusing the entire antivirus industry on the actual virus threat. All that, a love of logic, a little common sense, and a dash of cool skepticism.

If I Only Had a Brain

Marketing illustrations often depict competitors in unflattering ways. They depict them as a distorted, deceptive caricature. Such a misrepresentation might make another company appear unethical, uncaring, unprofessional, or just plain stupid. Sadly, there's an all too common form of reasoning behind this.

The reasoning goes like this: "Making someone else appear inferior somehow makes you appear superior."

I call it "situation comedy logic." I don't have cable (remember?) so correct me if I'm wrong, but it appears to me that there's a pervasive form of "humor" in situation comedies. It involves being funny by verbally cutting someone else to pieces-thereby sending the message that the attacker is superior to the victim (i.e. funnier, smarter, or otherwise cooler than the victim).

I don't know the marketing term for this, but in logic it's called a straw man fallacy.

A straw man is a lot easier to knock down than a real man is. The straw man fallacy is where someone creates a distorted image of his or her opponent. If you oversimplify, twist, distort, or just plain falsify an opponent's position, it is easier to attack.

Although the represented version is a caricature (a straw man), the critic treats it as equivalent to the original. [47]

Earlier in this paper I pointed out that I had developed a new antivirus engine for CyberSoft. I also stated that the product is implementing the solution to your problem. That means that the product will not detect many of the older zoo viruses. We have the older viruses. We could generate detection for them. We have decided, however, that sacrificing performance to look good in distorted reviews is not in your best interests.

With that in mind, let's look at how CyberSoft's position might be twisted into a straw man that's easy to attack.

We detect zoo viruses. So do other serious antivirus products. Why doesn't CyberSoft detect them? I'll tell you why. Because their product is no good. They can't detect as many viruses as we can. That's the real reason they claim there's no need to detect zoo viruses. They're making the claim to cover up for an inferior product. Our product is clearly better.

Rather than reply to this fallacious attack, let me tell you about Raven.

Raven is the engine I designed for CyberSoft-the Relational Antivirus Engine (patent pending). It is based on the extraction and analysis of a collection of relational data objects. For antiviral purposes the engine serves two functions.

First, we run the engine on an organized virus collection. For each virus, multiple samples are analyzed. For each sample over 60 different data objects are extracted. The relational data set for the samples of a single virus is processed into a method of detecting that virus. That is, the virus will be detected by a specific subset of the 60 possible objects. That subset is common to all the samples of the virus. That unique combination of relational data is used on your system to detect that virus. So the extraction of relational data needed to detect viruses in a large collection is done automatically.
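The mechanics of that common-subset idea can be sketched in a few lines of code. This is purely my illustrative reconstruction, not Raven's actual implementation; the extracted objects, function names, and sample data below are invented stand-ins for the 60-plus real relational objects the engine uses.

```python
# Illustrative sketch of "relational data" detection: each sample yields
# a set of extracted feature objects, and a virus's detection record is
# the combination of features shared by every sample of that virus.
# All object names here are hypothetical examples, not Raven's real ones.

def extract_objects(sample: bytes) -> dict:
    """Extract a few toy data objects from a sample.

    A real engine would extract 60+ objects (entry-point code,
    structural relations, and so on); these stand-ins only
    demonstrate the mechanism.
    """
    return {
        "length_mod_16": len(sample) % 16,
        "first_byte": sample[0] if sample else None,
        "tail_checksum": sum(sample[-8:]) & 0xFF,
    }

def build_signature(samples: list[bytes]) -> dict:
    """Keep only the (object, value) pairs common to every sample."""
    extracted = [extract_objects(s) for s in samples]
    common = extracted[0]
    for objs in extracted[1:]:
        common = {k: v for k, v in common.items()
                  if k in objs and objs[k] == v}
    return common

def matches(signature: dict, sample: bytes) -> bool:
    """A sample is flagged when it exhibits the full common subset."""
    objs = extract_objects(sample)
    return all(objs.get(k) == v for k, v in signature.items())
```

The point of the design is that the signature is derived mechanically from the samples, so adding a new virus to the database is an automated extraction pass rather than hand-written detection code.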

About one year ago I finished the first prototype of the Raven engine. It used far fewer than the current 60 objects. I ran it against a large virus zoo. I then used the extracted data to scan the collection. This entire process took only a few minutes. Yet on its very first run, the engine could accurately detect nearly 10,000 viruses.

Since that time, the engine has become far more capable. For example, one major new feature enabled the easy analysis and detection of polymorphic viruses. In addition, more relational objects have been added. The system also has several verification methods to eliminate potential false alarms.

As you can see, the automated approach to object extraction we've implemented in Raven allows us to easily add new viruses to the product.

Now that you are familiar with Raven, I don't have to reply to the distorted straw man picture presented above. Knowing these facts about Raven, you can reply to it as easily as I can.

We could easily add tens of thousands of viruses to our product, couldn't we? If then we have the ability to add these viruses and we don't, then the reasoning presented in the straw man picture is false, isn't it?

Turning the Tables

In this paper I have presented evidence and argumentation. Based on those, I have drawn what I believe to be valid conclusions about the problem and the proposed solution. I have also refuted past and potential arguments against the solution and have pointed out example fallacies to make you aware of them.

Now I ask you to do something that sounds a bit strange.

I ask you to be skeptical about my claims. Ask yourself if I have been completely fair in my presentation. Remember, I am not an "authority" whose pronouncements are unequivocally true. Have I blundered in this paper and offered any "proof" based on fallacies or unsubstantiated claims? Even if I haven't, have I said things that might be misunderstood or misconstrued?

For example, I hope that I have not fallaciously projected a misrepresentation-a straw man-of the antivirus industry. The industry is not an evil empire (not even the marketing people). I believe that the individual antivirus companies do, more often than not, have the users' best interests at heart. I don't want you to think they are dishonest, mercenary, callous, ignorant or incompetent. That is not my intent. I simply believe that they are, unintentionally, both the cause and victims of a major problem-and that reviewers and users are victims, too.

Likewise, I hope my occasional imagery doesn't mislead you-terms like "a bloated and sluggish antivirus product" that might be construed as "loaded" words. Please be aware that most antivirus scanners are highly optimized and quite efficient. They do protect you. I simply believe that the removal of the unnecessary fat of old zoo viruses would greatly enhance them. I also vehemently believe that this crash diet for scanners is way overdue.

I also want to clarify one item because if it is taken wrong it would be a fallacy-a fallacy I haven't mentioned yet. The appeal to ignorance (ad ignorantiam) is defined this way:

The appeal to ignorance consists in arguing that because a claim has not been demonstrated to be true, then it is false. [48]

Don't suppose that I claim the "zoo escape" argument is false, simply because no zoo virus has escaped and proven it true. Rather, I claim it to be false because it is invalid and fallacious, and because probability is overwhelmingly against it. Remember that I would still claim it to be false even if it were shown that "old zoo" viruses had indeed escaped and spread.

So I ask you to be skeptical about the claims I've made. Don't take them at face value. This paper can only be successful if it gives you evidence that you can critically and fairly reason on. I believe that the solution is in your best interest, but my belief does not make it correct. Your evaluation of the information in light of your circumstances and experience is what will or will not make it correct for your needs.


Let's summarize these last two sections on arguments against the solution.

We looked at arguments used to support zoo testing. We dragged them into the light of reason and examined them. What did we see?

First we shined that light (from several angles) on the key argument cited in favor of zoo detection. That light exposed the traditional cornerstone argument for zoo detection, the "zoo escape" argument, to be false. It is specious, invalid, untrue, unsound, and a great way to illustrate several fallacies. Even if we speculate and allow for future examples of real zoo virus escapes, it is buried by the sheer mass of conflicting evidence. Even if we allow it to be redefined, restated, beefed up or watered down, it still retains its mythical nature.

Next, we took steps to minimize any future need to deal with well-meaning responses (as well as vicious attacks) from proponents of zoo detection. We beat them to the punch. On their behalf, we attacked me and my solution with some predictable arguments and several foreseeable fallacious assaults.

Then, we predicted and dealt with a typical example of marketing deception. We did this by projecting a false image of CyberSoft, much as an unethical competitor might. I then explained the Raven technology (specifically its capability to easily add viruses) to an extent that you could reply to the attack for yourself.

Finally, I asked you to keep your circumstances and experience in mind while evaluating the evidence.

Quoth the Raven

Listen. A Voice Crying Out in the WildList

Sarah Gordon has never been one to shy away from radical ideas. She's a pioneer in the field. When I mentioned before that others had preceded me, I had her in mind. In 1992, she was already pointing out the very problem we're addressing here. She was already refuting the "zoo escape" argument.

Back then, before the Internet became so popular, there was an earlier version of the argument. Instead of the Internet, it was predicted that viruses would "escape" from virus exchange BBSs. The "all known viruses" mantra was already driving the market to demand high detection rates.

But should these new viruses be addressed? Were they really a threat?

In her 1992 paper, "Circular Time-Line Model for Addressing the Impact of Virus Exchange BBS," Ms Gordon answered by stating the obvious:

With each "announcement" of a "new" virus, [antivirus] product developers need to upgrade their products to satisfy market demand. Researchers' valuable time is wasted analyzing this "junk," which is rarely, if ever, found in the wild. [49]

You go girl!


Sarah Gordon also provides arguments that support my proposed solution. In her paper "The Viability and Cost Effectiveness of an 'In the Wild' virus scanner in a Corporate Environment", she compares two fictitious virus scanners, WildScan and AllScan. That paper provides evidence (including mathematical proof) that supports the viability of a scanner that would detect only wild viruses. She concludes her comparison by stating:

"Thus, we can see that although WildScan detects less than 4% of the viruses detected by AllScan, the actual difference in terms of protection are very small indeed - over 99% of all incidents involve a virus which WildScan is capable of detecting!"

"This result has implications for those involved in the testing and certification of anti-virus software. In particular it means that tests against a large virus collection containing all known viruses actually tells the user relatively little: the most important criterion is the ability of the product to detect those viruses which are known to be in the wild." [50]

After citing her paper in "Reality Check," I posed the question:

Would you buy a virus product off the shelf that claimed to detect a mere 200 viruses? How about if those 200 were all known wild viruses? How about if all the research and development time and money went into handling those 200 extremely well? How about if updates focused exclusively on doing just as good of a job on any new wild viruses? [51]

But did my paper in 1996 have any impact? No. Not really.

This time however, my presentation has solid substantiation. Since you've reached this point you've examined that substance. You've examined and pondered the evidence both against zoo detection and for eliminating it. You've considered the potential responses to this paper and have been forewarned about potential, fallacious attacks.

I'm not presenting it this time as a conference paper. Instead, I am presenting it as a white paper that contains something else substantial-an actual, implemented solution to the problem.

Wave of the Future

As I explained earlier, over the past year I've developed the Raven engine for CyberSoft's VFind product and at the same time I've implemented Raven for Windows as Wave antivirus. The VFind and Wave Antivirus tools with Raven give substance to my argument. [52] Wave puts into practice what I've been preaching. Here is an antivirus product that is not meant to handle tens of thousands of impotent DOS zoo viruses. It is not meant to win zoo reviews in magazines. To the contrary, it was specifically designed to do what's best for you. It was designed and built from the ground up to efficiently protect you from today's actual virus problem.

Imagine that! A product that protects you, but without an unnecessary glut of useless old DOS zoo signatures. That means it's small. That means it's fast. That means it's efficient. And guess what. Updates don't take half an hour to download.

Now imagine this: What happens as more users become aware of the complete irrelevance of zoo testing? What happens as they learn of the malignant effect of zoo detection on antivirus products? What happens as they recognize "Dozens of new viruses per day!" and "Detects 50,000 viruses!" as the marketing deceptions they are?

Maybe the industry won't take note. Maybe developers and reviewers will still perpetuate the fallacy that high zoo detection rates are a good thing. But maybe Wave will gain an ever-increasing competitive edge. Maybe other products will continue to grow by tens of thousands of viruses, while Wave simply keeps pace with the growth of the actual virus threat.

After all, why should we waste time maintaining a fallacious image? We'd rather spend our time protecting you from real viruses in the real world.

At the same time, you must bear in mind that Wave's implementation of the solution is not exactly the WildScan presented in Sarah Gordon's paper. Wave does, however, have a WildList Plus option that limits DOS file-infecting viruses to those on the WildList. Using this option turns off detection for old DOS zoo viruses, but it does not affect Wave's detection of boot and macro viruses.

Wave's approach to detection is actually more like the approach mentioned in another paper Sarah Gordon co-authored with Dr. Richard Ford in 1995. In that paper they speak of "meeting the real threat" by evaluators testing a product's performance as follows.

"Product performance against the threat [should be measured] not by running and maintaining a large collection of all viruses, but by testing extensively against those viruses which are known to be in the wild, and also against the known threat." [53]

See that? This suggested approach goes beyond detecting viruses currently in the wild. It adds detection for "threat types." What this means is that new viruses which appear can be detected by recognizing a type.

WildList Plus

This is similar to the approach taken in Wave. A primary function of Wave is its ability to detect and repair changes made by unknown viruses. This, in addition to levels of generic detection for specific virus types, moves Wave away from being a WildList-only scanner.

It is a "WildList plus" scanner. [54]

So Wave is not Ms Gordon's hypothetical WildScan. By default, Wave still detects a limited number of older zoo viruses, which do constitute minor threat types. But we highly recommend that the user reset it to scan primarily for WildList viruses. We also plan, in future releases, to make WildList viruses the default and actually reduce the number of zoo viruses we currently detect.

Wave's implementation of the solution also detects other major threat types by looking for a large number of macro viruses that are not (yet) in the wild. Unlike the old zoo viruses, some of these do represent a potential threat-far more of a threat than the old zoo viruses, but nowhere near the threat of wild viruses. Still however, Wave will not score highly against a macro virus test suite containing lots of obscure or intended macro viruses.

Since we won't spend precious time on increasing zoo detection, we plan to spend more time on actual and potential threat types. Doing so we hope to continue fine tuning Wave to maintain an optimal balance between performance and probability-based threat type response.

Ok, let's be realistic. Currently, we really don't expect Wave to win any reviews, especially those that emphasize zoo detection tests. In fact, we hope it loses those. Because the more that Wave loses in zoo tests, the more you gain in performance.

Reflect for a moment. How strange would that last paragraph sound to someone unfamiliar with the evidence? They would likely still follow the popular wisdom and think a high zoo detection rate is desirable. To you, however, it sounds perfectly reasonable-reasonable because all the evidence demonstrates that a high zoo detection rate is extremely undesirable.

Will you ever again believe the lie about high zoo detection score? Quoth the Raven, "Nevermore."

Don't worry, I haven't forgotten the statement I made earlier.

This paper is not intended to sell you on VFind or Wave. [55] Rather, my intent here is to provide evidence to support two claims-that the problem exists, and that there is a solution.

Have I succeeded in my intent? What evidence has been brought to light?

Out of Darkness

What Have We Seen?

This paper proposes a radical new direction in virus scanning. It proposes change. A very real problem exists, which has a major negative impact on users. For that reason, the change is needed. Furthermore, the proposed change, this radical new direction, represents a practical, real-world solution to the problem.

The problem is that antivirus products fail to offer optimal protection and performance, because they needlessly detect thousands of zoo viruses, which are not a real-world threat. In analyzing the problem, it has been demonstrated that the problem revolves around two closely related lies.

  1. Virus scanners must detect all known viruses.
  2. The higher a product's zoo detection rate is, the better the product is.

Admittedly, the first lie has been addressed by antivirus product developers and reviewers, but only to a small degree and in a perfunctory way. Yet the second lie, which is founded upon the first, has not been addressed. (Even today, antivirus product ads, boxes, and web sites flagrantly mislead users with statements about detecting all known viruses, claims of being better based on zoo detection rates, and insinuations that users are in danger from large numbers of new viruses per day, week, or year.) This clearly demonstrates that these lies drive the antivirus industry-specifically: product marketing, product reviews, and user demand.

It therefore follows that these two lies are an integral part of the very foundation upon which current virus scanning technology has been built. From the very beginning, those lies were imperfections in the foundation of current technology-deformities and cracks-and those lies have grown over the years. They have weakened that foundation more and more.

This weakening is real. It takes on substance when products add detection for zoo viruses-specifically the older, file-infecting viruses that survive and spread only under DOS. Yet such detection is unnecessary and even highly undesirable. It has been demonstrated, by evidence and by reasoning, that these viruses do not constitute a real-world threat. In addition, we saw additional evidence, which proves that these viruses have been moving steadily toward extinction since the early 1990's. So, in the light of evidence and reason, any prophecy of doom involving escaped zoo viruses is clearly transparent-as are the motives of the would-be prophet.

Hence, the solution to the problem is to remove these unnecessary viruses from the mix. We must build upon a radical new foundation, a solid foundation-based on facts and reason instead of fallacies and marketing lies. By doing so, we can create new antivirus scanning technologies that can provide users with true, optimum protection.

Upon such a foundation, the Raven engine was conceived, designed and developed. It demonstrates that the solution can indeed be implemented. True, it is a radical change, it is a transition product, and it is thereby risky, but we did it this way for a reason-the same reason I wrote this paper.

We did it to benefit users-to benefit you.


So, how do you benefit? What does all this mean for you in practical terms?

Well, three primary benefits come to mind-knowledge, understanding, and wisdom. I don't mean in the philosophical sense-I mean in the practical sense.

Practical knowledge involves taking in information that has some useful application.

This paper has been replete with useful information. I have given you facts pertaining to a very real problem, which you have to deal with on a regular basis. You have seen what it is, where it came from, when it started, who has perpetuated it, how it has worsened, and what the solution involves. Therefore, you know all about the problem and the solution.

Practical understanding involves putting all that useful information into context, reasoning on it, and discerning what it all really means.

This paper has organized the information about the problem and the solution. It has presented it to you in the form of logical arguments. You have seen why the problem exists, why it is a problem, why it affects you, why it needs to be resolved, and why the solution is practical. Therefore, you understand the scope and nature of both the problem and the solution.

Practical wisdom involves taking appropriate action based on what you know and understand.

This paper does not provide practical wisdom. That's not my job. I've done my part. To benefit from practical wisdom requires action on your part. I gave you the information and explained it, but you're the one who has to do something about it. Therefore, you must be the one who demonstrates practical wisdom by doing something with all that knowledge and understanding.

Think about it this way. You may know you're standing on a train track. You may even understand that the onrushing train will kill you. But if you don't follow the course of practical wisdom pretty soon, you'll be splattered all over the 6 o'clock news.

Now, keeping that rather graphic motivational image in mind, we'll look at some wise actions you might consider taking.

Go ahead. Be radical.

For by doing so, you will have a direct say in how the antivirus industry should be serving you. This is how we can, and will, change the entire course of the antivirus industry. We've done it before and now we must do it again. Because, in truth, doing so is in everyone's best interest.


Peter V. Radatti

President & Founder, CyberSoft, Inc.; Unix Virus Researcher
[email protected]

In order to understand why my opinion is valid, you need to understand a little of my background. I invented the first Unix antivirus scanner, heterogeneous antivirus scanning, the first cryptographic integrity system coupled with a virus scanner, the first network based virus scanner, the first fully featured and implemented Virus Description Language, the first antivirus scanner to use pattern modeling and several other important developments in the field. I worked 13 years in the space industry, was an international technical columnist and was a guest speaker at technical conferences hundreds of times. I founded CyberSoft in 1988 with a charter to "Protect The Customer". This paper is consistent with that charter.

Basically, what we are discussing is which design constraints will rule in a product. Will protecting the customer and good engineering practices rule, or will marketing rule? The problem with the rule of marketing is that it is concerned only with the veneer. Depth is not necessary and often must be sacrificed, as an inconvenience, for appearance. The marketing war that started early in the antivirus industry, centered on the number of viruses a product could scan for, is a perfect example of the compromise between good engineering, which protects the customer, and the marketing that sells products. Every worthless virus scan code added to a product degrades its ability to protect customers. In fact, many customers viewed this in a positive light, because when the antivirus product falsely identified a file as containing a virus, the customer felt protected. As the number of scan codes for viruses a customer would never experience increased, the better the perception of the product became, and the worse it actually was. CyberSoft is taking a large risk in releasing a product (Wave Antivirus) that uses the WildList as the primary set of scan codes. It will protect the customer better than a zoo of scan codes, but customers may not understand that, and sales can suffer. As a slight compromise, we do not use a pure WildList but a mix of WildList and zoo viruses that we feel provides an optimal solution. It is important that the public understand and accept the difference between good engineering and good marketing, because the virus problem is changing. Macro-based viruses and worms like the Melissa virus, which can attack and span the globe within hours, mean that good engineering will become critical in the near future. Please choose one: good engineering or good marketing. They are about to become mutually exclusive.


  1. The WildList is a monthly list of viruses verified in the wild-that is, actually known to be spreading in the real world. The first official release came out in November of 1993. The December 1993 edition of Virus Bulletin reprinted the WildList and quite correctly described it by stating, "Rather than attempting to measure virus prevalence, the list is designed to show exactly which viruses are actually spreading." The WildList is available from
  2. The Greek text is from 2 Corinthians 4:6, according to the 27th edition, third printing of 1995, of Novum Testamentum Graece, by Barbara and Kurt Aland, et al., Gesamtherstellung Biblia-Druck, Stuttgart. The translation is literal.
  3. Wells, Joe. "Reality Check: Stalking the Wild Virus", Proceedings of the 1996 International Virus Prevention Conference. Arlington, VA. April 1-2, 1996. This conference was sponsored by the NCSA-which is now called the ICSA (International Computer Security Association). Information on the ICSA is available from
  4. This information is taken from a review of Norton AntiVirus in the January 1, 1991 issue of Virus Bulletin, pp. 25-26, which notes, "This review uses the latest version of Norton AntiVirus which has files dated as late as 12th December, 1990. The master disk displays the serial number 1.0.0, so presumably this is the first official release of the Norton AntiVirus." [Italics theirs.]
  5. The information tracing this escalation in the numbers war was taken from product boxes in my personal archive of antivirus products.
  6. Wells, Joe. "Reality Check," p. 2.
  7. Wells, Joe. "Reality Check," p.1. In the introduction of the paper I posed three questions to the audience in this form:


    Has the antivirus industry done users a major disservice?

    Is the antivirus industry trapped in a mire of its own making?

    Can the antivirus industry handle what's ahead?

    In the paper, my replies were yes, yes, and no.

  8. In logical argumentation and refutation there are three main phases: proposition, definition, and analysis. In the definition phase of such logical arguments, a reportive definition describes the way a term is actually used, while a stipulative definition is "a statement of a rule that will be followed in using the word defined," according to Conway and Munson (their book is cited below).
  9. Kephart, J. & White, S. "Measuring and Modeling Computer Virus Prevalence", IEEE Computer Society, 1993.
  10. Wells, Joe. "PC Viruses in the Wild - February 10, 1996" (the WildList). 1996.
  11. Gryaznov, Dmitry. S&S International's unpublished virus report for 1995. 1996.
  12. Chess, D. IBM's virus report for 1995, posted for use in the WildList. 1996.
  13. Wells, Joe. "Reality Check," pp. 3-4.
  14. White, Steve R., Kephart, Jeffrey O., and Chess, David M. "The Changing Ecology of Computer Viruses," Proceedings of the Sixth International Virus Bulletin Conference, Brighton, UK. September 19-20, 1996, pp. 198-9.
  15. White, et al. "The Changing Ecology of Computer Viruses," pp. 200-1.
  16. Palfrey, Megan, "Fighting Fire with Fire." Virus Bulletin, February 1994.
  17. Ford, Richard. "Certification of Anti-Virus Software: Real World Trends and Techniques," Proceedings of the Sixth International Virus Bulletin Conference, Brighton, UK. September 19-20, 1996, (Late submissions section for day one.) p. iv. Pages ii-xi cover the section titled "Four Myths of Anti-Virus Software Evaluation." This quote is from Myth 2.
  18. Ford, Richard. "Certification of Anti-Virus Software," p. iv. This quote is from Myth 3.
  19. Ford, Richard. "Certification of Anti-Virus Software," p. vi.
  20. See note 8 above.
  21. Unpublished. Private email.
  22. Wells, Joe. "Reality Check," p. 10.
  23. Wells, Joe. "Reality Check," p. 8.
  24. Orshesky, Christine, "Wacky Widgets, Wacky Costs: False Positives," Virus Bulletin, May 1996.
  25. Lambert, Mike, "Circular Extended Partitions: Round and Round with DOS," Virus Bulletin, September 1995.
  26. Conway, David A. and Munson, Ronald. The Elements of Reasoning. 2nd edition, Wadsworth Publishing, Belmont CA, 1997. ISBN 0-534-51672-6. p. 5. I would not hesitate to recommend this book to anyone interested in the application of logic in argumentation. It focuses more on understanding and applying reason in daily dealings than it does on theory.
  27. Allen, Colin and Hand, Michael, Logic Primer. The MIT Press. Cambridge, 1992, 3rd printing of 1996. ISBN 0-262-51065-0. p. 1. Unlike Conway and Munson, this book is more of a syllabus. It is quite succinct, but offers many clear definitions of terms.
  28. I stand to be corrected, but I recall hearing that, in Sir Arthur Conan Doyle's fiction, Sherlock Holmes never actually said "Elementary, my dear Watson." However, I do seem to recall reading Holmes impatiently correcting Watson with "No. Elementary." when Watson exclaimed that something was extraordinary.
  29. Damer, T. Edward, Attacking Faulty Reasoning, Wadsworth Publishing, Belmont CA, 1997. ISBN 0-534-21750-8. p. 16.
  30. Allen and Hand, p. 1.
  31. Conway and Munson, p. 34.
  32. Allen and Hand, p. 2.
  33. For more detail, see Conway and Munson, Chapter 8, "Errors in Reasoning: Fallacies."
  34. Conway and Munson, p. 133.
  35. Conway and Munson, p. 131.
  36. Deductive reasoning is also called a priori. It involves deriving a conclusion from premises that are self-evident. Non-deductive reasoning is also called inductive and a posteriori. It involves deriving a conclusion based on observations. For more detail, see Conway and Munson, Chapter 3, "Evaluating Arguments."
  37. Conway and Munson, p. 34.
  38. Conway and Munson, p. 139.
  39. I here present a single fallacy in two ways. The actual name is hasty generalization. Like any other fallacy, the selective use of evidence may or may not be committed intentionally. I here call it selective use of evidence to represent it as intentional pseudo-science, where someone goes out of his or her way to prove a pet theory. But the misrepresentation of evidence may be done innocently. It may simply be due to a lack of thorough analysis. This latter case I refer to by the correct name of hasty generalization.
  40. A classic example of proof based on the generalization of limited evidence exists in a totally unrelated field: the field of exegetical grammar as applied to first-century, colloquial Greek. In 1933, E. C. Colwell wrote an article in the Journal of Biblical Literature in which he sought to establish a rule of Greek grammar to prove a controversial Bible verse should be translated in a certain way. He based the rule on two other texts in the same Bible book with similar syntax. His work was thereafter often cited as "Colwell's rule." Forty years later, another article appeared in the 1973 Journal of Biblical Literature, by Philip Harner. Unlike Colwell, Harner examined every case of the syntactical form in that Bible book (he documents 54 examples). He showed that the vast majority of cases represent consistent usage, unlike the two cases Colwell based his rule on. Harner thereby demonstrated that the two verses cited as evidence by Colwell are exceptions to the normal usage in that Bible book. Even today, however, Colwell's rule is still cited in Greek-language and other reference works.
  41. Conway and Munson, p. 129.
  42. Sagan, Carl, The Demon Haunted World: Science as a Candle in the Dark, Ballantine Books, New York, 1996, p. 210. ISBN 0-345-40946-9. Sagan makes this statement in the chapter entitled "The Fine Art of Baloney Detection," which discusses many of the fallacies we are discussing here. On the subject of "authorities" he also states (on page 28), "One of the great commandments of science is, 'Mistrust arguments from authority.' Authorities must prove their contentions, like everybody else." Interestingly, Sagan's chapter on detecting baloney is adapted and expanded in another book that turns the baloney detector on Sagan's own book, something I believe Sagan would have happily approved of. (Phillip E. Johnson, Defeating Darwinism by Opening Minds, InterVarsity Press, 1997, pp. 37-52. ISBN 0-8308-1360-8)
  43. Newton, Isaac, Philosophiae Naturalis Principia Mathematica, Prometheus Books, NY ISBN 0-87975-980-1.
  44. Gödel, Kurt, "On Formally Undecidable Propositions of Principia Mathematica and Related Systems," 1931. In that paper Gödel supplied arguments that proved some mathematical statements in consistent systems can be true, but cannot be proven to be true. See also "Gödel and the Limits of Logic" by John W. Dawson Jr. in the June 1999 issue of Scientific American (pages 76-81). That article states that Gödel believed his incompleteness theorems "justified the role of intuition in mathematical research." Consider too, Gödel's resolution of Georg Cantor's continuum hypothesis was to prove that "it is impossible to prove the continuum hypothesis, and it is also impossible to disprove it." Daniel J. Velleman, How to Prove It: A Structured Approach. Cambridge University Press, reprint of 1996, p. 302.
  45. Conway and Munson, pp. 127 and 128.
  46. Conway and Munson, p. 136.
  47. Conway and Munson, p. 137.
  48. Conway and Munson, pp. 124-5.
  49. Gordon, Sarah. "Circular Time-Line Model for Addressing the Impact of Virus Exchange BBS,"
  50. Proceedings of the 1992 Information Systems Security Management Conference. ACM/DPMA. New York, 1992.
  51. Gordon, Sarah, "The Viability and Cost Effectiveness of an 'In the Wild' virus scanner in a Corporate Environment," 1995
  52. Wells, Joe, "Reality Check", p. 10.
  53. Although Raven has been implemented in two products, to make this easier to read, I will be referring only to the Wave product in this context. Simply bear in mind that things said about Wave also apply to VFind with Raven.
  54. Ford, Richard and Gordon, Sarah, "Real World Anti-Virus Product Reviews and Evaluation," Proceedings of the 1995 International Virus Prevention Conference. Washington, DC. April 10-11, 1995, p. 8.
  55. It is necessary for a product to detect more than just those viruses currently on the most recent WildList. This is because viruses drop off the list, but that rarely, if ever, means the virus has actually gone extinct. Often a virus is still spreading in the wild after it has fallen off the list. The reason this can occur is simple. The WildList reports viruses reported by users to antivirus companies. Users may not report viruses to antivirus companies that their product handles easily. The WildList, then, rather than being an all-inclusive list of viruses in the wild, is a list of viruses that impact users.
  56. By "WildList plus" we mean viruses on the current WildList, plus viruses that were legitimately on the WildList in the past. I say "legitimately" because some viruses have appeared on the WildList by mistake. In addition to past and current WildList viruses, other viruses representing real threat types are also detected.
  57. If you would like more information on the solution, as it's implemented in Wave, see my paper "Why Wave?" which is available from CyberSoft, Inc.