
Viral Technology Past, Present, Future




Since the birth of the computer virus over a decade ago, there have been many additions to the reservoir of technologies available to incorporate into a virus. However, the effectiveness of each technology is no longer questioned: a super virus is assumed to have basically all the technologies known to man.

This should not be so, however. All technologies have drawbacks, whether in space or speed, or even in endangering the well-being of the virus itself. To help correct this, this paper hopes to cover most (if not all) current and old technologies used in viruses. The original reasons for each technology's development will be given, along with a short summary of what the technology is and the problems in deciding whether it is still useful in today's world.

I also hope to cover a few of the main technologies used in AV software, as well as provide some insight into where viral technology is heading in the future. The same goes for AV technology too, because the two are inextricably linked... as one progresses, the other progresses in a similar manner.


Let us begin with the simple technology of tunneling :) Tunneling was originally created to bypass a class of virus detection products known as behaviour blockers (the one most commonly mentioned in the old-school virus magazines was FluShot+).

Behaviour blockers work by staying resident and hooking into the major interrupt vectors such as i13 and i21. The behaviour blocker then monitors calls to these interrupts to check for suspicious sequences (e.g.: open file, move to end of file, write, move to beginning of file, write, close... this may indicate a virus infection in progress, or otherwise just a read/write open of an executable file), or for outright illegitimate operations (direct disk writes from an unauthorized program).

In today's world, however, behaviour blockers are not so common... of course there are simplistic programs which scan files for viruses as they load, but this is mainly what they are restricted to, because users hate false alarms (the only way to avoid them is to authenticate certain files... and users don't like spending time protecting their system either). Of course, glaring exceptions such as VSAFE and TBAV come to mind, but their general usage is limited. Remember, however, that VSAFE is simply the resident monitor for Central Point Anti-Virus, which is most probably a widely used product.

So let's say you do use tunneling... what are the downsides? The obvious points are that it increases the size of your virus, and may also slow it down on initial loadup. Apart from that, tunneling still doesn't work too often (depending on what form you use), usage of tunneled interrupt vectors MAY increase virus size even more, and if they are incorrect, could crash your virus. Also, if your tunneling method can be caught out (as single-step tunnelers can), then you may have CPU compatibility or AV detection problems to deal with as well.

To face the facts, however, in DOS at least, behaviour blockers are of no major threat since they are so seldom used. Also, tunneling can be a dangerous practice... for the various reasons above. So is tunneling still a useful weapon for your virus to carry? I think not.


Anti-tunneling has been around just about as long as tunneling itself, and was originally used by the AV to defend against virus tunneling attacks. Even so, it is still barely used, even less so than tunneling is in AV. Too bad for the AV, however, that anti-tunneling has been turned against them... viruses can now use anti-tunneling code to prevent AV from EVER being able to tunnel past virus stealth, and also to stop other viruses' tunnelers from interfering with their interrupt hooking mechanisms.

How many AV tunnel? Not many at all... however, an outstanding example of a clumsy tunneler is in F-PROT's VIRSTOP resident file monitor (it is a scan-on-access TSR, rather than a behaviour blocker). This program tries to use single-step tunneling to go through the interrupt chain in an effort to bypass virus stealth, and also perhaps to work out what version of DOS is being used so as to be as compatible as possible. However, if the tunnel fails, VIRSTOP will refuse to load (though there are command line options to disable this feature). The fact is, however, that if your virus uses anti-tunneling, VIRSTOP will not load into memory, and most likely the user will just ignore the error message ;)

So is anti-tunneling a good idea? The answer is yes, and no. In some cases it is very important for stopping other viruses from tunneling past your hooked interrupt vector, and very useful for stopping the very popular VIRSTOP TSR (and possibly other, less popular AVs). In other cases, though, it can backfire: something such as single-step tunneling will produce useful results on a clean system but return garbage when an anti-tunneler is present, and this discrepancy could be used against you by the AV. On balance, I would say that anti-tunneling is helpful only if your virus uses stealth mechanisms that could go awry if another virus accesses files by bypassing your interrupt hook.


Stealth has been used in many forms over the past decade, and was initially used to stop integrity checkers from noticing changes in infected files (back then, integrity checkers were more reliable and more widely used than the meagre AV scanners... and they still are, though sometimes not much good for cleaning files). There are many forms of stealth, and each has major drawbacks, which we will be discussing.

File Stealth

Simple file stealth was probably the first development among the stealth techniques, and is also the easiest to discuss. Size stealth was created back in the days when DOS was the most widely used operating system, and people often used XTree Gold or Norton Commander, or even just plain DOS itself, to regularly peruse files.

This was also the time when viruses were all over the news and people thought that by inspecting the sizes of files they could detect a virus on their system (and generally, back then, they only used to check COMMAND.COM, which is why it became almost imperative to skip infecting COMMAND.COM, a feature still present in the viruses of today).

However, size stealth did have its problems. Some programs, such as PKZIP, would incorrectly compress files while it was active... and programs such as CHKDSK would go crazy claiming errors on the hard disk if file stealth was active. This warning is possibly the most detrimental to the life of the virus, as we all know how users react to unstable systems (they blame it all on a virus... and in this case they would be right).

So, obviously, file stealth was not very compatible with programs; however, it is essential for hiding the presence of a virus if other forms of file stealth are used. Is it useful? Not on its own. It is far too much trouble to cloak a file-size increase which people will never notice, and for all the problems it creates... well, you get the picture.

Before we skip to the next form of stealth, it is worth noting that at the moment, the old file-size stealth does not even WORK under Win95... and nobody has discovered why yet :) Also, it is of limited use compared with simply not infecting tiny files (also an anti-bait technique), or using special methods to infect files without increasing file size (infecting data spaces, infecting the header, compressing the file before infecting it, etc).

Sector stealth

This was probably the first form of stealth for boot sector viruses, and probably came before file-size stealth, but oh well ;) Sector level stealth cloaks the presence of the virus in the boot sector of hard and floppy disks. It is a shame, however, that this is of limited use: if an AV program decides to read directly from the ports, you are screwed :)

However, these port accesses also issue an i76 in the process, and if you are smart, you can cloak your presence even in these port cases! Be warned, though, that this interrupt is called AFTER disk writes... you may go into an infinite loop if you are not careful!

Also, sector level stealth has other problems. It causes a host of problems in interfacing with environments such as Windows and Win95; however, both of these can be alleviated with i76 stealth and by hooking into i13 in the correct way. i13 boot stealth may not work under Linux and other 32-bit operating systems, however i76 stealth should (though it may also cause problems ;))

Is sector stealth still useful? The verdict is definitely yes!


Disinfect on read stealth

This is possibly the first type of full-file stealth. It simply works by cleaning infected files whenever they are opened... and reinfecting them on close. Or, in the case of the Frodo virus, possibly one of the first to use this technique, NOT re-infecting files, in which case the whole system can be copied onto another hard disk without any infections at all ;)

Disinfect on read stealth does not work on floppy disks, and will cause major error messages if the correct i24 hook is not in place. Also, certain network configurations do not allow write permission, so this may also cause problems and error messages. This form of stealth is also very slow, especially when floppy disks are being scanned! Finally, disinfect on read stealth makes disk space fluctuate badly, in which case AV such as Invircible may complain. This can be fixed with disk space stealth (though under systems such as Win95 and Windows, disk space is supposed to fluctuate!).

4202 and SFT stealth

Both of these forms of stealth are what is referred to as read-redirect stealth... files are not cleaned on access; instead, read attempts are masked, as well as file sizes, to show a clean file. If, however, the file is opened in read/write mode and written to, the file is cleaned, written to, and maybe re-infected at a later time. Usually, the file may also be cleaned on execution under a debugger.

The reason there are two forms of this stealth is the way in which the true size of the file is masked from the reading program. In 4202 stealth, which the majority of viruses use, access to the end of the file is restricted by the virus itself... any attempt to move beyond the end of the virtual file will be blocked by the virus. SFT stealth, however, changes the length of the file in the SFT (System File Table) structures of DOS. DOS then restricts access past the end of infected files, and the virus only needs to mask the infected bytes at the beginning.

SFT stealth causes major problems on networks, and between operating systems which are clones of DOS, because SFTs are an undocumented structure and no normal program needs to, or does, access them. Needless to say, both forms of stealth may cause problems with disk checkers as well, however this is really application independent. Some archivers will compress files in their uninfected form if read-stealth is not turned off whenever the archiver loads, a problem many viruses suffer from.

Is file stealth useful? Obviously it is, but it comes at a hefty price in compatibility. Remember, file stealth is only useful against integrity checkers, and there are many other 'clean' ways to defeat those. Although one could argue that AV scanners cannot scan inside successfully stealthed files, some people are smart enough to boot from a clean, write-protected disk before scanning (although many aren't). Also, if your virus includes other defenses such as good polymorphism, then the AV will not be able to detect your virus anyway (at least for a while).

General problems with stealth

There are a few general problems with stealth. First of all, there is the problem of viruses or anti-viruses tunneling past your stealth mechanisms, which can REALLY screw things up if you are using SFT read stealth ;) Also, an AV program can check for discrepancies between the information reported by DOS and the real information as gleaned from the FAT tables of the hard disk. Some AV, such as TBAV, read files and FAT tables directly from disk... hence making stealth useless (unless you disable this ability of TBAV by including certain... extra command line parameters).

Stealth has its uses, but is no replacement for proper virus protections such as polymorphism, metamorphism, and genetic coding.

Encryption, Polymorphism, Metamorphism

Encryption is the base protective mechanism of the virus, probably predating even stealth; it has been used by viruses almost from the very beginning (well, not really the beginning, but it was created close to it). True as it is that the AV will create small signatures for your virus from your decryptors, without encryption, other fruits of virus technology such as polymorphism would never have been developed.

Increased scanning time for even the strings of trivial viruses, problems with exact identification as the number of "unique" hand-made decryptor strings decreases, and the occasional false match of a decryptor string in commercial software, are all problems the AV must face; without encryption, the AV would not have the problem of viral glut which they have today.

Polymorphism, on the other hand, is treated as the saviour and main weapon of the virus against AV signature scanners. However, as the AV develop generic decryption (aka emulation), polymorphism enjoys far less popularity than it deserves, and it seems that polymorphism is living on borrowed time. While it is true that polymorphism creates problems even for emulation systems, in that they slow down incredibly and are confused by much garbage code which they take for real code (such as many DOS function calls), most polymorphic decryptors set off major heuristic flags in the smarter AV because of the garbage code they create. And this is rightly so: a decryptor cannot look like real code, because all the code simply obfuscates the real 6-line assembly decryptor, no matter how many register swaps and junk codes you insert.

As more and more people notice this fact, they are turning to the newest viral technology, metamorphism: basically, what polymorphism was supposed to be to begin with (even though it is still somewhat prone to emulation; but generic detection must be implemented on top of emulation, otherwise the emulation is useless, as no decryptor is present) ;) Metamorphism does away with encryption, and completely rewrites the entire code of the virus from scratch with each new generation.

This means that, for the AV, a conservative figure of 20 scan strings per metamorphic virus would be needed; and even then, some forms of metamorphism could be progressive, and will NOT ever be properly identified by scan strings!

So how would a simple metamorphic virus work? A proper one would require the virus to go step by step through its own assembler code and construct new code of equal ability from its own framework. This requires intimate opcode knowledge, and most probably a degree in polymorphism techniques :) Most likely, this opcode changing will be additive... the garbage code added in one generation (e.g.: register swapping) will be mutated again and again each generation, hence the virus grows ever bigger. This causes problems for the virus, as it must reduce its size every once in a while. One way is to keep an internally-encrypted original copy of the virus itself; after reaching a certain size, the metamorphism module works from the original virus again, and voila, a new strain of the virus is set forth unto the world.

Since examples of metamorphism are rare, and very few prototypes even exist, the forms of metamorphism are sure to grow and change quickly as more people embrace the metamorphic technology. Unfortunately, metamorphism in its current form requires a deep understanding of opcodes and of polymorphic garbage-code (register swapping) generation, something not too many people have ;) The number of people creating metamorphic generators will be much smaller than those creating polymorphic generators, due to the complexity. This complexity will also mean FEWER metamorphic engines per person; however, it is possible for a metamorphic engine to create a number of scan strings limited only by a) the creator and b) disk and memory space :)

If enough metamorphic generators were created, it is possible the AV naming and detection methodologies would have to be completely restructured towards purely generic viral detection and cleaning, as the thousands of viruses would be just too much for the regular AV scanner, and naming viruses and deciding what family they are from would be nearly impossible! No longer could viruses be named in families, as each generation of a metamorphic virus is bigger and more complex than the last, with totally different code...

Ever since the beginning, mutation of the virus has been the driving force behind technological advancement. Whether it comes in the form of polymorphism or metamorphism, each tries to vary (mutate) the virus into other unscannable forms. It is easy to predict that this course of development will continue.

So is this the saviour of the virus for all time? That is really to be seen, however surely, if it catches on, the game will be changed forever, and maybe integrity checkers will come to rule the virus world once again ;) What a blast that would be...

Anti-Debugging, Anti-Heuristics

These two techniques are still very much needed by the virus of today... anti-debugging is needed to combat the effectiveness of AV emulation, and the AV's disassembly of the virus itself. Unfortunately, anti-debugging structures can also be flagged heuristically :(

Enter anti-heuristics! Obviously, if complex generic virus identification systems were to be refined, anti-heuristics may be the only thing which keeps viruses alive. However, the same could be said for strange and uncommon infection techniques, which may be so complex as to avoid generic detection.

Easy enough to say, some forms of metamorphism do away with the need for anti-heuristic structures, as the code generated by the metamorphic engine acts as a junk generator, throwing all but the most complete AV emulation systems off the track. Anti-debugging may also be less helpful under metamorphic engines, because some forms of it require specific code sequences to do their work.

But are they useful when metamorphism is not used? The answer is both yes and no. The majority of anti-debugging tricks will be worked out by the AV, and anti-heuristics quickly become useless after successive AV updates, ESPECIALLY under emulation systems. Against scanners like DSAV and TBAV, however, anti-heuristics can be a very useful feature.

Midfile infection

Midfile infection has been around for only a few years, and comes in two forms. The first form is proper midfile infection... the virus looks into the file to be infected, traces through the code of the program (there are many methods available for this, from actual execution in single-step mode, to emulation, to code tracing, etc), and then puts down one or more entry points into the virus at various code boundaries.

The second method, dubbed the Island method by Sepultura [IR/G], works by selecting a few random portions of the file to be infected and saving the data from those areas to a data area within the virus. These small pieces of code in the file are then filled with polymorphic jump code, each island jumping to the next. The last jump leads to the beginning of the virus, which is also placed somewhere in the middle of the executable file.

When the file loads up in the island method, the junk islands jump from one to another, through various parts in memory, to the decryptor of the virus, hidden deep within the program code. Although scan strings may be possible if no polymorphism is present in the virus decryptor, this means that the AV program must scan -ALL- of each file in order to discover the virus, as generally the AV increase speed by only scanning the heads and tails of the programs.

The island method can also be improved, as was done in the One-Half virus, where each junk island contains part of the decryption code. As each junk island jumps to another, the virus is slowly decrypted in memory. This means that not only is the decryptor polymorphic, but that emulation systems slow down considerably with all the jumping from island to island repeatedly as the virus decrypts. The decryptor is also spread over a giant area, meaning that detection of the virus is nearly impossible.

Unfortunately, there are problems with both methods of infection. First of all, they both work well on COM files, but not so well on the common EXE file format. Also, they are almost impossible to implement for EXE files such as those under Win95 due to their internal structure, or at least, this seems to be the case. EXE infection isn't too hard with the island method, but is almost impossible with the various tracing methods of proper midfile infection.

Whilst the normal midfile infection method is less prone to triggering polymorphic code detectors, since it uses just the program code itself to hide the virus entrypoint, it can crash randomly on certain self-modifying files, and does not work well at all on files with internal compression.

So how good is midfile infection? Under DOS, the island method of midfile infection is almost impossible to beat, and at the very least is a major threat to the AV. The classical midfile infection technique isn't as good, but still has its advantages over normal infection if done correctly. It is a shame, however, that this method may die out with Win95.

Evolving viruses

Evolving viruses are still very rare, and therefore hard to comment on. There is, however, a new form of evolution which is sure to make a giant impact on the virus scene, and that is evolving polymorphism. For a discussion of what evolving polymorphism is and how it works, see my document titled 'Evolving Polymorphism - Why, What, How'.

Is the size and complexity increase from evolving polymorphism worth the capabilities gained from using it? The answer depends on what type of virus is being made. Surely, if your virus does not need to be in the wild for a long time... evolving polymorphism is of no use. For a long-term virus, however, one which will come back time and time again to haunt both users and the AV... evolving polymorphism is an option.

There are still problems with evolving polymorphism, and no statistics of its viability have come out yet... however the theory behind it is sound. Any circumstance that requires the usage of a giant complex polymorphic engine, may also indicate the need for evolving polymorphism.

Taking a walk on the AV side

As you can see, the AV really have their work cut out for them... but where will the AV program of the future go? Most probably the AV will realise that emulation systems are a great tool for detecting polymorphic viruses, or at least for bypassing encryption so as to apply scan string technology. However, as metamorphism comes into play, emulation systems will no longer work, as they depend on scan strings after virus decryption, something metamorphism defeats in itself.

What will probably occur is the development of a generic heuristic virus detection emulation system, somewhat like the emulation systems out there at the moment, but tracking registers properly before interrupt calls so as to really follow the progress of the program, see what it is doing, and provide better heuristics. This will be slow and inaccurate, but as computers get faster, the speed won't matter... and people may demand better virus protection at any cost.

Alternatively, scanners may be thrown completely out of the window, and the age of integrity checkers will come back. Under Win95, however, there is no way to boot clean, so complex stealth mechanisms may save the day for the virus ;) Or, behaviour blockers may come back into vogue, though that in itself is doubtful.

What may also happen is an automated system to detect and remove viruses, just as an AV developer would. This would be the ultimate virus tool, and I'm sure such a thing would be very funny if it came out. However, do not underestimate the power of 'artificial intelligence' and expert systems....

At the very least, over the next few years, if the AV do not become even more of a dying breed... the development of AV programs will be interesting to watch.


At the end of all this, are we any closer to saying whether certain virus techniques are worth using? Yes, a little. However, as always, it really depends on what each individual virus is going to be used for.

So what was the use of this paper? Simply to OPEN YOUR EYES so that you REALIZE that the ultimate virus needs not include every technology under the sun, simply because all technologies have problems associated with their use. Maybe the time and effort of coding such techniques, along with the space used in the virus, could be better spent on creating evolving polymorphism ;)

And how did we go at predicting the future scene? Pretty well, I would imagine, but the future really was laid out already ;) What will be the most interesting is what the AV do to counter new virus technologies such as evolving polymorphism and metamorphism ;)

If you liked this document, I have a four-document series on the tunneling component of viruses, which is a very interesting technical read; another on the concept of computer viruses being living creatures, which is an interesting philosophical read; and a document on evolving polymorphism, which is an all-round great read ;)
