Link Trouble: Watching the Detectives Isn't Always Pretty

Carl Weinschenk

Here's a report in ZDNet that could ruin many CSOs' evenings: The firm n.runs AG recently ran tests that discovered about 800 vulnerabilities inside antivirus products.


The significance of this can't be overstated. Vulnerabilities were found in every virus scanner on the market. This isn't just finding out that a piece of software is vulnerable. It's finding that the very products that companies rely on for protection -- and that are embedded in the deepest and most vulnerable recesses of the company -- are themselves insecure. Indeed, it's the ultimate insider threat.


It seems that all is not well in the security software world. This week, AVG said it is changing a component of its antivirus software called LinkScanner. The update is included in the firm's Anti-Virus Free Edition 8.0 and as a patch for paid versions. The problem is that an element of LinkScanner called Search-Shield visits sites repeatedly, consuming bandwidth that site owners pay for. The visits also skew Web analytics programs, which end up reporting inflated visitor counts.


Yet another issue surfaced recently. Channel Register reports that Trend Micro is withdrawing its software from Virus Bulletin 100 tests. The tests, the story says, assess how well antivirus packages work by gauging how reliably they detect viruses on the WildList, a listing of viruses in circulation. Certification depends on detecting the viruses without generating false positives. Trend Micro, however, says the tests are antiquated because they don't account for newer behavior-based antivirus techniques and don't react quickly to changing conditions. The second half of the story offers an interesting give-and-take between Trend Micro and Virus Bulletin.


n.runs AG's research, AVG's tweak and Trend Micro's decision all focus on issues with legitimate security software. This post at Bill Mullins' Weblog - Tech Thoughts deals with another issue: phony security software. Mullins offers a well-written and fairly frightening explanation of this class of malware. A free version of the software gets onto the machine via the Zlob Trojan, browser exploits or downloads from criminal or adult sites. The software runs a fake scan, reports the presence of malware, and asks the user whether he or she wants to download the full version of MalwareProtector 2008 to handle the problem.


That download doesn't delete the false warnings and, in addition, unleashes a torrent of desktop shortcuts, icons and other elements, all of which gum up the computer. Rejecting the download option launches a screen saver with the image of cockroaches eating the screen.


It is impossible, of course, to say definitively whether the noise in this sector is a sign that the category is under siege. There is some good news, however: Brian Krebs at The Washington Post describes what he calls a complementary approach to antivirus protection, introduced by a company called Bit9.


Instead of identifying the malicious programs, the software, which was developed under a grant from the National Institute of Standards and Technology (NIST), compares programs it finds against 28 antivirus engines and puts the "clean" ones on an approved list. Krebs says that if the code being inspected comes up on one engine as dangerous, the user is warned. If it comes up on two or more, he or she must manually override a setting that keeps it from running.
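To make the arithmetic of that policy concrete, here is a minimal sketch of the decision logic in Python. The class and function names are illustrative assumptions on my part, not Bit9's actual API:

```python
# A minimal sketch of the decision logic Krebs describes; the names
# here (Engine, flags_as_malware, classify) are illustrative, not Bit9's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Engine:
    name: str
    flags_as_malware: Callable[[str], bool]  # True if this engine flags the file

def classify(path: str, engines: list[Engine]) -> str:
    """No detections -> approved list; one detection -> warn the user;
    two or more -> blocked unless the user manually overrides."""
    hits = sum(1 for e in engines if e.flags_as_malware(path))
    if hits == 0:
        return "approved"   # goes on the "clean" list
    if hits == 1:
        return "warn"       # user is warned but may proceed
    return "block"          # manual override required to run

# Toy usage: one of two engines flags the file, so the verdict is "warn".
engines = [Engine("A", lambda p: False), Engine("B", lambda p: True)]
print(classify("installer.exe", engines))  # -> warn
```

The notable design choice is the inversion of the usual antivirus default: unknown-but-clean code is explicitly approved rather than implicitly allowed.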

Comments
Jul 17, 2008 6:32 AM Mario Vuksan says:
The most important thing about whitelisting (at least as Bit9 defines it) is that the user (or the IT administrator) defines what is whitelisted and what is not. That decision can be informed by multiple anti-malware scanner results and by a host of other security parameters. Or it doesn't have to be, since many security applications will flag the popular remote-access package VNC as potentially unwanted or outright malicious.
Jul 22, 2008 6:11 AM Paul Zimski says:
I've noticed that the prevailing interpretation appears to be that whitelisting is the exact inverse of blacklisting, or some magic list of all known good things that the world can subscribe to and be automatically protected by in perpetuity. While this is a lofty goal and is part of the essence of the whitelisting movement, the reality is that whitelisting solutions can be deployed in different modes and can extend trust to applications based on several criteria: whether they are known, where they came from, who or what is trying to install them and who created them. The end result is a selective trust architecture that can be throttled up or down to validate what should be introduced into a system.

One end of the spectrum is a very binary view of the world: if I don't know what this thing is, I'm not going to let it run (pure whitelisting). The other extreme is that I'm going to let anything run unless I have a reason not to (the old AV model). The middle ground is: I'm going to optimize my security by trusting what I know, blocking what I don't like and making reasonable attempts at validating the unknown.

Mature whitelisting solutions typically have this concept of a trust engine: a rule-based set of criteria that automates what new or previously unknown executable content can be introduced into the computing environment. Trust is extended to content that meets the rule requirements, which is automatically allowed to run but flagged for analysis. Examples of this dynamic trust would include: allowing a process to introduce change, such as an auto-updating anti-virus engine or a patch-management agent; trusting publishers of digitally signed code; trusting change within a specified local directory; or trusting admin users to install valid software.

The promise of whitelisting's extended trust is increased system integrity and stability, lower IT support burden and increased end-user productivity. Anti-malware and threat protection are also hugely increased as a by-product of whitelisting, and this is what the security community generally focuses on. It's interesting to note that whitelisting is an operational approach to threat protection rather than what most would consider a security approach (you're not really looking for bad stuff, but you block an awful lot of it based on operational due diligence).

It's this operational characteristic of whitelisting that makes it a perfect candidate to integrate into vulnerability and change management platforms. Why? Because these are the systems that understand what applications, patches, etc. are being deployed and used in the environment. The metadata available to the whitelist library is more valuable in this approach as well: admins can be made aware of vulnerabilities that are being introduced when trusting a particular application, and of what configurations and patches should subsequently be applied.
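Zimski's "trust engine" idea reduces to a handful of rules evaluated against each new executable. A rough sketch in Python follows; every rule, name and field below is an illustrative assumption, not any vendor's actual implementation:

```python
# A rough sketch of the rule-based "trust engine" Zimski describes.
# All rule names, fields and values here are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Executable:
    path: str
    signer: Optional[str]      # publisher of the digital signature, if any
    introduced_by: str         # process or user installing it
    known: bool                # already on the whitelist?

TRUSTED_PUBLISHERS = {"Example Corp"}                     # digitally signed code
TRUSTED_UPDATERS = {"av_updater.exe", "patch_agent.exe"}  # auto-updating AV, patch agent
TRUSTED_DIRS = ("/opt/approved/",)                        # trusted local directory
ADMIN_USERS = {"it_admin"}                                # admins installing valid software

def may_run(exe: Executable) -> bool:
    """Extend trust if any rule matches; new content that passes is
    allowed to run but would be flagged for later analysis."""
    if exe.known:
        return True  # pure whitelisting: already known good
    return any((
        exe.signer in TRUSTED_PUBLISHERS,        # trusted publisher signed it
        exe.introduced_by in TRUSTED_UPDATERS,   # change from a trusted process
        exe.introduced_by in ADMIN_USERS,        # installed by a trusted admin
        exe.path.startswith(TRUSTED_DIRS),       # lives in a trusted directory
    ))

# Toy usage: an unsigned file introduced by the patch agent is trusted to run.
exe = Executable("/tmp/update.bin", None, "patch_agent.exe", known=False)
print(may_run(exe))  # -> True
```

Throttling the trust "up or down," in Zimski's terms, amounts to adding or removing rules from that set: an empty rule set is pure whitelisting, a permissive one approaches the old AV model.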
