Data management & reporting: Over the years, we've tested enterprise-class firewalls, intrusion-detection systems, SIM suites and other high-level security systems. From those tests and our experience in the field, we've learned that reporting is both important to security professionals and often overlooked by vendors. IDSs, VA scanners and log aggregators maintain a great deal of data, but that data is worthless unless it can be used by the individuals it's supposed to help. Because a typical scan can return thousands of findings--all of which require analysis by security professionals--we placed a heavy emphasis on reporting capabilities.
We rated each product on its report content, ability to sort and cross-reference, and ability to export results to a transportable or shared medium. We also tested each application for its ability to report changes from previous scans.
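The last criterion--reporting changes from previous scans--amounts to a set comparison between two runs. A minimal sketch of that idea, assuming each scan is recorded as a set of (host, vulnerability ID) findings (the function name and the CVE values below are illustrative, not any product's actual API or output):

```python
# Hypothetical sketch: flag what changed between two scans, where each scan
# is modeled as a set of (host, vulnerability_id) findings.
def diff_scans(previous, current):
    """Return findings that are new, fixed or unchanged since the last scan."""
    return {
        "new": current - previous,        # appeared since the previous scan
        "fixed": previous - current,      # no longer reported
        "unchanged": current & previous,  # present in both scans
    }

prev = {("10.0.0.5", "CVE-2002-0649"), ("10.0.0.7", "CVE-2001-0500")}
curr = {("10.0.0.5", "CVE-2002-0649"), ("10.0.0.9", "CVE-2003-0109")}

report = diff_scans(prev, curr)
```

A product that surfaces this kind of delta view saves an analyst from re-reading thousands of findings after every scan.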
Coverage: Because a vulnerability scanner is only as good as its ability to discover vulnerabilities, we rated each product's skill in accurately identifying system and application vulnerabilities on various OSs and platforms. We reviewed results from each product for accurate OS identification, improper identification of nonexistent vulnerabilities (false positives) and failure to identify known vulnerabilities (false negatives).
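Our accuracy tally boils down to comparing what a scanner reported against the vulnerabilities we knew existed on each target. A hedged sketch of that scoring, with made-up finding names standing in for real results:

```python
# Illustrative sketch of how scan accuracy can be scored: compare a scanner's
# reported findings against the ground-truth vulnerabilities on a test host.
# The vulnerability names below are placeholders, not real test data.
def score_scan(reported, ground_truth):
    true_positives = reported & ground_truth
    false_positives = reported - ground_truth   # flagged, but nonexistent
    false_negatives = ground_truth - reported   # known, but missed
    return len(true_positives), len(false_positives), len(false_negatives)

reported = {"vuln-a", "vuln-b", "vuln-x"}
ground_truth = {"vuln-a", "vuln-b", "vuln-c"}

tp, fp, fn = score_scan(reported, ground_truth)
```

In this toy case the scanner gets two hits, one false positive ("vuln-x") and one false negative ("vuln-c").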
Performance and scalability: The performance of a vulnerability scanner often tips the scales on whether it will be a help or a hindrance. A scanner that reports a vulnerability after it has been exploited is pointless, as is a scanner that effectively launches a DoS (denial-of-service) attack against the servers it's testing because it isn't tuned to scale down its assessment.
We reviewed each VA product for its ability to fine-tune its assessment settings: Can the product's thread count and packet intervals be adjusted? We found tremendous discrepancy here: several scanners scanned by default at an average rate of about 50 Kbps, while others thrashed about at 3.5 Mbps. Although even the higher rate won't consume an inordinate amount of an enterprise's network bandwidth, the difference helped us understand why several scanners took hours to complete our tests while others finished in minutes. We think it also helps explain why, during simple tests such as Web crawling, some scanners crashed targets more frequently than others. When comparing apples-to-apples vulnerability scans, the products used about the same amount of total bandwidth; some were just tuned, by default, to use it more quickly.
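The hours-versus-minutes gap follows directly from the rates we observed. A back-of-the-envelope calculation, assuming (purely for illustration--we did not measure this figure) that a comparable scan exchanges about 100 MB of traffic either way:

```python
# Rough arithmetic behind the hours-vs-minutes gap: the same total scan
# traffic at two very different default rates. The 100 MB total is an
# illustrative assumption, not a measured value from our tests.
def scan_duration_seconds(total_megabytes, rate_kbps):
    total_kilobits = total_megabytes * 8 * 1000  # MB -> kilobits (decimal)
    return total_kilobits / rate_kbps

slow = scan_duration_seconds(100, 50)    # 50 Kbps -> 16,000 s, about 4.4 hours
fast = scan_duration_seconds(100, 3500)  # 3.5 Mbps -> about 229 s, under 4 minutes
```

Same bandwidth consumed in total, a roughly 70-fold difference in wall-clock time--which is exactly the spread we saw between the slowest and fastest default configurations.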