Comparative tests of complex enterprise products are hard to come by, which makes solution testing an invaluable step in selecting the right hardware and software for IT and networking initiatives. Most engineers agree that nothing validates a decision against performance and business imperatives more effectively than a rigorous side-by-side comparison of capabilities. Done properly, such testing makes it possible to choose with confidence, even in the absence of case studies or peers who can share real-world experience with the technologies in question.
There is, however, another side of testing that is equally familiar to veterans of our industry. When done incorrectly, vendor-commissioned testing that aims to “help” buyers can by its very nature be subjective and, in the worst cases, can be undertaken to deliver pre-determined outcomes that crown the same winner time and again.
So how can we combat this problem? We can start by demanding adherence to four of the most rudimentary testing best practices:
Conduct apples-to-apples comparisons: This should perhaps be obvious, but given the dramatic changes under way in many sectors of IT, the simple question of whether two products can be meaningfully compared is more pertinent than ever. Different companies take different approaches to the same problem, and not all of them involve comparable point products.
The cloud storage industry, which increasingly offers the ability to sync and share files and take data snapshots, is a good example. Comparing such offerings with traditional backup or collaboration technologies in no way tells the complete story. Yes, they overlap up to a point, but to call a cloud storage system a backup or collaboration solution, and to test it against those applications, is seriously flawed.
Test the full system, not just a component: As more vendors offer “platforms,” it’s more important than ever for tests to distinguish between components that operate independently and those designed from the start to function within a specific architecture. Deduplication technologies are a good example. Standalone deduplication products, well known to storage and networking experts for years, behave very differently from today’s next-generation deduplication solutions, which work in conjunction with flow selection and other filtering technologies.
Ignoring this basic distinction lets a tester deliberately outrun the capacity of the component in isolation and manufacture flawed results such as packet loss, something the filtering stage prevents in real-world deployments.
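To make the effect concrete, here is a minimal sketch. Every name and capacity figure in it is invented for illustration; it simply models how the same deduplication engine can appear to fail when driven past its rated capacity in isolation, yet hold up when tested behind the flow-filtering stage it was designed to sit behind:

```python
# Hypothetical illustration: driving a component past its rated capacity
# in isolation vs. testing it behind the filtering stage it was built for.
# All names and numbers here are invented for this sketch.

DEDUP_CAPACITY_PPS = 100_000   # packets/sec the dedup engine can absorb (assumed)
OFFERED_LOAD_PPS = 150_000     # deliberately oversubscribed test load (assumed)

def standalone_dedup_test(offered_pps: int) -> float:
    """Feed the raw load straight into the dedup engine, no filtering."""
    dropped = max(0, offered_pps - DEDUP_CAPACITY_PPS)
    return dropped / offered_pps  # fraction of packets lost

def full_system_test(offered_pps: int) -> float:
    """Model the real architecture: flow selection trims the stream to the
    subset of interest before it ever reaches the dedup stage."""
    selected_pps = int(offered_pps * 0.5)  # assume filtering keeps ~50% of flows
    dropped = max(0, selected_pps - DEDUP_CAPACITY_PPS)
    return dropped / offered_pps

print(f"standalone loss:  {standalone_dedup_test(OFFERED_LOAD_PPS):.1%}")
print(f"full-system loss: {full_system_test(OFFERED_LOAD_PPS):.1%}")
```

The specific numbers are invented; the point is that a methodology which bypasses the filtering stage measures a failure mode the deployed architecture is explicitly designed to prevent.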
Perform multi-dimensional tests: A product’s real-world applicability spans multiple dimensions. Performance claims, for example, turn out to be grossly exaggerated when they are not measured across the full spectrum of functional capabilities a customer would actually enable.
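As a sketch of what “multi-dimensional” can mean in practice, the hypothetical harness below (the feature names, penalty factors, and the measure_throughput stub are all invented) sweeps every combination of functional features rather than reporting a single best-case number with everything switched off:

```python
# Hypothetical sketch: report performance across the full feature matrix,
# not just the fastest configuration. All names and figures are invented.
from itertools import product

FEATURES = ["dedup", "compression", "encryption"]

def measure_throughput(enabled: set) -> float:
    """Stand-in for a real benchmark run; replace with actual measurement."""
    penalty = {"dedup": 0.85, "compression": 0.90, "encryption": 0.80}
    rate = 10.0  # Gbps with every feature disabled (assumed baseline)
    for feature in enabled:
        rate *= penalty[feature]
    return rate

# Sweep all 2^3 on/off combinations and print the full picture.
for combo in product([False, True], repeat=len(FEATURES)):
    enabled = {f for f, on in zip(FEATURES, combo) if on}
    label = "+".join(sorted(enabled)) or "baseline"
    print(f"{label:32s} {measure_throughput(enabled):5.2f} Gbps")
```

A vendor quoting only the baseline row would be telling the truth and still misleading the buyer, which is exactly the exaggeration multi-dimensional testing exposes.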
Demand independence and transparency: Tests commissioned by vendors of any kind should be thoroughly vetted by those with the expertise to determine whether the process followed was technically appropriate and generally unbiased. A test is only as good as the methodology on which it is based.
Most importantly, remember that there’s no replacement for the experience of others. Users, whether they are in beta or commercial environments, and references who can provide real insight into actual use cases are, and will continue to be, the best sources of information on how any technology performs in practice. In a world where the stakes of making the right choice are high, it’s never been more important, or more effective, to seek out people in the know.