This approach was simple and relatively easy to manage, but it didn't scale. The more complex the technology got, the clearer it became that our existing setup couldn't keep up: Each test took an inordinate amount of time, and concurrent tests often collided with one another. We had to make some major changes.
First, we committed to keeping more static gear running continually. Second, we added the infrastructure to support additional self-contained, multiple-project networks, each of which provides basic Internet connectivity, name-service support and a firewall for two or more projects while keeping each test network on its own segment. Third, we started using drive-imaging software for rapid system deployment (mostly Windows NT and Linux). Finally, we built a distribution rack that lets us attach (literally) any two devices in the lab to each other with a single patch panel.
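Of those changes, drive imaging is the one that lends itself most readily to scripting. The sketch below is not the commercial imaging software we run in the lab; it's a minimal, hypothetical Python illustration of the same idea, assuming a Linux target: restore a stored "golden" image onto a lab machine's disk with dd, then verify the copy with a checksum. The image path and target device are placeholders.

```python
#!/usr/bin/env python3
"""Illustrative sketch of scripted drive-image deployment.

Assumptions (not from the lab's actual tooling): a raw golden image
sits at IMAGE_PATH, and the machine being rebuilt exposes its disk
as TARGET_DISK. Run as root on a Linux imaging host.
"""
import hashlib
import os
import subprocess
import sys

IMAGE_PATH = "/images/linux-baseline.img"   # hypothetical golden image
TARGET_DISK = "/dev/sdb"                    # hypothetical target drive


def sha256_prefix(path: str, length: int) -> str:
    """Hash the first `length` bytes of a file or block device."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        remaining = length
        while remaining > 0:
            chunk = f.read(min(1024 * 1024, remaining))
            if not chunk:
                break
            digest.update(chunk)
            remaining -= len(chunk)
    return digest.hexdigest()


def restore_image(image: str, disk: str) -> None:
    """Copy the golden image onto the target disk block-for-block."""
    subprocess.run(
        ["dd", f"if={image}", f"of={disk}", "bs=4M", "conv=fsync"],
        check=True,
    )


if __name__ == "__main__":
    restore_image(IMAGE_PATH, TARGET_DISK)
    size = os.path.getsize(IMAGE_PATH)
    if sha256_prefix(IMAGE_PATH, size) != sha256_prefix(TARGET_DISK, size):
        sys.exit("verification failed: target does not match image")
    print("image restored and verified")
```

In practice a commercial imaging product adds multicast deployment, driver injection and per-machine customization; the point of the sketch is simply that rebuilding a test box becomes a repeatable, minutes-long step instead of a manual OS install.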
By striking a balance between static gear and dynamic device pools, we're able to reduce setup time and increase efficiency. The static gear includes Check Point Software Technologies and Cisco firewalls; Cisco routers; Cabletron, Cisco and Lucent Technologies switches; Net Optics taps and other hardware; plus Layer 7 benchmarking tools like Spirent's products. The dynamic pools are groups of switches, workstations and servers that can be repurposed as needed. Today, we have a full-scale testing environment capable of supporting at least four tests on a variety of products simultaneously.
Our goal for the coming year: continue to diversify our production gear and decrease our provisioning times.
Ron Anderson is Network Computing's lab director. Before joining the staff, he managed IT in various capacities at Syracuse University and the Veterans Administration. Write to him at [email protected].