We always position access points in the same location (on a small shelf just below the suspended ceiling tiles)--one typical of a real-world installation. We take RF measurements in several spots and keep client devices in the same physical orientation, again for consistency. Depending on the nature of the project, we measure raw RF signal levels, run performance tests using Chariot or do ping tests to verify IP connectivity. When ping testing, we also distinguish each product's maximum range at its minimum data rate from its range at a specific performance threshold. When testing 802.11b products, for example, we might lock the device in at 5.5 Mbps and measure the maximum range at that data rate. For 802.11a and 802.11g products, we might choose 12 Mbps as the minimum performance threshold.
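The distinction between absolute range and usable range can be reduced to a simple pass/fail sweep over measurement points. Here's a minimal sketch; the sample readings, distances and the `max_range` helper are all hypothetical, not actual lab data:

```python
# Sketch: distinguishing maximum range (ping still works at the minimum
# data rate) from usable range (rate stays above a performance threshold).
# All sample readings below are hypothetical.

def max_range(samples, passes):
    """Return the farthest distance whose sample satisfies the criterion."""
    passing = [dist for dist, rate, ping_ok in samples if passes(rate, ping_ok)]
    return max(passing, default=None)

# (distance_ft, measured_rate_mbps, ping_succeeded) -- illustrative readings
samples = [
    (50, 54.0, True),
    (100, 24.0, True),
    (150, 12.0, True),
    (200, 6.0, True),
    (250, 1.0, True),
    (300, 0.0, False),
]

# Maximum range at the minimum data rate: anywhere ping still succeeds.
absolute = max_range(samples, lambda rate, ping_ok: ping_ok)

# Maximum range at a 12-Mbps threshold, as for 802.11a/g products.
usable = max_range(samples, lambda rate, ping_ok: ping_ok and rate >= 12.0)
```

With these sample readings, the absolute range (250 feet) comfortably exceeds the usable range at the 12-Mbps threshold (150 feet)--which is exactly why we report both numbers.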
For fixed-wireless product testing, we use calibrated attenuators to measure range and verify vendor claims about RF parameters, including power output and receiver sensitivity. We also conduct field tests to assess the relative ease of system deployment, though we've found that it's nearly impossible to ensure a consistent outdoor test environment--we get more accurate product-performance comparisons in the lab.
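Attenuator testing works because the dB of attenuation a link can absorb maps directly onto a link budget, which in turn maps onto free-space range. The following sketch shows that arithmetic; the transmitter power, antenna gains, sensitivity and frequency are illustrative figures, not values from any product we've tested:

```python
import math

# Sketch: turning an attenuator-derived link budget into an estimated
# free-space range. All numeric inputs are hypothetical.

def max_path_loss_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi, rx_sensitivity_dbm):
    """Link budget: total loss the link can absorb before the received
    signal drops below the receiver's sensitivity."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - rx_sensitivity_dbm

def free_space_range_km(path_loss_db, freq_mhz):
    """Invert the free-space path loss formula:
    FSPL(dB) = 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    return 10 ** ((path_loss_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# Example: 20-dBm transmitter, 6-dBi antennas at each end,
# -85-dBm receiver sensitivity, 2,437 MHz (802.11b channel 6).
budget = max_path_loss_db(20, 6, 6, -85)   # 117 dB of allowable loss
estimated_km = free_space_range_km(budget, 2437)
```

Real deployments fall well short of the free-space estimate once obstructions and fade margin enter the picture--another reason the attenuator numbers are a baseline, not a field prediction.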
Our original Green Bay lab mirrors a corporate campus/remote site: Half the network represents the company headquarters; the other half, a branch office or the Internet at large. We use this facility primarily to test Layer 4 to Layer 7 traffic and network edge devices. Spirent WebReflector and WebAvalanche devices can fill our Gigabit link and push out 27,000 HTTP transactions per second. Our Dell OptiPlex workstations are dual-boot, with Windows 2000 and Red Hat Linux, and serve as Chariot endpoints, IOMeter clients and RadView Software WebLoad agents for SSL and multiprotocol traffic generation.
Patch panels on each side of the lab provide access to 128 runs of Cat 5e/6 cable hidden in the ceiling, so we can easily configure devices on either side of the network without relocating them. The panels also let us dual-home our white-box test clients and direct traffic at a device from either side of the network. And though our Gigabit fiber link is almost always in use, the T1 provided by our pair of Cisco 7200 VXR routers lets us test acceleration, caching and other devices intended to enhance low-bandwidth links. We often use the T1 for bandwidth-management testing as well.
A 100-Mbps link with a Shunra Storm appliance inserted is an excellent mechanism for testing how hardware and software react to packet loss, latency and congestion.
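Why inject loss and latency at all? Because even modest impairment collapses TCP throughput far below the link rate. The Mathis approximation--throughput is roughly MSS / (RTT * sqrt(loss))--makes the point numerically. This is a back-of-the-envelope sketch with illustrative link numbers, not measurements from our lab:

```python
import math

# Sketch: the Mathis model's steady-state TCP throughput ceiling,
# BW ~ MSS / (RTT * sqrt(p)). Link parameters below are illustrative.

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Approximate steady-state TCP throughput in Mbps."""
    bw_bps = (mss_bytes * 8) / ((rtt_ms / 1000) * math.sqrt(loss_rate))
    return bw_bps / 1e6

# A clean LAN path: 1,460-byte MSS, 1-ms RTT, 0.01 percent loss.
clean = mathis_throughput_mbps(1460, 1, 0.0001)

# The same flow with 80 ms of latency and 1 percent loss injected.
impaired = mathis_throughput_mbps(1460, 80, 0.01)
```

Under these assumed numbers, the theoretical ceiling drops from well over the 100-Mbps link rate to under 2 Mbps--which is why devices that claim to mitigate loss and latency have to be tested on an impaired link, not a clean one.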
We also do extensive storage testing in the Green Bay lab, where we have a file-share network that uses network-attached storage devices as well as standard servers. We have a small switched Fibre Channel SAN with 1 TB of RAID 5 storage and SCSI tape backup here, too, to facilitate testing of SAN hardware and software.
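For a feel of what tools such as IOMeter measure, here is a bare-bones sequential-write probe. It's a sketch only--real storage testing uses calibrated tools, controlled queue depths and raw devices, not a temporary file--and every parameter value is hypothetical:

```python
import os
import tempfile
import time

# Sketch: an IOMeter-style sequential-write probe. Illustrative only;
# block size, transfer size and the temp-file target are assumptions.

def sequential_write_mbps(total_mb=64, block_kb=64):
    """Write total_mb of zeros in block_kb chunks; return throughput in MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # force the data to disk before timing stops
        elapsed = time.perf_counter() - start
        path = f.name
    os.unlink(path)
    return total_mb / elapsed

rate = sequential_write_mbps(8, 64)   # small run: 8 MB in 64-KB blocks
```

The fsync call matters: without it, the timer mostly measures the OS page cache rather than the storage device.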