Regardless of your initial design criteria, you'll probably end up rebuilding the system at least once. Testing--which we tackle in the next section--will almost certainly reveal flaws in your specifications, and deployment will uncover weaknesses in your testing methodology. So be prepared to adjust your design, and build your secondary systems for the unexpected, with items like graphics-free Web pages for spikes in traffic and resource demand. If a Web page served over SSL (Secure Sockets Layer) carries heavy graphic files that each require a new connection, performance can suffer miserably. Instead of forcing users to turn off image loading in their browsers to get around this kludge, build alternate pages without GIF images. That way, you can support more users during peak usage times.
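As a rough sketch of that fallback (the page templates, threshold and session count here are hypothetical, not part of any particular server product), a handler might switch to the text-only page once concurrent sessions climb past a set level:

```python
# Sketch: serve a text-only page when concurrent sessions exceed a threshold.
# The templates and the cutover value are illustrative assumptions.

FULL_TEMPLATE = "<html><body><img src='logo.gif'>...full-graphics page...</body></html>"
LITE_TEMPLATE = "<html><body>...text-only page, no GIF images...</body></html>"

PEAK_SESSION_THRESHOLD = 200  # hypothetical cutover point


def render_page(active_sessions: int) -> str:
    """Return the lightweight page during peak load, the full page otherwise."""
    if active_sessions > PEAK_SESSION_THRESHOLD:
        return LITE_TEMPLATE
    return FULL_TEMPLATE


if __name__ == "__main__":
    print(render_page(active_sessions=50)[:45])   # off-peak: full page
    print(render_page(active_sessions=500)[:45])  # peak: graphics-free page
```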
Ironically, testing is the most error-prone part of the performance-planning process. Each component is analyzed for utilization, and the entire system is stress-tested. Trouble is, you have to test against your assumptions and biases, which are likely to be at least partially wrong. To catch these kinds of errors, make sure each of the discrete and holistic tests represents the actual usage patterns you expect. You should also test separately for higher loads caused by long-term growth, marketing promotions or seasonal demands. This will ensure you're prepared for those projected changes, and that preparation may even suggest alternative buildout scenarios. In some cases, for instance, short-term, off-site support systems may be enough to absorb a growth spike.
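One way to keep those extra scenarios explicit is to parameterize the load targets up front. The baseline figure and multipliers below are illustrative assumptions, not numbers from any real analysis; substitute your own growth projections, marketing calendar and seasonal history:

```python
# Sketch: derive target request rates for each load scenario from a measured
# baseline. All figures here are illustrative assumptions.

BASELINE_REQUESTS_PER_SEC = 40  # hypothetical figure from routine-usage tests

SCENARIOS = {
    "routine": 1.0,
    "long_term_growth": 1.5,
    "marketing_promotion": 3.0,
    "seasonal_peak": 5.0,
}


def target_rates(baseline: float) -> dict[str, float]:
    """Compute the request rate each stress test should sustain."""
    return {name: baseline * factor for name, factor in SCENARIOS.items()}


if __name__ == "__main__":
    for scenario, rate in target_rates(BASELINE_REQUESTS_PER_SEC).items():
        print(f"{scenario:>20}: {rate:6.1f} req/sec")
```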
For the routine usage tests, follow the behavioral patterns you pinpointed in your performance planning analysis. If an application exhibits a flurry of login activity followed by a leisurely pace of queries, mimic that in your tests. That real traffic pattern is more likely to expose the problems you'll encounter than artificially staged bursts of frequent, short-lived sessions.
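A load script can reproduce that login-burst-then-trickle shape directly. The sketch below uses a stub in place of real network calls so it runs anywhere; the timing constants and session shape are illustrative assumptions, and you would run many of these sessions concurrently so the logins cluster at the start of the test:

```python
import random
import time
from typing import Callable

# Sketch of a session driver that mimics the pattern described above:
# a quick login up front, then a leisurely trickle of queries.


def stub_request(action: str) -> None:
    """Placeholder for a real network call; sleeps to imitate round-trip time."""
    time.sleep(random.uniform(0.05, 0.2))
    print(f"{time.strftime('%H:%M:%S')}  {action}")


def run_session(send: Callable[[str], None], queries: int = 5) -> None:
    send("login")                             # logins arrive back to back
    for i in range(queries):
        time.sleep(random.uniform(2, 10))     # leisurely pause between queries
        send(f"query {i + 1}")
    send("logout")


if __name__ == "__main__":
    run_session(stub_request, queries=3)
```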
Conduct your tests from both ends of the connection simultaneously so you can get a full picture of problems in your design. Testing must be performed from a user's location, using his or her equipment and network connections. If you want to roll out a system that uses handheld devices on a cellular network, test performance using the same handhelds and network rather than relying on a PC-based simulator attached to the server's local Ethernet LAN segment.
You should also monitor the performance of the server and its local network segment during these same tests, though; this will help you pinpoint the source of any performance problems. The handheld devices may be doing too much query preprocessing, or perhaps the cellular network is dropping too many packets. Or maybe the server's back-end database is causing trouble. The point is that you can better identify these problems with holistic testing practices that mirror real-world usage as much as possible.
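A small sampler running on the server alongside the client-side tests supplies that other half of the picture. This sketch assumes the third-party psutil package is installed; the sampling interval and sample count are arbitrary choices:

```python
import time

import psutil  # third-party package; assumed to be installed

# Sketch: sample server-side utilization while the client-side tests run,
# so slowdowns can be traced to CPU, memory or the network segment.


def sample_server(interval: float = 5.0, samples: int = 60) -> None:
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)   # blocks for one interval
        mem = psutil.virtual_memory().percent
        net = psutil.net_io_counters()
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  "
              f"bytes_sent={net.bytes_sent}  bytes_recv={net.bytes_recv}")


if __name__ == "__main__":
    sample_server(interval=5.0, samples=12)  # roughly one minute of data
```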
Run your tests for relatively long periods before taking any initial measurements--at least a few hours for a simple application or several weeks for a complex database. And don't introduce anomalies or increased volume until the simple stuff in the initial tests is working. Test static Web page fetches before CGI scripts, for instance, and test open connections before searches in an e-mail server. Once your tests are running smoothly, add these extra elements and simultaneously ramp up the volume. Then you'll be running a fully loaded test bed that represents all the diverse scenarios you predicted in your initial analysis. Adding layers to your tests makes isolating problems simpler: If your static Web pages performed smoothly but a new layer of CGI database-search tests shows sudden delays, you know where the problem lies.
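As a rough sketch of that layering, each workload can be registered as a separate stage and added only after the stages before it run cleanly, with per-stage timings so a new layer's delays stand out. The workload functions and delays below are stand-ins, not anyone's actual tests:

```python
import time
from typing import Callable

# Sketch of a layered test harness: layers are added one at a time, and
# per-layer timings make a newly introduced slowdown easy to spot.
# Replace the stub workloads with real static-page fetches, CGI searches, etc.


def static_page_fetch() -> None:
    time.sleep(0.02)        # stand-in for fetching a static Web page


def cgi_database_search() -> None:
    time.sleep(0.15)        # stand-in for a CGI script hitting the database


LAYERS: list[tuple[str, Callable[[], None]]] = [
    ("static pages", static_page_fetch),
    ("CGI database searches", cgi_database_search),
]


def run_layers(iterations: int = 20) -> None:
    active: list[tuple[str, Callable[[], None]]] = []
    for name, workload in LAYERS:
        active.append((name, workload))          # ramp up: add one more layer
        for layer_name, layer_fn in active:
            start = time.perf_counter()
            for _ in range(iterations):
                layer_fn()
            elapsed = time.perf_counter() - start
            print(f"with {len(active)} layer(s): {layer_name:<24} "
                  f"{elapsed / iterations * 1000:6.1f} ms/request")


if __name__ == "__main__":
    run_layers()
```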