Similarly, distributing a system geographically can be less expensive than trying to build a massive system at a central location. Groupware applications, for instance, are usually cheaper to operate when they're distributed geographically because traffic stays contained within a region. Scheduling in groupware typically occurs within a department or workgroup, so there's no need to haul all that traffic to a central server. Distributing these systems also lets you offer bandwidth-sensitive features, such as remote e-mail folders, because the traffic rides the local network instead of a constrained WAN link. This architecture isn't for every application, though. Web messaging environments, in contrast, work best with centralized servers.
With an overall distributed architecture, it's best to sign up with multiple WAN service providers. If your system will be accessed by the general public, for instance, you should buy connectivity from multiple providers to ensure you're creating the shortest and cleanest path to the largest number of end users. This tactic also limits your exposure to ISP outages because you won't have all your users in one basket--as long as you build in redundancy, that is.
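If you do multihome, you also need a way to tell when one provider's path has gone bad so traffic can shift to the others. The Python sketch below is only a minimal illustration of that idea: it probes one address per provider and flags links that should stop carrying traffic. The provider names and probe addresses are hypothetical placeholders, and in practice this signal usually comes from BGP or from your load balancer's health checks rather than a standalone script.

    # Minimal per-provider reachability check. Addresses below are
    # documentation-range stand-ins, not real provider gateways.
    import socket

    PROVIDER_PROBES = {
        "isp-a": ("192.0.2.1", 80),
        "isp-b": ("198.51.100.1", 80),
    }

    def reachable(addr, timeout=2.0):
        """Return True if a TCP connection to addr succeeds within timeout."""
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    for name, addr in PROVIDER_PROBES.items():
        status = "up" if reachable(addr) else "down -- shift traffic away"
        print(f"{name}: {status}")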
Keep your service providers and their partners informed about the changing demands of your system. Remember that they have a supply chain of their own: If you need an additional circuit, for instance, your ISP may have to go through the phone company, which in turn may need to upgrade some infrastructure equipment, and so on. Maintaining close ties with your service provider will keep you from scrambling for additional resources when your system's usage spikes.
Change management is another key element in the buildout phase. Make sure all the related components of your system are running the same software versions and configuration settings and that you can upgrade them in sync. Testing might reveal some software version discrepancies, but it's easier to take care of these details from the beginning using change-management and replication tools.
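As a rough illustration of what such a check can catch, the Python sketch below compares the software version and configuration fingerprint reported by each node against the majority and flags the outliers. The host names, field names and the way the inventory data is collected are all assumptions made for the example, not features of any particular change-management tool.

    # Sketch of a version/configuration consistency check.
    # Assumes you can gather each node's version string and a hash of its
    # configuration by some means not shown here.
    from collections import Counter

    def find_discrepancies(inventory):
        """Print hosts whose version or config hash differs from the majority."""
        for field in ("app_version", "config_hash"):
            values = Counter(node[field] for node in inventory.values())
            baseline, _ = values.most_common(1)[0]
            for host, node in inventory.items():
                if node[field] != baseline:
                    print(f"{host}: {field} is {node[field]}, expected {baseline}")

    if __name__ == "__main__":
        find_discrepancies({
            "msg-east-01": {"app_version": "5.5.2", "config_hash": "a91c"},
            "msg-west-01": {"app_version": "5.5.2", "config_hash": "a91c"},
            "msg-eu-01":   {"app_version": "5.5.1", "config_hash": "b203"},  # out of sync
        })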
Keep in mind, too, that latency is cumulative: Too much segmentation increases latency across the overall system. Say your system is split into 10 components, each requiring 500 milliseconds to set up, process and tear down connections. That's five seconds of overall latency. You can reduce that latency significantly with a centralized or less distributed architecture, but at the expense of scalability and, in some cases, efficiency.
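That arithmetic is easy to model. The short Python sketch below uses the 500-millisecond figure from the example above with illustrative component counts to show how per-component connection overhead adds up in series.

    # Back-of-the-envelope model of cumulative connection latency.
    # The 500 ms per-component overhead comes from the example above;
    # the component counts are illustrative, not measurements.
    PER_COMPONENT_MS = 500

    def total_latency_ms(components):
        # Each component adds its full setup/process/teardown time in series.
        return components * PER_COMPONENT_MS

    print(total_latency_ms(10))  # 5000 ms -- the five seconds cited above
    print(total_latency_ms(3))   # 1500 ms with a less segmented design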