It follows that if you want to hook all these together at the highest speeds possible, you want little more than aggregated, clean throughput: none of the fancy packet processing that gets built into router ports, just straight-ahead forwarding. 10GigE switches, located at the boundaries of Grid nodes, will be the on-ramps to the connective tissue of the Grid, the optical transmission networks. In the metro, optical Ethernet solutions can provide low-cost transport over dark fiber, via CWDM or DWDM, whichever makes the most economic sense for the provider.
Further down the road, Grid service providers will be asking for more than just forwarding performance from their switches. Already, a great deal of work is underway to support security in distributed computing systems. Switches may have to incorporate higher-layer functions such as security and QoS to remain relevant to the increasing service demands of these new service providers.
Which brings us to number two, the reemergence of the ASP, or perhaps the GSP, for Grid Service Provider. These GSPs will be quite distinct from our telcos, which may find themselves reduced to little more than pipe suppliers, while the GSPs focus entirely on providing a robust Grid that customers can access for any number of computation- and storage-intensive applications.
According to Force10's Mullaney, online gaming is an early application of grid computing. "The infrastructure demands are huge to be able to support a scalable gaming platform. That's perfect for folks like IBM to host people like Butterfly.net. It lets them focus on the gaming applications while IBM focuses on the infrastructure. It also lets Butterfly.net become profitable because they don't need to worry about how many servers to buy. IBM builds a grid infrastructure to support them and dynamically adds more compute resources as needed. In this way, Butterfly.net does not have to go buy and install the maximum number of servers to handle peak loads."
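What Mullaney is describing is, in essence, elastic provisioning: the hosting provider grows and shrinks the server pool with player demand, so the game company never has to buy for peak load. The sketch below is a minimal, illustrative model of that idea only; the GridPool class, the rebalance method, and the per-server player capacity are all hypothetical and do not reflect any actual IBM or Butterfly.net interface.

```python
# Toy threshold-based provisioner illustrating "add compute resources as needed."
# All names and numbers here are assumptions for illustration, not a real API.

from dataclasses import dataclass


@dataclass
class GridPool:
    """Tracks how many servers a hosted application currently holds."""
    active_servers: int
    players_per_server: int = 500  # assumed capacity of one node

    def rebalance(self, concurrent_players: int, headroom: float = 0.2) -> int:
        """Grow or shrink the pool so capacity stays ~20% above current load.

        Returns the change in server count (positive means servers were added).
        """
        # Target capacity includes headroom so a spike doesn't outrun provisioning.
        target = int(concurrent_players * (1 + headroom))
        needed = max(1, -(-target // self.players_per_server))  # ceiling division
        delta = needed - self.active_servers
        self.active_servers = needed
        return delta


if __name__ == "__main__":
    pool = GridPool(active_servers=4)
    for load in (1500, 4200, 9000, 3000):  # simulated evening peak and fall-off
        change = pool.rebalance(load)
        print(f"load={load:>5}  servers={pool.active_servers:>3}  change={change:+d}")
```

The design point is the same one Mullaney makes: capacity follows demand in both directions, so the application owner pays for what the grid is actually serving rather than for a fixed worst-case footprint.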
Abbas of Grid Technology Partners sees things developing this way: "Grid Computing will gain a foothold in the product life-cycle/R&D side of firms first. These groups already have defined computational requirements, as evidenced by their existing investments in high-performance computing and clusters. For example: electronic design automation applications at semiconductor companies; computational fluid dynamics applications in the automotive and aerospace industries; bioinformatics and proteomics applications at big and small pharmaceutical and biotechnology firms. Most of these applications have already been parallelized for deployment on clusters and high-performance computers. This phase has started already.