For many data centers, the move from one bandwidth tier to the next happens primarily when the faster option costs about the same as what they already run. For example, 8Gb Fibre Channel (FC) is now almost the same cost as 4Gb FC, so it makes sense to upgrade. However, I expect the next wave of bandwidth upgrades to come at a substantially accelerated pace.
Assuming for a moment that most data centers will be primarily 8Gb FC and 10GbE over the next several years, the move to the next faster speeds--16Gb FC and 20GbE--may happen soon after that. A key difference between now and earlier bandwidth shifts is that we now have the horsepower to take advantage of the added speed. Storage is fast enough with solid state disk (SSD), servers have the processing and I/O capability to push data at these new rates, and applications are being written in a more concurrent fashion to take advantage of multiple cores and multiple I/O paths. This means that at both end points of the connection there are targets and initiators fast enough to exploit any bandwidth increase the moment it becomes available.
We can learn a lot about bandwidth utilization from the techniques used to maximize SSD performance. Getting there requires either scaling up the application by altering it to take advantage of faster processors and a wider I/O path, or scaling out by spreading the application across multiple server nodes or dramatically increasing the user count. Increasing concurrency, or scaling out, seems to be easier than scaling up. In a way this is what we are doing with server virtualization and I/O convergence: we are increasing the concurrency, the number of tasks that the server or the network connection is responsible for.
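To make the scale-out point concrete, here is a minimal sketch (not from any particular product) that compares serial reads with increasingly concurrent reads against a file on an SSD. The file path, block size, and worker counts are hypothetical placeholders; the point is simply that more outstanding I/O tasks keep a fast device or link busier.

    # Minimal sketch: serial vs. concurrent reads to illustrate why added
    # concurrency helps saturate a fast SSD or network pipe.
    # PATH, BLOCK, and COUNT are hypothetical values for illustration only.
    import time
    from concurrent.futures import ThreadPoolExecutor

    PATH = "/data/testfile"   # hypothetical test file on an SSD
    BLOCK = 1 << 20           # 1 MiB per read
    COUNT = 256               # 256 blocks, 256 MiB total

    def read_block(offset):
        # Each worker opens its own file handle and reads one block.
        with open(PATH, "rb") as f:
            f.seek(offset)
            return len(f.read(BLOCK))

    def run(workers):
        start = time.time()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            total = sum(pool.map(read_block, range(0, COUNT * BLOCK, BLOCK)))
        elapsed = time.time() - start
        print(f"{workers:2d} workers: {total / elapsed / 1e6:8.1f} MB/s")

    if __name__ == "__main__":
        for workers in (1, 4, 16):   # serial, then increasingly concurrent I/O
            run(workers)

On hardware with deep I/O queues, the 4- and 16-worker runs should post noticeably higher throughput than the serial run, which is exactly the behavior that faster pipes depend on.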
Server virtualization is a great example of packing more tasks into the same space. With the next generation of processors soon to be released, virtual machine densities may quadruple from their current levels. Even with today's processors, most server hosts are not very CPU-constrained; the limits on VMs per host come from concerns around data protection, storage management, and network management. Those limits are quickly being addressed by improved image-based backup handling and improved networking techniques. There is also a growing ability to control how this bandwidth is used through VM-level QoS capabilities such as VMware NetQueue or NPIV. Being able to provision bandwidth at the granularity of the VM is critical to taking full advantage of these faster pipes.
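As a conceptual illustration only (this is not VMware's API or any vendor's implementation), per-VM bandwidth provisioning boils down to giving each guest its own rate limit on the shared uplink. The token-bucket sketch below uses hypothetical VM names and rates to show the idea: a busy guest spends its own tokens and cannot starve its neighbors.

    # Conceptual sketch of per-VM bandwidth provisioning with a token bucket.
    # VM names and rate/burst numbers are hypothetical.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def try_send(self, nbytes):
            # Refill tokens for the elapsed time, then spend them if available.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False

    # One bucket per VM: e.g. 200 MB/s for a database guest, 50 MB/s for a test guest.
    limits = {
        "db-vm":   TokenBucket(200e6, 20e6),
        "test-vm": TokenBucket(50e6, 5e6),
    }

    def transmit(vm_name, frame):
        if limits[vm_name].try_send(len(frame)):
            pass  # hand the frame to the shared 10GbE/FC uplink
        else:
            pass  # queue or drop until this VM's bucket refills

The real mechanisms (NetQueue, NPIV, or switch-side QoS) enforce this in hardware or in the hypervisor, but the provisioning decision is the same: each VM gets a defined slice of the faster pipe.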
With topology convergence we are doing more across a common cable interconnect, and as I wrote in my last entry, that interconnect seems to be Ethernet. It carries the traditional IP messaging load, VoIP, storage traffic, potentially I/O virtualization (IOV) traffic, plus who knows what else in the near future. Not only are more tasks coming from each initiator, there are also more initiators being put on the same connection.