Hyperconverged systems are challenging the status quo in the data center. All-in-one solutions from companies like Nutanix, SimpliVity, and Scale Computing are credible alternatives to traditional architecture that requires separate purchases for compute power and data storage. Beyond ease of use, the message is clear to potential customers: hyperconverged systems allow for scale-out simplicity as you grow.
But a critical piece is missing from these systems, which may end up stopping the hyperconvergence movement before it goes any further: integrated networking.
Hyperconverged products concentrate on optimizing the storage controller alongside the CPU. Because the vendor owns the data path, it controls how data is written to disk, accessed, and moved through the system, and the architecture can be built around a simple growth model: when customers need more capacity or performance, they buy more units. Traditional architectures that keep storage and compute separate can't integrate as tightly, because their components were never designed to work together from the start. Hyperconverged systems, by contrast, can be sold in consumable unit sizes that are easy to digest and expand as a business grows or as more of the traditional infrastructure is replaced by these newer systems.
The weak point in this new hyperconverged world comes at the interconnect level. Hyperconvergence vendors assume that storage and compute are their playground and that more nodes will be sold to satisfy future requirements. However, to interconnect those nodes, they must rely on existing network infrastructure. In some cases this isn't an issue, as the data center network is fast and reliable.
But organizations often consider hyperconverged systems as replacements for aging hardware in the data center. If these solutions are replacing old compute and storage units, what does the network look like? Can it provide the high-speed interconnects necessary to reduce latency between hyperconverged nodes? Will it be reliable over time to ensure that communications aren't lost between cluster members? Existing data center networks likely weren't built to deliver the kind of low-latency interconnects that hyperconverged solutions require. The migration away from the traditional three-tier architecture to the hyperconverged-friendly, two-tier spine-and-leaf design is underway, but it is by no means a certainty in any given environment.
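To put rough numbers on those questions, here is a back-of-the-envelope sketch in Python. Every figure in it is an assumption chosen for illustration, not a measurement from any product: a 64 KB replicated write, 5 microseconds of forwarding delay per switch, five switch hops through a three-tier path versus three through a leaf-spine path.

```python
# Back-of-the-envelope latency comparison. All figures below are
# illustrative assumptions, not measurements from any real network.

WRITE_BYTES = 64 * 1024   # assumed size of a replicated storage write
PER_HOP_US = 5.0          # assumed forwarding delay per switch, in microseconds

def wire_time_us(bits_per_sec: float) -> float:
    """Serialization delay for the write at a given link speed, in microseconds."""
    return WRITE_BYTES * 8 / bits_per_sec * 1e6

for bps, label in [(1e9, "1 GbE"), (10e9, "10 GbE")]:
    # Three-tier path: access -> aggregation -> core -> aggregation -> access (5 switches).
    # Leaf-spine path: leaf -> spine -> leaf (3 switches).
    for hops, topo in [(5, "three-tier"), (3, "leaf-spine")]:
        total = wire_time_us(bps) + hops * PER_HOP_US
        print(f"{label:6} over {topo:10}: ~{total:7.1f} us per replicated write")
```

Even under these generous assumptions, the serialization delay of a 1 Gbps link dwarfs whatever the shorter leaf-spine path saves in hop count, which is why the speed and design of the node interconnect matter as much as the topology.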
In order for hyperconverged systems to truly feel like a complete solution for the modern data center, vendors must start offering networking with their products. Assuming the existing infrastructure is up to par isn't good enough to ensure everything runs at peak efficiency. Networking interfaces in hyperconverged nodes shouldn't be an afterthought; they should be integrated into the solution just like a storage controller or a RAM module.
Additionally, when new nodes are brought online in an existing cluster, there should be some kind of spine interconnect available to pull all the pieces together. This not only ensures the communication requirements are met, but also helps during pilot programs and lab tests. Giving customers a self-contained network for their proof of concept should help adoption rates climb, since no one has to wait on the networking team to provision ports for a test.
So how can hyperconverged vendors add networking without adding huge R&D costs? White-box switching is certainly one option. The availability of low-cost hardware running programmable network operating systems with extensible API support should allow hyperconvergence vendors to integrate networking components into their management interfaces with very little difficulty. This would make the networking aspect as seamless as any other piece of the solution. With a powerful management platform driving the underlying infrastructure, hyperconverged solutions could deliver very different performance capabilities than traditional three-tier and leaf-spine architectures.
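As a minimal sketch of what that integration could look like, the snippet below provisions a switch port for a newly added node through a REST-style API of the kind many white-box network operating systems expose. The base URL, endpoint path, port name, and payload fields are hypothetical, standing in for whatever interface a given switch OS actually provides.

```python
import requests

# Hypothetical white-box switch REST API. The base URL, endpoint path,
# and payload fields are illustrative assumptions, not a real vendor's API.
SWITCH_API = "https://leaf1.example.com/api/v1"
AUTH = ("admin", "admin")  # placeholder credentials for a lab switch

def provision_node_port(port: str, cluster_vlan: int, mtu: int = 9216) -> None:
    """Configure a leaf port for a newly added hyperconverged node:
    enable the interface, set jumbo frames for east-west storage
    traffic, and tag the cluster VLAN."""
    payload = {
        "enabled": True,
        "mtu": mtu,                       # jumbo frames for storage replication
        "vlans": {"tagged": [cluster_vlan]},
    }
    resp = requests.put(f"{SWITCH_API}/interfaces/{port}", json=payload, auth=AUTH)
    resp.raise_for_status()

# When the management platform detects a new node cabled to port swp10,
# it can bring the network configuration along automatically:
provision_node_port("swp10", cluster_vlan=100)
```

The point of a sketch like this is that the customer never sees it: the same management interface that adds a node to the cluster would drive the switch, so the network stops being a separate provisioning step.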
Another solution is to partner with a pure-play network vendor that has no aspirations to grow into a competitor. I recently spoke with Mat Matthews of Plexxi about this very subject at VMworld. He told me that hyperconverged vendors are including Plexxi data center network switches in their total package. Working with a third party that provides networking, and none of the other pieces of the hyperconverged system, offers some protection against a partner changing its approach down the road. A third-party network partnership would be a great fit for hyperconverged players that don't have the development staff to integrate a white-box solution, or the support expertise to troubleshoot network issues when the entire stack is branded by one company.
Networking is not something hyperconvergence vendors can simply ignore. Hoping that the network will be good enough to run a cluster is very different from knowing that the network can help it run at peak performance and impress a customer. It's time for hyperconverged system manufacturers to realize that the stack doesn't stop at the Ethernet port on the back of the system. It extends across the entire cluster and includes every piece of the solution.