It used to be that the only way to optimize a network was to strictly adhere to the best-practice guidelines that the network equipment manufacturer recommends. While this remains a valid process for brand-new networks, existing networks can and should be further optimized over time. Cloudsourcing network performance is one emerging method that can be used to get the most out of your existing network with relatively little time and effort.
A comparison-based optimization methodology
We’ve always been told to learn from the failures of others so that we don’t inadvertently repeat their mistakes. While this is sage advice, we can also learn from the successes of others. For network architects and administrators, however, designing and deploying a highly optimized network has often come down to painstaking trial-and-error techniques. Because we have had no way to quickly and easily compare how our network performance stacks up against others’, most of us have no idea whether a network is properly optimized – or whether configuration changes have accidentally slowed it down.
Fortunately, cloud-managed network service providers including Aruba, Cisco Meraki, and Extreme are beginning to give customers the ability to gauge the performance of their existing networks by offering a glimpse into similar customer networks – and showing how major or minor configuration changes could greatly enhance performance from an end-user perspective.
With this approach, known as cloudsourcing, cloud-managed network service providers use artificial intelligence (AI) to collect performance-related data from their existing customer base and analyze it to benchmark which wired and wireless network configurations produce the best performance numbers. Customers interested in making performance adjustments can then compare how their underperforming setup differs from those that excel – and what changes can be made to squeeze out improved latency, jitter, and packet-loss numbers.
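Conceptually, the first step is a simple comparison of a customer's metrics against a peer benchmark. The sketch below illustrates the idea in Python; the metric names, benchmark values, and tolerance are hypothetical, not any vendor's actual API or thresholds.

```python
# Hypothetical sketch: flag metrics that lag behind a peer benchmark.
# All field names and numbers are illustrative only.

PEER_BENCHMARK = {          # medians from similar, well-performing networks
    "latency_ms": 12.0,
    "jitter_ms": 2.0,
    "packet_loss_pct": 0.1,
}

def flag_underperforming(metrics: dict, tolerance: float = 1.25) -> list:
    """Return the metrics that exceed the peer median by more than `tolerance`x."""
    return [
        name for name, peer_value in PEER_BENCHMARK.items()
        if metrics.get(name, 0.0) > peer_value * tolerance
    ]

my_network = {"latency_ms": 18.5, "jitter_ms": 1.8, "packet_loss_pct": 0.4}
print(flag_underperforming(my_network))  # → ['latency_ms', 'packet_loss_pct']
```

Jitter is within tolerance here, so only latency and packet loss would be called out for attention.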
How cloudsourced network optimization works
In a recent Cisco Meraki blog, for example, the company announced a beta feature known as Meraki Health. As part of this initiative, customers will be granted “data-driven recommendations for ways to improve performance and capacity based on observations derived from over 3.4M unique networks on the Meraki platform.” While this may not seem like a big deal, it represents the ability to see how your network stacks up against similar networks – and what configuration changes could be made to improve performance in specific areas.
Having access to anonymized performance and configuration data from other network infrastructure customers with similar architectures removes much of the guesswork from network optimization. As most of us know, even the smallest configuration change can bring about significant performance gains. However, it is often difficult to pinpoint which configuration "knobs" should be adjusted, as there are literally hundreds to thousands of them. This is where streaming data telemetry, performance baselining, and AI come into play. In the background, the AI first baselines the customer networks that are outperforming all others. Network configurations are then compared to see which settings are producing the best results. Those settings can then be recommended through the online health portal to customers whose networks are lagging. If the recommendations are applied, network administrators can see whether their own performance baseline improves and whether further changes could bring additional gains.
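The comparison step described above can be sketched in a few lines: rank peer networks by a performance metric, find the settings the top performers agree on, and suggest any that the lagging network is missing. Everything here – the metric, the setting names, the data shape – is a hypothetical illustration, not Meraki's actual implementation.

```python
# Hypothetical sketch of comparison-based recommendations: surface the config
# settings that the fastest peer networks share but a lagging network lacks.

networks = [
    {"latency_ms": 8,  "config": {"band_steering": True,  "min_bitrate": 12}},
    {"latency_ms": 9,  "config": {"band_steering": True,  "min_bitrate": 12}},
    {"latency_ms": 25, "config": {"band_steering": False, "min_bitrate": 1}},
]

def recommend(lagging: dict, peers: list, top_n: int = 2) -> dict:
    """Suggest settings the fastest `top_n` peers agree on but the lagging network lacks."""
    best = sorted(peers, key=lambda n: n["latency_ms"])[:top_n]
    suggestions = {}
    for key in best[0]["config"]:
        values = {n["config"][key] for n in best}
        if len(values) == 1:                      # the top performers agree
            (value,) = values
            if lagging["config"].get(key) != value:
                suggestions[key] = value
    return suggestions

print(recommend(networks[2], networks[:2]))
# → {'band_steering': True, 'min_bitrate': 12}
```

In production this comparison would run over anonymized telemetry from thousands of networks and many more metrics than latency alone, but the basic pattern – baseline the best, diff the rest – is the same.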
Monitor and compare network performance over time
The other great aspect of cloudsourcing network performance is that it follows a continuous-improvement lifecycle. The cloud-managed platform is constantly pulling in network performance data – and if application usage, data flows, or other aspects of the network change, so will the performance baseline and configuration recommendations. This is not a "one and done" process but one that is always on the lookout for further improvements.
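A minimal way to picture this continuous re-baselining is a rolling window over recent telemetry: as new samples arrive, older ones age out and the baseline shifts with them. The class below is an illustrative sketch, not any vendor's mechanism.

```python
# Hypothetical sketch: a baseline that drifts with the most recent telemetry,
# so recommendations are always judged against current conditions.
from collections import deque
from statistics import median

class RollingBaseline:
    """Track the median of the most recent latency samples."""
    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)   # old samples age out automatically

    def add(self, latency_ms: float) -> float:
        self.samples.append(latency_ms)
        return median(self.samples)

baseline = RollingBaseline(window=3)
for sample in (20, 18, 19, 12, 11):           # traffic pattern shifts mid-stream
    current = baseline.add(sample)
print(current)  # → 12 – the baseline has drifted down with the newer samples
```

Because the baseline reflects only recent behavior, a change in application mix or traffic flows automatically shifts what "normal" looks like, which is exactly why the recommendations are never one and done.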