According to data from Crehan Research, sales of 100 and 25 Gbps Ethernet NICs grew 40% year-over-year and accounted for 24% of total high-speed NIC revenue in 2020. There is a clear march to higher data rates, with 100 Gbps as the primary target. Below are my thoughts on what is driving enterprises to upgrade their connectivity to 100 Gbps.
The first answer is the normal progression of IT infrastructure, networks included, toward faster, smaller, and cheaper. Beyond that, four specific factors are driving enterprise interest in higher data rates.
1) Compute-intensive workloads are growing: High-performance computing and interactive applications that are sensitive to throughput, latency, and jitter (such as telemedicine and high-frequency trading) will always demand the highest available speeds, and thus benefit greatly from 100 Gbps. Organizations that deliver these applications as services have been early adopters, and as these workloads have become more common, demand for 100 Gbps has grown with them.
2) Increasing network loading: Broad adoption of mobile devices and a myriad of online services over the past several years has increased the number and duration of connections, as well as the amount of data transferred over them. The Covid-19 pandemic accelerated the use of eCommerce and online services, and an unprecedented number of knowledge workers shifted to working remotely. Many organizations have already announced that they’re making remote work permanent for some or all of their employees. All of that means much more traffic over the network, particularly for collaboration apps like Slack, Zoom, and Jira. This sudden and perhaps permanent transformation has two implications that are driving 100 Gbps adoption. The first is simple traffic volume: with more companies and workers depending on digital workloads, traffic has grown dramatically compared to expected trendlines. The second is that much of this traffic (like video) is exceptionally sensitive to latency and jitter. Moving traffic faster raises overall throughput and removes the bottlenecks that introduce those delays.
3) AI transformation: AI (a specific type of compute-intensive workload) has reached the point where it is being integrated into every major enterprise function, including marketing, HR, sales, eCommerce, manufacturing, customer retention, and IT operations. This increasingly broad adoption impacts networks in two major ways that come down to throughput and latency. First, AI and its underlying ML models increase the amount of East-West traffic, because training machine learning models is very data- and I/O-intensive (i.e., model training requires a lot of bandwidth). And the faster models can be trained, the faster they can be put into production to deliver the intended benefits. Second, AI-powered solutions often must deliver results in real time and are built on high-performance architectures that provide millisecond responsiveness to customer interactions. All of this requires transmitting data as fast as possible, which, at today’s commercially available standard, means 100 Gbps.
4) Maturity and decreasing cost: As the other three factors have increased demand, network infrastructure technology (and the accompanying security and network monitoring toolsets) has steadily matured, and the cost of the underlying components has fallen. There is now a strong supply of hardware to meet the demand for higher speeds.
Managing the upgrade process
Upgrading to 100 Gbps in the data center involves much more than just replacing infrastructure. Here are some important considerations that enterprises should keep in mind while planning an upgrade to 100 Gbps.
Will our tooling keep up?
Organizations must update their network security and performance management tools in lockstep as they upgrade their core networks to 100 Gbps. It’s a best practice to upgrade these components early in the process so that security protections are maintained and IT can still monitor the network and troubleshoot any glitches. At the very least, the monitoring fabric must be upgraded before or along with the core network and provide a bridge to tools with lower ingestion rates. Doing so also lets teams compare metrics from before and after key upgrades. Security solutions like Network Detection and Response become even more critical at higher data rates: clever criminals exploit higher and faster traffic by tiptoeing through the network with low-and-slow attacks, knowing that higher speeds often mean missed packets, and thus missed intrusions, if monitoring and security systems aren’t upgraded to support the new speeds. Similarly, for performance management, bottlenecks carry higher consequences and are harder to troubleshoot.
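To put “missed packets” in perspective, here is a rough back-of-the-envelope sketch (my own illustration with assumed drop rates and worst-case minimum-size frames, not measured figures from any vendor) of how quickly even tiny loss rates add up at 100 Gbps:

# Illustrative only: packets a monitoring tool misses per hour at
# 100 Gbps for a few small, assumed drop rates. Uses worst-case
# 64-byte frames, which occupy 84 bytes on the wire once the 8-byte
# preamble/SFD and 12-byte interframe gap are added.

LINK_BPS = 100e9
WIRE_BITS_PER_FRAME = 84 * 8              # 672 bits per minimum-size frame

pps = LINK_BPS / WIRE_BITS_PER_FRAME      # ~148.8 million packets/second

for drop_rate in (1e-6, 1e-4, 1e-2):      # 0.0001%, 0.01%, 1%
    missed_per_hour = pps * drop_rate * 3600
    print(f"drop rate {drop_rate:.4%}: ~{missed_per_hour:,.0f} packets missed per hour")

Even a 0.0001% drop rate leaves roughly half a million packets per hour uninspected, and any one of them could carry the single connection a low-and-slow attacker needs.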
How do we bridge between high- and low-speed network sections?
Complex enterprise networks are rarely upgraded in their entirety all at once. A partial upgrade may require network packet brokers (NPBs) to bridge between new and old sections of the network that operate at different data rates. NPBs can also filter and load-balance traffic to fit the ingestion limits of existing security and monitoring tools, extending their useful life.
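As a rough illustration of the rate-matching problem an NPB solves, consider this hypothetical sizing exercise (the link counts, utilization, and tool capacity below are made-up numbers for the sketch, not recommendations):

# Hypothetical sizing exercise: how much traffic an NPB must filter,
# slice, or load-balance away so an existing tool keeps up after a
# partial upgrade to 100 Gbps.

TAPPED_LINKS_GBPS = [100, 100, 40, 10]   # monitored segments, mixed speeds
AVG_UTILIZATION = 0.35                   # assumed average link utilization
TOOL_CAPACITY_GBPS = 40                  # ingest limit of a legacy tool

offered = sum(TAPPED_LINKS_GBPS) * AVG_UTILIZATION   # ~87.5 Gbps offered
excess = max(0.0, offered - TOOL_CAPACITY_GBPS)

print(f"offered load: {offered:.1f} Gbps; tool limit: {TOOL_CAPACITY_GBPS} Gbps")
print(f"the NPB must remove ~{excess:.1f} Gbps ({excess / offered:.0%} of monitored traffic)")

The same arithmetic tells you whether filtering alone is enough or whether traffic must also be load-balanced across several tool instances.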
How do we maintain network visibility at our upgraded speeds?
Lossless monitoring and the real-time observation of metrics, processing, and routing are technically challenging when back-to-back data packets can arrive at the monitoring fabric as little as 6.7 nanoseconds apart. Enterprises must carefully evaluate prospective monitoring solutions for the ability to reliably acquire data packets and observe key performance indicators at the resolution that corresponds to their services and use cases. For example, if an application cannot tolerate more than five milliseconds of latency, then the monitoring solution must observe latency with at least millisecond resolution.
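That 6.7-nanosecond figure falls straight out of Ethernet framing arithmetic; here is a quick check, assuming worst-case minimum-size 64-byte frames on a 100 Gbps link:

# Where the ~6.7 ns inter-packet interval comes from: a minimum-size
# 64-byte Ethernet frame occupies 84 bytes on the wire once the 8-byte
# preamble/SFD and 12-byte interframe gap are included.

WIRE_BYTES = 64 + 8 + 12                  # 84 bytes per frame on the wire
LINK_BPS = 100e9                          # 100 Gbps

interval_s = (WIRE_BYTES * 8) / LINK_BPS  # 672 bits / 100 Gbps
print(f"{interval_s * 1e9:.2f} ns between back-to-back frames")   # prints 6.72 ns

Larger average frame sizes stretch that interval, but a lossless monitoring fabric still has to be provisioned for this worst case.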
In summary
Standardization on the 100 Gbps data rate has occurred, and it will be in place for years to come. Every organization is either using this data rate or planning to. As with all generational changes and upgrades, plan carefully, make the monitoring fabric an integral part of that plan, and put that monitoring in place early to ensure a smooth and secure transition to the new data rate.
Ron Stein is Director of Product Marketing at cPacket Networks.