The networking industry has always been obsessed with speeds and feeds. At the turn of the century, proprietary hardware rose to the top of the technology heap, driving increases in both. We gained all sorts of new terminology, like backplanes and crossbar switching. We screamed at each other over which QoS technique was optimal for achieving the network scale required to handle the burgeoning demand for online everything, and IP networking became the norm, with a spate of 802.x protocols leading the way. And for a while, that worked pretty well, helping the Internet get through its awkward teenage years, when Web 2.0 and ecommerce took over and placed even greater demand on everyone’s infrastructure.
Fast forward and the Internet has grown up. With over 4.2 billion users online, speeds and feeds in the network are still the No. 1 priority.
But today, it’s not just about the speeds and feeds of packets. That’s the easy part. The hard part now is operational speeds and feeds. Being able to scale the network is no longer a matter of adding one more port to a trunk, or moving from 10 Gbps to 100 Gbps; it’s a matter of management and orchestration, and how quickly you can provision, configure, and scale out the network.
Today’s network needs operational agility as much as dev and ops do. Architects and engineers must be able to rapidly provision, configure, and scale network functions with the push of a button or the execution of a script. Just as there’s not enough time to rack and stack to scale, there’s not enough time to manually provision the capacity necessary to keep up with application and user demand.
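What "push-button" provisioning looks like in practice is usually a reconciliation loop: declare the network segments you want, compare against what exists, and apply only the difference. Here's a minimal conceptual sketch of that pattern; all names (NetworkSpec, reconcile) are illustrative, not any real product's API.

```python
# Conceptual sketch of script-driven provisioning: given a desired inventory
# of network segments, compute only the changes needed to converge on it.
# NetworkSpec and reconcile are hypothetical names for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkSpec:
    name: str
    vlan: int
    cidr: str

def reconcile(desired, current):
    """Return (to_create, to_delete) so `current` converges on `desired`."""
    desired_by_name = {n.name: n for n in desired}
    current_by_name = {n.name: n for n in current}
    # Create anything missing or changed; delete anything no longer declared.
    to_create = [n for name, n in desired_by_name.items()
                 if current_by_name.get(name) != n]
    to_delete = [n for name, n in current_by_name.items()
                 if name not in desired_by_name]
    return to_create, to_delete

# Example: one segment already exists, one is new, one is stale.
current = [NetworkSpec("web", 10, "10.0.10.0/24"),
           NetworkSpec("old-lab", 99, "10.0.99.0/24")]
desired = [NetworkSpec("web", 10, "10.0.10.0/24"),
           NetworkSpec("db", 20, "10.0.20.0/24")]

to_create, to_delete = reconcile(desired, current)
print([n.name for n in to_create])  # -> ['db']
print([n.name for n in to_delete])  # -> ['old-lab']
```

In a real environment the create/delete lists would be fed to an orchestration API rather than printed, but the declarative shape is the same: the script, not a human, decides what work needs doing.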
This is where architectures like NFV and SDN come in handy, because both are aimed at improving the network’s operational speeds and feeds. With their focus on orchestration and the deployment process, they take advantage of virtual network functions (VNFs) and frameworks like OpenStack to speed the provisioning of network capacity in all its various functions. From load balancing (as a service) to compute (as a service) to core networking (as a service), management and orchestration (MANO) is becoming the overarching theme in scaling networks without breaking the bank -- or the engineers’ backs.
While originally focused on service providers -- where network scale was increasing exponentially and by the second, it seemed -- NFV and VNFs are rapidly becoming an enterprise-enabling technology capable of providing the agility required of modern networks. Thanks to microservices and mobile computing, the demand for agile, responsive networking continues to put pressure on network teams to provide the kind of push-button provisioning available to organizations in the public cloud.
Building that kind of environment on-premises requires the abstraction provided by frameworks like OpenStack, which lets organizations expose the entire network -- from layer 2 through layer 7 -- as a set of automatically provisionable services. Only by building out such an environment -- which may or may not be called cloud -- can organizations deliver the agility and rapidity of provisioning the business demands today.
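Treating layers 2 through 7 as provisionable services implies an ordering: an L4 load balancer can't come up before the L3 router and L2 network beneath it. The toy sketch below shows that dependency-ordered view; in a real OpenStack deployment each step would be an API call (e.g., Neutron for L2/L3, Octavia for load balancing), and the service catalog and function names here are purely illustrative assumptions.

```python
# Toy sketch of layers 2-7 as an ordered catalog of provisionable services.
# SERVICES and provision_order are hypothetical names for illustration only;
# real deployments would drive OpenStack APIs rather than a dict.

SERVICES = {
    "network":       {"layer": 2, "needs": []},
    "router":        {"layer": 3, "needs": ["network"]},
    "load_balancer": {"layer": 4, "needs": ["router"]},
    "waf":           {"layer": 7, "needs": ["load_balancer"]},
}

def provision_order(requested):
    """Expand dependencies and return services sorted lowest layer first."""
    resolved = set()
    def visit(name):
        for dep in SERVICES[name]["needs"]:
            visit(dep)
        resolved.add(name)
    for name in requested:
        visit(name)
    return sorted(resolved, key=lambda n: SERVICES[n]["layer"])

# Asking for only the top-of-stack service pulls in everything beneath it.
print(provision_order(["waf"]))
# -> ['network', 'router', 'load_balancer', 'waf']
```

The point of the abstraction is exactly this: an operator requests a layer 7 service, and the framework works out and provisions the full stack beneath it.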
The increases we’ve seen in OpenStack adoption over the past year and a growing preference for virtualized network functions are thus no surprise. The network has always responded to changes in application architectures and demand, sometimes with dramatic changes to improve packet speeds and feeds. The TLA soup being adopted now is a response not only to the basic need for packet speeds and feeds, but to the associated requirement for improving the operational speeds and feeds that let network teams scale as easily as the network itself.
Looking for more hot technology trends? Learn about the Future of Networking at a two-day summit presented by Packet Pushers at Interop Las Vegas, May 2-6. Don't miss out -- Register now!