For the entirety of its existence, the network has functioned like a utility company: a service available for all manner of useful functions, but one that offers almost no visibility to developers or the applications they write to use it.
Now, the call is going out to make the network responsive to application needs. We have to create harmony between two things that, until now, have been almost entirely ignorant of each other. But if the network becomes more responsive to applications, applications also have to give something back to the network.
QoS was one attempt to make the network more flexible based on different application requirements, such as voice. Not that the network engineers got it right the first time; it took priority queuing, weighted fair queuing, class-based weighted fair queuing and finally low-latency queuing to capture the characteristics of voice traffic.
It might have been much simpler in the beginning if the voice stream could have told the network, "I need to have some guaranteed bandwidth to make sure my packets get to their destination in the right order. But I don't need all the bandwidth on the link."
Now, the promise of SDN is that the network can respond to application demands. Apps can ask for things, and the network can be reconfigured to provide resources as needed. That's a powerful tool for developers. It's like teaching children to ask for what they want instead of shyly hoping they'll be given something.
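To make that idea a little more concrete, here's a minimal sketch of what an application's resource request might look like against a hypothetical controller northbound API. The endpoint, field names, and values are illustrative assumptions, not any vendor's actual interface.

```python
# Purely illustrative: what an application's resource request might look
# like against a hypothetical SDN controller northbound API. The endpoint
# and field names are assumptions, not any real vendor's interface.

import json
import urllib.request

request_body = {
    "app": "voice-gateway",
    "flow": {"protocol": "udp", "dst_port_range": [16384, 32767]},
    "needs": {
        "guaranteed_kbps": 128,   # enough for the call, not the whole link
        "max_latency_ms": 150,
        "in_order_delivery": True,
    },
}

req = urllib.request.Request(
    "http://controller.example.net/api/flow-requests",   # hypothetical endpoint
    data=json.dumps(request_body).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # the controller would answer with an
#                                         # accepted/denied decision and the
#                                         # resources actually granted
```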
[There's a lot of talk about the transformation of the data center via software, but what's real and what's hype? Get some perspective in "Software Defined Data Center: Marketing or Meaty?"]
The ability to affect your operating environment is huge for developers who want to ensure availability and performance. But that ability comes with some responsibility.
For example, applications need visibility into network conditions in order to request resources. Those same applications also need to listen for changes in those conditions and respond accordingly when the resources they want aren't available.
Think of it like a GPS in your car with a real-time feed of local traffic conditions. If a particular link is blocked, the network should be able to make that condition known to the application and suggest alternate links. One might be high speed but carry a higher cost, which matters for real-time traffic. The other might be slower but less expensive, which is ideal for bulk data. Either way, traffic gets handled appropriately based on current network conditions.
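From the application's side, that choice can be as simple as a function that weighs the links the network advertises. The sketch below assumes a hypothetical controller that reports latency, cost, and availability for each candidate link; the names and numbers are made up for illustration.

```python
# Hypothetical sketch: an application picks a link based on conditions
# advertised by the network. The field names and values are illustrative
# assumptions, not a real controller interface.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float      # reported one-way latency
    cost_per_gb: float     # relative monetary cost
    available: bool        # whether the network will accept new flows

def choose_link(links, traffic_class):
    """Pick a link for a flow: real-time traffic favors latency,
    bulk traffic favors cost."""
    usable = [l for l in links if l.available]
    if not usable:
        raise RuntimeError("no usable links; fall back to default routing")
    if traffic_class == "realtime":
        return min(usable, key=lambda l: l.latency_ms)
    return min(usable, key=lambda l: l.cost_per_gb)

# Example: two paths advertised by the network
links = [
    Link("toll-road", latency_ms=5.0, cost_per_gb=0.12, available=True),
    Link("surface-street", latency_ms=40.0, cost_per_gb=0.01, available=True),
]

print(choose_link(links, "realtime").name)  # toll-road
print(choose_link(links, "bulk").name)      # surface-street
```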
You're probably saying, "That's silly. Why not just make the decision for the application? Why provide feedback at all?" That's a fair point. Think about the first-generation GPS units that did automatic traffic rerouting based on conditions. You might be offered a strange surface-street route to your destination instead of a toll road that would cost you something but get you there much faster. Wouldn't you want to ask your GPS why it chose that particular route? What if you had a pocket full of quarters and couldn't afford to be late that day? Normally, you'd love the surface-street option. But conditions change, and default choices need to be reexamined.
Making the choice is easy. We've built enough intelligence into the network to rapidly decide which route is best based on a number of conditions. What's important in the new network is offering the application a choice based on data that the network can provide.
If we can reconfigure on the fly and tag packets to choose certain links, whether through a tunnel to an exit point or a hop-by-hop tagging protocol, then we should offer that choice to application developers.
Why spend our time making the network do all the heavy lifting? Let the application make the decision before the first packet is sent. The network then simply honors the choice made at the higher layer and sends the packet along the selected path.
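One modest, real-world way an application can express its decision before the first packet leaves the host is to mark its own traffic and let network policy map that marking to a tunnel or tag. A minimal sketch using standard socket options (Linux/Unix; the DSCP value shown is the conventional EF marking for voice):

```python
import socket

# Minimal sketch: an application marks its own traffic before the first
# packet is sent. DSCP EF (46) occupies the upper six bits of the IP TOS
# byte, so the TOS value is 46 << 2 = 0xB8. Whether the network honors
# this marking, or maps it to a tunnel or tag, is up to network policy.

EF_TOS = 46 << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

# Every datagram sent on this socket now carries the EF marking.
sock.sendto(b"voice payload", ("192.0.2.10", 20000))
```

The point isn't the marking itself; it's that the decision originates with the application rather than being inferred by the network after the fact.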
Again, QoS is a good example. It can only make packet decisions based on a small amount of information, like source IP address or destination port. Some vendors have implemented more advanced matching, such as Cisco's NBAR, but those features are far from universal and their classifications don't follow traffic from device to device. To be really useful, QoS needs a big-picture view.
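To see just how little information that is, here's a toy classifier that does nothing but match header fields, which is essentially what a traditional QoS policy boils down to. The port ranges and DSCP values are illustrative, not a recommended policy.

```python
# Toy illustration of traditional header-based QoS classification:
# the only inputs are fields from the packet header itself.
# Ports and DSCP values below are illustrative, not a recommended policy.

def classify(src_ip, dst_ip, protocol, src_port, dst_port):
    """Return a DSCP marking based solely on header fields."""
    if protocol == "udp" and 16384 <= dst_port <= 32767:
        return 46   # EF: assume this UDP port range carries voice traffic
    if protocol == "tcp" and dst_port in (5060, 5061):
        return 24   # CS3: SIP signaling
    return 0        # best effort for everything else

print(classify("10.1.1.5", "10.2.2.9", "udp", 50000, 20000))  # 46
```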
It will take some work to ensure that applications don't take advantage of their influence. A set of policies can be enacted to discard application requests that fall outside a baseline or threshold. For instance, maybe the satellite link is reserved for priority traffic in the event of an outage. Even if an application requests that link, an overriding rule can keep its traffic from transiting it.
Triggers can also be built in along the way to send notifications to stakeholders when developers get greedy and send their traffic along expensive or priority links. This can prevent sticker shock later on when a developer unknowingly prioritizes an expensive link for non-critical traffic. Those are the kinds of safeguards that the bean counters love.
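Here's a rough sketch of how those guardrails and triggers might sit between application requests and the network: out-of-policy requests get discarded, and requests for expensive links generate a notification to stakeholders. The link names, threshold, and notification hook are all illustrative assumptions, not a real product's policy language.

```python
# Rough sketch of policy guardrails between application requests and the
# network. Link names, thresholds, and the notification hook are
# illustrative assumptions.

EXPENSIVE_LINKS = {"satellite", "toll-road"}
MAX_GUARANTEED_KBPS = 1000          # baseline per-application ceiling

def notify_stakeholders(message):
    # Placeholder: in practice this might send email, open a ticket,
    # or post to a chat channel.
    print(f"ALERT: {message}")

def evaluate_request(app, link, guaranteed_kbps, priority, outage_in_progress=False):
    """Return True if the request is admitted, False if it's discarded."""
    # Overriding rule: the satellite link is reserved for priority
    # traffic during an outage, no matter what the application asks for.
    if link == "satellite" and not (priority and outage_in_progress):
        return False

    # Discard requests that fall outside the baseline.
    if guaranteed_kbps > MAX_GUARANTEED_KBPS:
        return False

    # Trigger: flag expensive-link usage so nobody gets sticker shock later.
    if link in EXPENSIVE_LINKS:
        notify_stakeholders(f"{app} requested {guaranteed_kbps} kbps on {link}")
    return True

print(evaluate_request("backup-job", "satellite", 500, priority=False))    # False
print(evaluate_request("voice-gateway", "toll-road", 128, priority=True))  # True, with an alert
```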
The network and the application can no longer exist in separate black boxes. At the same time, these two need to learn to get along with each other and work together to make life easier on everyone. We've spent most of our lives trying to make the network listen. Now it's time to make the applications do the same.
What do you think? Should applications be responsive to the network? Or should the network stay quiet and make all the decisions without application input?
Are you gearing up to bring QoS to your network, or do you want a deeper understanding of how best to configure it? Check out Ethan Banks' workshop, "How To Set Up Network QoS for Voice, Video & Data" at Interop New York this October.