Network tool sprawl is often regarded with derision, seen as the out-of-control result of mismanagement or of knee-jerk reactions to changing solutions and conditions. Yet it has existed for valid reasons. The fact is that network demands, security threats, and the technologies that evolve to meet them are real, and they require new and different solutions.
It is easy to see why the ability to swiftly apply a new solution or make a change may be important. Given the immense dynamic range of the network and security threats, the ability to maintain flexibility and agility with solutions to keep up or stay ahead is vital. At the same time, proper controls must be in place to ensure that network efficacy is in no way compromised in the process.
There are three principles that help organizations effectively manage the propagation and deployment of tools or solutions on the network. Following them keeps the traditional problems associated with tool sprawl in check while providing the flexibility and breadth needed to secure, manage, and maintain networks.
First, organizations need a way to achieve flexibility and deploy what is needed in a timely manner. At the same time, every new deployment or change needs protections in place to prevent a solution from undermining the performance, reliability, availability, and scale of the network. Many organizations, unfortunately, see these requirements as an "either-or" choice, which is exactly why deploying new solutions on the network is so difficult and challenging.
Most companies take a "guilty until proven innocent" approach toward adding new solutions or making changes. Proving that a solution fully meets the requirements for network efficacy is a long, involved, and often convoluted process. Sometimes requirements are fully established and articulated, but often the qualification process is not clear-cut and may even be decided on the fly. Sometimes fear is the primary driving factor, producing a strong bias toward severely limiting what can be deployed.
Maintaining the health and power of the network is a completely reasonable requirement, but at the same time, organizations need to adapt to changing threats and situations by deploying the latest technologies to stay ahead. Too many have been hamstrung to the point that some rarely even attempt to deploy something new. These difficulties rank among the more significant challenges security teams face.
Organizations need a pre-approved deployment point that can vouch for the effects of a solution on the network and prevent any interference with performance, reliability, availability, and scale. Such devices that can serve as a deployment hub and as an intelligent cross-connect are just now coming to market. Ideally, such a deployment hub would also overcome deployment challenges associated with limited ports for accessing traffic, rack space limitations, and operational complexities of deploying in-line and out-of-band solutions.
A second major principle is the ability to establish processing order and intelligent service chaining. This includes not only the ability to easily define the sequence in which traffic is processed or acted upon, but also the tools to determine the health of a solution and the conditions under which a solution should be bypassed, have its traffic diverted elsewhere, or have its flow stopped altogether. Sometimes this need for ordering, or the deployment problems that arise from its absence, is the primary concern raised about tool sprawl. Being able to establish intelligent cross-connect service chaining, imposing order on the "chaos," will ease objections and pave the way for a greater number and variety of solutions as needs dictate.
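The ordering, health-check, and failure-handling logic described here can be sketched in a few lines of Python. This is a hypothetical illustration only: `ChainStage`, `FailAction`, and `run_chain` are invented names, not a real product API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List, Optional


class FailAction(Enum):
    """What to do with traffic when a stage in the chain is unhealthy."""
    BYPASS = "bypass"   # skip the stage, continue down the chain
    DIVERT = "divert"   # send the traffic to an alternate destination
    BLOCK = "block"     # stop the flow altogether


@dataclass
class ChainStage:
    """One tool in the service chain (all names are illustrative)."""
    name: str
    process: Callable[[bytes], bytes]   # the tool's traffic handler
    health_check: Callable[[], bool]    # e.g. a heartbeat or link-state probe
    on_failure: FailAction = FailAction.BYPASS


def run_chain(packet: bytes, chain: List[ChainStage]) -> Optional[bytes]:
    """Pass a packet through each stage in its defined order, applying
    the configured failure action whenever a health check fails."""
    for stage in chain:
        if stage.health_check():
            packet = stage.process(packet)
        elif stage.on_failure is FailAction.BYPASS:
            continue                      # skip only this stage
        elif stage.on_failure is FailAction.DIVERT:
            divert_to_alternate(packet, stage.name)
            return None
        else:                             # FailAction.BLOCK
            return None                   # drop the flow
    return packet


def divert_to_alternate(packet: bytes, stage_name: str) -> None:
    """Placeholder for handing traffic to an alternate path or tool."""
```

A chain is then just an ordered list of stages, and the per-stage `on_failure` setting encodes whether an unhealthy tool is bypassed, routed around, or allowed to stop the flow.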
A third principle relates to the management of data and traffic flows to meet compliance requirements and implement consistent policies for security and other network mandates. Such management and traffic flow mediation and adaptation, for instance, should determine which traffic, if any, can be decrypted, where it is decrypted, and what is done with the decrypted output, including issues of storage and transmission. Data management and traffic flow mediation may also be required to determine when data extraction should be used to provide metadata rather than acting on the entire content of each packet, or when data masking of private and sensitive information is required.
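These per-flow decisions can be pictured as a policy table consulted before traffic is forwarded. The sketch below is hypothetical: the `FlowPolicy` fields, traffic classes, and masking pattern are invented for illustration, not drawn from any particular product.

```python
import re
from dataclasses import dataclass


@dataclass
class FlowPolicy:
    """Hypothetical per-flow policy record; field names are illustrative."""
    decrypt: bool         # may this traffic be decrypted at all?
    decrypt_zone: str     # where decryption is allowed to occur
    metadata_only: bool   # forward extracted metadata, not full payloads
    mask_sensitive: bool  # redact private data before it leaves the hub


# Example policy table keyed by traffic class (illustrative values).
POLICIES = {
    "pci":   FlowPolicy(decrypt=True, decrypt_zone="secure-enclave",
                        metadata_only=False, mask_sensitive=True),
    "guest": FlowPolicy(decrypt=False, decrypt_zone="none",
                        metadata_only=True, mask_sensitive=False),
}

CARD_RE = re.compile(r"\b\d{13,16}\b")   # naive card-number pattern


def apply_policy(traffic_class: str, payload: str) -> str:
    """Return the payload in the form the policy allows to be forwarded."""
    policy = POLICIES[traffic_class]
    if policy.metadata_only:
        # Forward only derived metadata instead of the full content.
        return f"metadata: length={len(payload)}"
    if policy.mask_sensitive:
        payload = CARD_RE.sub("****", payload)
    return payload
```

In this framing, questions such as "which traffic can be decrypted, and where" become explicit, auditable fields rather than assumptions buried in individual tools.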
Taken together, a deployment strategy based on these principles can open the door to an entirely new level of security, network management, and troubleshooting. With a newfound ability to be agile and stay ahead rather than lag behind, organizations can vastly improve their security posture, becoming proactive rather than reacting to threats long after they have become reality.