Here on the VMware NSX team, we've been thinking a lot about the need for a new data center architecture to address continued security breaches. We believe that organizations must move away from security that's perimeter-centric, hardware-centric, and inflexible, and address the fallacy that simply piling on more security products somehow equates to better security. Instead, we must bring security inside the data center and make it available to every workload, not just the critical or regulated systems.
We call this capability micro-segmentation: fine-grained security, with enforcement distributed to every hypervisor in the data center. Our customers have been talking about it too, and it's no wonder: more than half of our VMware NSX sales have been driven, in whole or in part, by the ability to do micro-segmentation.
After less than two years in the market, and after listening to customers' feedback, a clear pattern has surfaced pointing to how micro-segmentation is enhancing data center security. We've identified three essential requirements of micro-segmentation that ultimately translate to a more secure data center architecture: persistence, ubiquity, and extensibility.
Persistence: Security must be consistent in the face of constant change
Security administrators need assurance that when they provision security for a workload, enforcement of that security persists despite changes in the environment. This is essential, as data center topologies are constantly changing: Networks are re-numbered, server pools are expanded, workloads are moved, and so on. The one constant in the face of all this change is the workload itself, along with its need for security.
But in a changing environment, the security policy configured when the workload was first deployed is likely no longer enforceable, especially if the definition of this policy relied on loose associations with the workload like IP address, port, and protocol. The difficulty of maintaining this persistent security is exacerbated by workloads that move from one data center to another, or even to the hybrid cloud (think live migration or disaster recovery).
Micro-segmentation gives administrators more useful ways to describe the workload. Instead of relying merely on IP addresses, administrators can describe the inherent characteristics of the workload and tie that information back to the security policy. It can answer questions like: What type of workload is this (web, app, or database)? What will it be used for (development, staging, or production)? What kinds of data will it handle (low-sensitivity, financial, or personally identifiable information)? What's more, micro-segmentation even allows administrators to combine these characteristics to define inherited policy attributes. For example, a workload handling financial data gets a certain level of security, but a production workload handling financial data gets an even higher level of security.
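To make this concrete, here's a minimal sketch in Python (not the NSX API; the class names, tags, and security levels are purely illustrative) of policies that match on workload characteristics rather than IP addresses, with combined characteristics inheriting the strictest matching policy:

```python
# Minimal sketch, not the NSX API: security policy keyed to workload
# characteristics (tags) instead of IP addresses. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    tags: dict = field(default_factory=dict)   # e.g. {"env": "production"}

@dataclass
class Policy:
    name: str
    match: dict          # tags a workload must carry for this policy to apply
    security_level: int  # higher = stricter controls

    def applies_to(self, w: Workload) -> bool:
        # Match only when every required characteristic is present.
        return all(w.tags.get(k) == v for k, v in self.match.items())

def effective_policy(w: Workload, policies: list) -> Policy:
    # Combining characteristics: of all matching policies, the strictest
    # wins, so "production + financial" inherits the higher level.
    return max((p for p in policies if p.applies_to(w)),
               key=lambda p: p.security_level)

policies = [
    Policy("financial-data", {"data": "financial"}, security_level=2),
    Policy("prod-financial", {"data": "financial", "env": "production"},
           security_level=3),
]
vm = Workload("pay-db-01", {"type": "database", "env": "production",
                            "data": "financial"})
print(effective_policy(vm, policies).name)   # -> prod-financial
```

Because the tags travel with the workload rather than with its network address, the effective policy survives re-numbering, migration, and pool expansion.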
Ubiquity: Security must be available everywhere
Traditional data center architectures prioritize security for important workloads, too often at the cost of neglecting lower-priority systems. Traditional network security is expensive to deploy and manage, and that cost forces data center administrators to ration security. Attackers take advantage of this, targeting lightly protected, low-priority systems as their infiltration point into a data center.
To provide an adequate level of defense, security administrators need to depend on strong security being available to every system in the data center. Micro-segmentation makes this possible by embedding security functions into the data center infrastructure itself. By taking advantage of this widespread compute infrastructure, administrators can rely on the availability of security functions for the broadest spectrum of workloads in the data center.
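As a rough illustration of that idea (again a hypothetical sketch, not real NSX components), imagine a controller pushing one rule set to an enforcement point on every hypervisor, so coverage tracks the infrastructure itself instead of being rationed across a few appliances:

```python
# Hypothetical sketch of ubiquitous enforcement: one controller, one rule
# set, an enforcement point on every hypervisor. Names are illustrative.

class Hypervisor:
    def __init__(self, name: str):
        self.name = name
        self.rules = []        # enforced locally, at each workload's vNIC

    def apply(self, rules):
        self.rules = list(rules)

class Controller:
    def __init__(self):
        self.hosts = []
        self.rules = []

    def publish(self, rules):
        # Every host gets the same rule set; no per-appliance rationing.
        self.rules = list(rules)
        for host in self.hosts:
            host.apply(self.rules)

    def register(self, host: Hypervisor):
        # A newly added host is protected the moment it joins, so
        # expanding a server pool never opens an unguarded corner.
        self.hosts.append(host)
        host.apply(self.rules)

ctl = Controller()
ctl.publish(["allow web -> app:8443", "deny any -> db"])
for i in range(3):
    ctl.register(Hypervisor(f"esx-{i:02d}"))
assert all(h.rules == ctl.rules for h in ctl.hosts)
```

The point of the sketch: every host, including one added later, enforces the full rule set locally, leaving no low-priority system for attackers to use as a foothold.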
Extensibility: Security must adapt to new situations
Aside from persistence and ubiquity, security administrators also rely on micro-segmentation to adapt to new and unfolding situations. In the same way that data center topologies are constantly changing, so too are the threat topologies inside data centers. New threats or vulnerabilities are exposed, old ones become inconsequential, and user behavior is the unpredictable variable that constantly surprises security administrators.
In the face of emerging security scenarios, micro-segmentation enables administrators to extend their capabilities by integrating additional security functions into their defense portfolio. For instance, administrators might begin with stateful firewalling distributed throughout the data center, then add next-gen firewall and IPS for deeper traffic visibility, or agentless anti-malware for better server security. But beyond merely adding more security functions, administrators need these functions to cooperate, providing more effective security than if they were deployed in silos. Micro-segmentation answers this need by enabling the sharing of intelligence between security functions, making it possible for the security infrastructure to act in concert and tailor responses to unique situations.
As an example, on detecting malware, an anti-virus system coordinates with the network to mirror the affected workload's traffic to an IPS, which in turn scans for anomalous traffic. The extensibility of micro-segmentation enables this dynamic capability. Without it, the security administrator would have to pre-configure a different static chain of services for each possible security scenario, which would require anticipating every scenario at initial deployment (an impossibility, of course!).
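Here's a toy sketch of that scenario (hypothetical names throughout; a real integration would go through a vendor's service-insertion APIs): an anti-malware verdict is shared as a tag, and the policy layer reacts by mirroring the tagged workload's traffic to the IPS, with no pre-built static chain:

```python
# Toy sketch of intelligence sharing: an anti-malware verdict is published
# as a tag, and the policy layer reacts by mirroring that workload's
# traffic to the IPS. Tag and function names are hypothetical.

class SecurityFabric:
    def __init__(self):
        self.tags = {}            # workload name -> set of shared tags
        self.mirrored_to_ips = set()

    def share(self, workload: str, tag: str):
        # Any security function can publish what it learned...
        self.tags.setdefault(workload, set()).add(tag)
        self._react(workload)

    def _react(self, workload: str):
        # ...and the fabric adjusts the response dynamically, with no
        # pre-configured static service chain for this scenario.
        if "malware-detected" in self.tags[workload]:
            self.mirrored_to_ips.add(workload)

def antimalware_scan(fabric: SecurityFabric, workload: str, infected: bool):
    if infected:
        fabric.share(workload, "malware-detected")

fabric = SecurityFabric()
antimalware_scan(fabric, "web-07", infected=True)
print(fabric.mirrored_to_ips)     # -> {'web-07'}
```

The chain is assembled in response to the detection itself rather than enumerated up-front.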
What's at stake?
Why is this relevant, and what's at stake? On the one hand, we have the status-quo data center security architecture. The tools available with this architecture are traditional physical appliances, virtual appliances (running as guest VMs on hypervisors), and agent-based security. Presumably, every high-profile breach we've heard about recently has had a mix of these tools deployed, and yet the breaches still occurred. So what happened? A lack of persistence and ubiquity made the traditional security systems incapable of securing all server-to-server traffic in the data center. A lack of extensibility meant that the security in place couldn't adapt to changing threat conditions. Ultimately, the mix of deployed traditional security systems was insufficient to prevent the spread of attacks from server to server within the data center.
On the other hand, we have micro-segmentation, which starts with the premise that change is inevitable and provides the capabilities for persistent, ubiquitous, extensible security regardless of circumstance. No one can guarantee that micro-segmentation would have prevented every recent breach, but we can argue that the obstacles to deploying fine-grained security in the data center go away with micro-segmentation.
What's at stake is the security of today's data centers, as well as security administrators' ability to defend against breaches. Micro-segmentation is an intrinsic capability of a better architecture, not merely a feature of yet another security product added to the data center.