In the modern era of hyperscalers, edge computing, and everything-as-a-service, many organizations are increasingly turning to the public cloud to outsource their infrastructure and critical applications. In doing so, they may realize various benefits, including greater agility to deploy new customer-facing applications, flexibility to scale based on seasonal demands, and access to the latest AI/ML services. Likewise, co-location facilities are increasingly being used to host applications that do not require the sophisticated and expensive environments offered by public cloud providers.
The choice to redeploy services often follows strategic digital transformation initiatives. However, when enterprises shift data and applications to third-party hosting environments, they can introduce visibility blind spots that make troubleshooting more difficult. For example, cloud environments are, by design, highly dynamic, and the health of applications can be adversely affected by subtle changes in their environment, especially changes to the connected applications and services on which they depend.
A 2022 IDC global survey indicates that a top barrier for IT executives in building a resilient digital infrastructure is “insufficient analytics and automation.” Indeed, external hosting environments inherently offer less control. Cloud providers may offer performance guarantees and select metrics, but organizations still likely do not know precisely where their data is stored or which network paths it must take to reach its destination. This can introduce significant challenges for latency-sensitive applications, particularly when users are located in geographically distant areas.
To overcome this barrier, IT teams must look for ways to extend their visibility and utilize end-to-end monitoring that enables accurate identification, triage, and isolation of enterprise application problems, along with the collection of supporting evidence. This article will delve into how enterprises can address common blind spots introduced during cloud and co-lo migrations, regain visibility into mission-critical applications, and troubleshoot potential performance problems faster.
Gain insight into application performance before a migration
Any good migration starts with a baseline determination of the performance of the relevant on-premises applications. It is also crucial to understand all of the dependencies of a given application or service, including databases, servers, enablers, APIs, and so on, which may reside in different domains. That way, nothing is inadvertently left behind or forgotten, and if all else fails, IT teams can revert applications or services to their original state.
However, this process is no small task for large organizations with complex IT infrastructures built over several generations of hardware and software, not to mention numerous acquisitions and expansions. Keep in mind, too, that many organizations have at least one legacy application for which failure or performance degradation is not an option, lest they lose business or suffer reputational damage. For these reasons, continuous, end-to-end network monitoring and dependency mapping should be in place before any migration of services.
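As a simple illustration of what baselining can look like, the sketch below periodically probes a few application endpoints and records their response times to a CSV file for later comparison. It is only a minimal example: the endpoint URLs and output file are hypothetical placeholders, it assumes the third-party requests library is available, and a production baseline would come from a dedicated monitoring platform rather than a script.

```python
# Minimal pre-migration baseline sketch: probe a few application
# endpoints and record response times for later comparison.
# The URLs below are hypothetical placeholders.
import csv
import time
from datetime import datetime, timezone

import requests

ENDPOINTS = [
    "https://erp.internal.example.com/health",
    "https://orders-api.internal.example.com/v1/status",
]

def probe(url: str, timeout: float = 5.0) -> dict:
    """Issue one HTTP GET and capture latency and status."""
    start = time.perf_counter()
    try:
        status = requests.get(url, timeout=timeout).status_code
    except requests.RequestException as exc:
        status = f"error:{type(exc).__name__}"
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "url": url,
        "status": status,
        "latency_ms": round(elapsed_ms, 1),
    }

if __name__ == "__main__":
    with open("baseline.csv", "a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["timestamp", "url", "status", "latency_ms"]
        )
        if f.tell() == 0:
            writer.writeheader()
        for url in ENDPOINTS:
            writer.writerow(probe(url))
```

Run on a schedule (for example, via cron) for a few weeks before the migration, a record like this gives teams a concrete reference point for "normal" performance and a way to verify that nothing regressed after cutover.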
Monitor and analyze traffic to and from workloads with packet-level visibility
Many organizations already implement packet monitoring within all or part of their on-premises environments. But when resources move to co-los, certain traffic types become less readily visible, such as employee connections to enterprise applications over an Internet tunnel that terminates at the co-lo, or traffic between internal business applications and SaaS APIs. Similarly, depending on the specific public cloud provider, organizations may lose visibility into traffic to and from individual workloads.
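To illustrate the kind of flow-level visibility at stake, the following sketch uses Scapy (an assumption on my part, not a tool named here) to summarize HTTPS traffic seen on a monitoring interface, such as a tap or SPAN port at the co-lo edge. The interface name is a hypothetical placeholder, and capturing packets typically requires elevated privileges.

```python
# Rough sketch of flow-level visibility at a tap/SPAN port,
# using Scapy (assumed available; capture usually requires root).
from collections import Counter

from scapy.all import IP, TCP, sniff

flow_bytes = Counter()

def tally(pkt):
    """Accumulate byte counts per (src, dst) pair for TCP/443 traffic."""
    if IP in pkt and TCP in pkt:
        flow_bytes[(pkt[IP].src, pkt[IP].dst)] += len(pkt)

# "mon0" is a hypothetical capture interface (e.g., a SPAN port).
sniff(iface="mon0", filter="tcp port 443", prn=tally, store=False, timeout=60)

# Print the top talkers observed during the capture window.
for (src, dst), nbytes in flow_bytes.most_common(10):
    print(f"{src} -> {dst}: {nbytes} bytes")
```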
That’s why setting up end-to-end monitoring across these environments is so important. Instrumentation points may vary, including virtual instrumentation where needed, but a consistent measurement approach across hosting environments makes it easier to determine whether a performance issue lies with the application or the network, and where that issue is located.
Notably, the instrumentation strategy for public cloud migrations should often follow an “outside-in” approach, whereby locations carrying aggregate traffic are instrumented first. If connections from a co-lo are also in use, it is worth considering what visibility can be gained by instrumenting those connections in the co-lo. Within the public cloud itself, instrumentation should likewise start with aggregation locations, such as inspection zones that carry traffic flowing to and from the Internet and between virtual private clouds (VPCs) in the public cloud’s availability zones. Deeper visibility can be obtained at locations such as application load balancers. Finally, cloud-native port-mirroring capabilities can monitor East-West traffic between individual workloads.
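As one concrete example of such port mirroring, AWS offers VPC Traffic Mirroring. The boto3 sketch below mirrors a workload's network interface to a monitoring appliance; it is only a sketch under the assumption of an AWS deployment, and all resource IDs and the region are hypothetical placeholders. Other clouds offer comparable mechanisms.

```python
# Sketch: mirror a workload ENI's traffic to a collector using
# AWS VPC Traffic Mirroring via boto3 (assumed provider/tooling).
# All resource IDs and the region below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Target: the ENI of the packet-capture/monitoring appliance.
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0aaaaaaaaaaaaaaaa",
    Description="Monitoring appliance",
)["TrafficMirrorTarget"]

# Filter: mirror all inbound and outbound traffic (narrow as needed).
filt = ec2.create_traffic_mirror_filter(
    Description="Mirror all traffic"
)["TrafficMirrorFilter"]
for direction in ("ingress", "egress"):
    ec2.create_traffic_mirror_filter_rule(
        TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
        TrafficDirection=direction,
        RuleNumber=100,
        RuleAction="accept",
        SourceCidrBlock="0.0.0.0/0",
        DestinationCidrBlock="0.0.0.0/0",
    )

# Session: send a copy of the workload ENI's traffic to the target.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0bbbbbbbbbbbbbbbb",  # source workload ENI
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filt["TrafficMirrorFilterId"],
    SessionNumber=1,
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```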
Patching the visibility blind spots
Fifty percent of enterprises have already deployed workloads in the public cloud, with 7% intending to do so this year. As businesses increasingly migrate mission-critical applications and services to co-lo facilities and the public cloud, visibility gaps across their infrastructure can widen if not managed properly.
The visibility blind spots IT teams face can create significant hurdles to streamlining operations. But by identifying and closing these visibility gaps with a comprehensive, real-time performance monitoring solution, businesses can pinpoint performance issues and resolve them faster, no matter where their resources and applications are hosted.
Jason Chaffee is the Senior Director of Product & Solutions Marketing at NETSCOUT.