The cloud, whether a private environment or a public provider, has become standard operating practice for companies because it brings the agility that digital transformation requires. It is essential to understand that the cloud operating model can only deliver that agility if every infrastructure component has been modernized. Cloud architects I've talked to echo the same sentiment: "You're only as agile as your least agile component," and in many cases that component is the legacy hardware load balancer.
Load balancers are a core component of the infrastructure, providing a range of critical services such as SSL offload, caching, application monitoring, application security, and, of course, local and global server load balancing. Legacy load balancers are tried-and-true infrastructure and have historically met the needs of operations teams for traditional applications. However, they do not meet the needs of the cloud operating model or of modern container-based applications.
Some legacy load balancer vendors have virtualized their hardware appliance products so they can run in the cloud, but the underlying architecture has not changed since the 1990s. While this might seem workable, it is essentially a "lift and shift" of code, so the virtual, cloud-resident version has the same limitations as the physical appliance. This approach also forces customers to deploy siloed, disparate products per environment, which complicates management and scaling and makes consistent policies almost impossible. What's required is a modernized, software-defined load balancer built for cloud operating models, such as VMware NSX Advanced Load Balancer (NSX ALB).
Modernized software-defined load balancers have the following benefits over their legacy counterparts.
Enabling faster application deployment with a load balancer
When a new application is deployed, the operations team needs to spin up infrastructure such as servers and storage and open tickets to provision virtual IPs (VIPs) on the load balancers. Legacy load balancers, even virtual ones, can take days or weeks to provision VIPs. A recent ZK Research survey found that 77% of respondents said traditional ADCs, aka load balancers, created a minor or significant delay in application rollouts. Software-defined load balancers can typically be spun up in a matter of minutes and are built on a scale-out architecture.
Also, one of the benefits the cloud operating model brings is the ability to auto-scale capacity on general-purpose hardware. Legacy ADCs typically have tightly integrated hardware and software and often require a forklift upgrade to improve performance. Another issue that can impact application performance is turning on advanced capabilities such as web application firewalls, which can cause an application to run sub-optimally until the hardware is refreshed. In contrast, NSX ALB can automatically spin up new Service Engine instances to handle additional application and traffic needs while keeping policies consistent, thanks to its centralized control and management plane.
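As a rough illustration of how that kind of scale-out decision can work, consider the minimal sketch below. The thresholds, metric names, and provision_engine helper are invented for this example and are not NSX ALB's actual logic; the point is simply that a central controller can watch fleet utilization and add capacity without a hardware refresh.

```python
# Illustrative only: a simplified scale-out decision loop for load balancer
# service engines. Thresholds, metric names, and provision_engine() are
# hypothetical, not the actual NSX ALB implementation.

def should_scale_out(engines, cps_threshold=0.75):
    """Return True if average connections-per-second utilization across
    the current service engines exceeds the threshold."""
    if not engines:
        return True
    avg_util = sum(e["cps"] / e["cps_capacity"] for e in engines) / len(engines)
    return avg_util > cps_threshold

def reconcile(engines, provision_engine):
    """Spin up an additional engine when the fleet is running hot."""
    if should_scale_out(engines):
        # New capacity inherits policy from the central control plane.
        engines.append(provision_engine())
    return engines

# Example: two engines near capacity trigger a third.
fleet = [
    {"cps": 9000, "cps_capacity": 10000},
    {"cps": 8500, "cps_capacity": 10000},
]
fleet = reconcile(fleet, provision_engine=lambda: {"cps": 0, "cps_capacity": 10000})
print(len(fleet))  # 3
```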
Modern load balancers like VMware NSX ALB are designed for hyper-automation, and many advanced features, such as least-connection load balancing, HTTP redirects, and content switching, are turned on by default or are easy to enable through an intuitive UI. Legacy products generally require complex and tedious Tcl scripts to be written for even basic features on every virtual IP, which leads to human error and lengthy change-management times.
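To make one of those features concrete, least-connection load balancing simply sends each new request to the pool member with the fewest active connections. Here is a minimal sketch with invented pool data, shown only to illustrate the algorithm the feature name refers to:

```python
# Least-connection selection: route the next request to the server
# currently handling the fewest active connections. Pool data is invented.

def pick_least_connections(pool):
    """Return the pool member with the fewest active connections."""
    return min(pool, key=lambda server: server["active_connections"])

pool = [
    {"name": "web-1", "active_connections": 42},
    {"name": "web-2", "active_connections": 17},
    {"name": "web-3", "active_connections": 29},
]
target = pick_least_connections(pool)
print(target["name"])  # web-2
```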
VMware NSX ALB integrates easily with other infrastructure: it gets addresses from IP address management (IPAM) systems, auto-registers with DNS servers, automatically acquires SSL certificates, and dynamically updates firewalls. With legacy load balancers, each of these functions is handled by a separate team, and application owners typically open a support ticket for each one. Some of these tasks can be done quickly, but the process introduces a significant amount of human delay.
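The value of chaining those steps together is easier to see in code. The sketch below is a hedged illustration only: the classes and method names are invented stand-ins, not NSX ALB's or any vendor's actual API, but they show how one automated workflow can replace four or five separate tickets.

```python
# A hedged sketch of an automated VIP-provisioning workflow. The client
# classes below are simple in-memory stand-ins; every class and method name
# is hypothetical and does not represent a real product API.

class Ipam:
    def __init__(self):
        self.next_host = 10
    def allocate(self, network):
        ip = f"10.0.0.{self.next_host}"
        self.next_host += 1
        return ip

class Dns:
    def __init__(self):
        self.records = {}
    def register(self, fqdn, ip):
        self.records[fqdn] = ip

class Pki:
    def issue_certificate(self, fqdn):
        return f"cert-for-{fqdn}"

class Firewall:
    def __init__(self):
        self.rules = []
    def allow(self, dest, port):
        self.rules.append((dest, port))

class LoadBalancer:
    def create_virtual_service(self, name, address, certificate):
        return {"name": name, "vip": address, "cert": certificate}

def provision_vip(app_name, ipam, dns, pki, firewall, lb):
    ip = ipam.allocate(network="vip-pool")      # 1. get a free address from IPAM
    fqdn = f"{app_name}.example.internal"       # hypothetical internal domain
    dns.register(fqdn, ip)                      # 2. auto-register the DNS record
    cert = pki.issue_certificate(fqdn)          # 3. acquire an SSL certificate
    firewall.allow(dest=ip, port=443)           # 4. open the path on the firewall
    return lb.create_virtual_service(           # 5. finally create the VIP itself
        name=app_name, address=ip, certificate=cert
    )

print(provision_vip("billing-app", Ipam(), Dns(), Pki(), Firewall(), LoadBalancer()))
```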
Operational consistency for hybrid multi-cloud
In every environment, application architectures and deployment models change, and problems arise. The challenge for IT is how to maintain operational consistency across a hybrid multi-cloud. One of the most significant benefits of a software-defined load balancer is the separation of control functions from the data plane. This creates a single control plane that can span multiple data centers and clouds, which delivers end-to-end visibility, consistent policies, and centralized control. If a problem arises or a change is needed, IT pros can quickly see where it is and make the necessary changes centrally, which also helps avoid configuration drift. Legacy load balancers are managed appliance by appliance because they are vertically integrated and monolithic in design, which can increase operational overhead by at least 10x.
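Conceptually, the control/data-plane split works like the sketch below: a single controller holds the desired policy and reconciles every site against it, so no site drifts. The policy shape and site names are invented purely for illustration.

```python
# A minimal sketch of a centralized control plane pushing one consistent
# policy to data-plane engines in multiple sites. All names and the policy
# contents are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DataPlaneEngine:
    site: str
    applied_policy: dict = field(default_factory=dict)

    def apply(self, policy):
        self.applied_policy = dict(policy)

@dataclass
class Controller:
    desired_policy: dict
    engines: list

    def reconcile(self):
        """Push the single source of truth to every site that has drifted."""
        for engine in self.engines:
            if engine.applied_policy != self.desired_policy:
                engine.apply(self.desired_policy)

sites = [DataPlaneEngine("dc-east"), DataPlaneEngine("aws-us-west"), DataPlaneEngine("azure-eu")]
controller = Controller(
    desired_policy={"tls_min_version": "1.2", "waf": "enabled"},
    engines=sites,
)
controller.reconcile()
assert all(e.applied_policy == controller.desired_policy for e in sites)
```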
Load balancer self-service portal empowers application teams
The self-service capabilities built into modernized load balancers like VMware NSX ALB can reduce new application rollouts or change management from days or weeks to minutes. Modern architectures are designed to be multi-tenant, which enables strong, granular role-based access that limits the impact radius. Application teams can self-serve basic tasks such as creating a new VIP, adding or removing a server from a server pool, or auto-scaling, without going through the teams responsible for the load balancers.
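A hedged sketch of what that tenant-scoped self-service can look like follows. The portal class, tenant names, and authorization check are hypothetical and deliberately simplified; the point is that an application team can change its own pool but nothing outside its tenant.

```python
# Illustrative only: tenant-scoped self-service with a simple role check.
# Class, tenant, and pool names are invented for this sketch.

class SelfServicePortal:
    def __init__(self, pools):
        # pools maps (tenant, pool_name) -> list of server addresses
        self.pools = pools

    def add_server(self, user, tenant, pool_name, server):
        self._authorize(user, tenant)
        self.pools[(tenant, pool_name)].append(server)

    def remove_server(self, user, tenant, pool_name, server):
        self._authorize(user, tenant)
        self.pools[(tenant, pool_name)].remove(server)

    @staticmethod
    def _authorize(user, tenant):
        # Role-based check: users may only touch their own tenant.
        if tenant not in user["tenants"]:
            raise PermissionError(f"{user['name']} cannot modify tenant {tenant}")

portal = SelfServicePortal({("payments", "web-pool"): ["10.1.0.11", "10.1.0.12"]})
app_team = {"name": "payments-team", "tenants": {"payments"}}

portal.add_server(app_team, "payments", "web-pool", "10.1.0.13")   # allowed
try:
    portal.add_server(app_team, "hr", "web-pool", "10.2.0.5")      # blocked by RBAC
except PermissionError as err:
    print(err)
```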
In addition, application teams can monitor application performance on NSX ALB via a rich set of built-in analytics and observability features covering both network and application performance. This allows them to proactively prevent issues and optimize the application experience.
In summary, the basic tenets of the cloud operating model are simplicity, self-service, agility, and scalability. Those tenets only hold if the entire technology stack supports them. Any business looking to embrace the cloud must also ensure its network infrastructure, including its load balancers, has been fully modernized.
Zeus Kerravala is the founder and principal analyst with ZK Research.