Application Delivery Blog

Controller vs. Instance Manager - Why You Should Look Under the Hood of Your Load Balancer

Chris Heggem
Posted on Mar 28, 2019 9:38:22 AM

If you work with load balancers, you’ve probably heard the term “controller” lately. Enterprises have had to significantly increase their load balancer footprint to keep up with the growing number of applications, and the controller is how you automate the management and lifecycle of those load balancers across data centers and clouds. But here’s the problem: load balancing vendors like F5 Networks, Citrix, and NGINX do not have a controller. They have an instance manager. Let’s explore the difference.

The architecture behind these hardware and software load balancing appliances was developed in the late 90s and early 2000s — before cloud and before containers. Each appliance has a combined control plane and data plane. The control plane is where you can regulate each individual appliance’s traffic management, security, and policy functions. And the data plane simply carries the traffic.

[Figure: Load balancer architecture with a combined control plane and data plane per appliance]

Most software load balancers share this same architecture, with a control plane and data plane unique to each appliance.

[Figure: Hardware appliance vs. software appliance load balancer architecture]

Say, for example, your organization has 50 load balancer appliances (hardware or virtual, it doesn’t matter) across your data centers and clouds. That means 50 control planes that need to be individually managed. As you can imagine, this is a bit of a headache. So load balancing vendors bolted on an instance manager to help manage their load balancers: F5 has BIG-IQ, Citrix has ADM, and NGINX has Controller 2.0.

[Figure: Load balancer instance manager architecture]

These instance managers essentially SSH or connect into each appliance on your behalf and push traffic management, security, and policy configuration to it. The control plane still resides on each individual appliance. And you (the administrator) are still the brains of the operation. Not much has really changed. You still go through the same mental gymnastics, asking yourself questions like, “Do I have a load balancer deployed in this environment? Does this load balancer have capacity? Is this load balancer configured to work with the instance manager?” If you don’t already have a load balancer deployed (or need to deploy a new one), the instance manager can’t help you. You have to manually deploy the load balancer into the environment, identify the host to install it on, and then configure it through the instance manager.
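To make that concrete, here is a rough Python sketch of the instance-manager workflow. The appliance inventory and the push_config helper are hypothetical, not any vendor’s actual API; the point is simply that configuration is pushed to each appliance’s own control plane, one box at a time, and the appliances have to exist before the instance manager can do anything with them.

```python
# Illustrative only: the inventory and push_config helper are hypothetical,
# not any vendor's actual API. Each appliance keeps its own control plane,
# so configuration has to be applied appliance by appliance.

APPLIANCES = [
    {"host": "lb-dc1-01.example.com", "site": "dc1"},
    {"host": "lb-aws-01.example.com", "site": "aws-us-east"},
    # ...48 more appliances, each with its own control plane to manage
]

def push_config(host: str, config: dict) -> None:
    """Stand-in for an SSH session or REST call into one appliance."""
    print(f"pushing {config['virtual_service']} to {host}")

def configure_virtual_service(config: dict) -> None:
    # The administrator is still the brains: deciding which appliances exist,
    # which have capacity, and which are wired to the instance manager.
    for appliance in APPLIANCES:
        push_config(appliance["host"], config)

configure_virtual_service({
    "virtual_service": "web-vip",
    "port": 443,
    "pool": ["10.0.0.11", "10.0.0.12"],
})
```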

If you are familiar with the “pets vs. cattle” analogy, appliance load balancers are pets. They need to be hand-fed and cared for regardless of whether you have the traffic to justify their existence. The instance manager simply helps manage the care and hand-feeding of each appliance from a centralized dashboard. And it is still a lot of work.

Avi Networks was born in the cloud and container age, though our advancements can easily be applied to traditional applications in virtual and bare-metal environments. Avi is different because it separates the control plane entirely from the load balancer. Our load balancer, called a Service Engine, resides in the data plane and takes all of its commands from a centralized brain for application services across all your environments. That brain is the Avi Controller. We call it a Controller because it works much like an SDN controller or a Kubernetes controller.

Unlike instance managers, which can only manage pre-existing resources, the Avi Controller can spin up new Service Engines and leverage machine learning to react predictively to changes in application and network context. Remember the mental gymnastics I referred to earlier? All of that is gone. Don’t have a load balancer in the environment? The Avi Controller will deploy one for you. You don’t even have to tell it where or on which host. Does the load balancer have capacity? Doesn’t matter. Again, the Avi Controller will scale up and down to meet your needs.
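For readers who like to see the model in code, here is a minimal, hedged sketch of the controller pattern described above. The class and object names (Controller, ServiceEngine, VirtualServiceIntent) are invented for illustration and are not Avi’s actual API; the sketch just shows the shift from configuring appliances to declaring intent and letting a controller handle placement and scaling.

```python
# Illustrative controller-style sketch with invented object names (not Avi's
# API): the operator declares intent, the controller decides placement.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceEngine:
    name: str
    cloud: str
    capacity_rps: int                      # requests/sec still available
    services: List[str] = field(default_factory=list)

@dataclass
class VirtualServiceIntent:
    name: str
    cloud: str                             # e.g. "dc1" or "aws-us-east"
    expected_rps: int
    pool: List[str]

class Controller:
    """Central brain: owns placement, deployment, and scaling decisions."""

    def __init__(self) -> None:
        self.engines: List[ServiceEngine] = []

    def apply(self, intent: VirtualServiceIntent) -> ServiceEngine:
        # Reuse an engine in the right environment if it has spare capacity...
        for engine in self.engines:
            if engine.cloud == intent.cloud and engine.capacity_rps >= intent.expected_rps:
                engine.services.append(intent.name)
                engine.capacity_rps -= intent.expected_rps
                return engine
        # ...otherwise deploy a new Service Engine; the operator never picks a host.
        engine = ServiceEngine(
            name=f"se-{intent.cloud}-{len(self.engines) + 1}",
            cloud=intent.cloud,
            capacity_rps=10_000,
        )
        engine.services.append(intent.name)
        engine.capacity_rps -= intent.expected_rps
        self.engines.append(engine)
        return engine

controller = Controller()
placed = controller.apply(
    VirtualServiceIntent("web-vip", "aws-us-east", 2_000, ["10.0.0.11", "10.0.0.12"])
)
print(placed.name, placed.services)
```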

[Figure: Avi Networks Service Engine and Controller load balancing architecture]

In the pets vs. cattle analogy, Avi’s Service Engines are cattle. You won’t give much thought to managing them. That’s the Avi Controller’s job. Avi is an active load balancing fabric managed by our Controller. There are no active-standby pairs or over-provisioned load balancers waiting for use. And if there is a failure, the Avi Controller self-heals to maintain high availability, ensuring that applications get the services they need based on intent, not on available instances. The Avi Controller (not the fact that it is software) is what makes us unique and ruthlessly efficient.
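Because the Controller works from intent rather than from a fixed inventory of appliances, recovery can follow the same reconcile pattern that Kubernetes controllers popularized: continuously compare the desired state with the observed state and close the gap. The sketch below is an illustration of that loop under those assumptions, with hypothetical helpers, not Avi’s implementation.

```python
# Illustrative reconcile loop, in the spirit of a Kubernetes-style controller:
# desired state comes from intent, observed state from what is actually
# running, and the controller closes the gap.

def desired_engine_count(intents, rps_per_engine=10_000):
    """How many Service Engines the declared virtual services need (simplified)."""
    total_rps = sum(i["expected_rps"] for i in intents)
    return max(1, -(-total_rps // rps_per_engine))   # ceiling division

def reconcile(intents, fleet):
    # Drop engines that have failed; the intent, not the old instance, is what matters.
    fleet = [e for e in fleet if e["healthy"]]
    desired = desired_engine_count(intents)
    if len(fleet) < desired:
        # Deploy replacements on demand; no pre-provisioned active-standby pair.
        fleet += [{"name": f"se-replacement-{n}", "healthy": True}
                  for n in range(desired - len(fleet))]
    elif len(fleet) > desired:
        # Scale down when real-time traffic no longer justifies the capacity.
        fleet = fleet[:desired]
    return fleet

# One pass of the loop; a real controller would run this continuously.
intents = [{"name": "web-vip", "expected_rps": 12_000}]
fleet = [{"name": "se-1", "healthy": True}, {"name": "se-2", "healthy": False}]
print(reconcile(intents, fleet))
```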

The load balancing industry has been dormant for decades, but advancements in application architecture and multi-cloud environments are forcing enterprises to re-evaluate their load balancing providers. The fact that many vendors now claim to have controllers is a recognition that customers need an architectural model that supports modern applications. But many of these products don't truly measure up. If your vendor claims to have a controller, ask them these simple questions:

  1. Can your controller automatically place a virtual service on any load balancer and plumb the connections to the pool servers in any data center or cloud?
  2. Can the controller automatically heal/recover load balancers from a catastrophic failure without the need for an active-standby pair of appliances?
  3. Can it automatically scale up or scale down load balancing capacity based on real-time traffic patterns?

Without this functionality, your load balancers won’t be able to keep pace with your business.
  