Scaling Load Balancers | Myth Busters: Scaling Can Take Weeks

Grant Swanson
Posted on Feb 12, 2016 9:20:53 AM

A common challenge most enterprises face today is scaling out applications. From many vendors of application services we hear buzzwords like elasticity, autoscale, and cloud bursting, but the fundamental problem remains: knowing the traffic or transaction threshold beyond which applications and their load balancers need to scale. You may wonder how that can be, considering all of the advances in technology over the past few years. The reason, it turns out, is that the load-balancing appliances promising those buzzwords are not actually equipped to deliver them. They are built on proprietary hardware (not portable across environments), and although they sit in the data path, they lack the transaction analytics needed to make intelligent decisions about application performance. These legacy Application Delivery Controllers (ADCs) and load-balancing appliances are static and slow down operations.


Setting up hardware load balancers is complex and time consuming. Configuration requires fairly deep knowledge of TCP/IP networking principles, as well as an understanding of the proprietary concepts of the load-balancing hardware itself. It is no wonder that many network administrators are reluctant to change their load-balancing solution: they have painstakingly become familiar with the one they have. With these solutions, you are caught in the daily grind of security and administration issues, just as with any other box on your network. The time from a request submitted by the application development team to having the necessary infrastructure in place to scale out the application can be on the order of three to four weeks.

Load-balancing hardware also creates a single point of failure. To alleviate this, most load-balancing hardware manufacturers recommend that you purchase two boxes and set them up as a high-availability (HA) pair, so that the second can seamlessly take over if the first fails. Overprovisioning is also the norm, because dynamic on-demand scaling is simply not possible; it is the only way to protect yourself against unpredictable spikes or bursts in traffic. This leads to overspending on a legacy solution that is inefficient, hard to manage, and incredibly expensive. We have seen situations in the field where people unwilling to change from this model risk losing their jobs.


Network architects who are truly building an evolved infrastructure require a software load balancer that can elastically scale on demand with a click of a button. This type of innovation is not only available now, it is rapidly being adopted by many Fortune 500 companies.

We can show you how to provision a load balancer in less than 60 seconds. Select the deployment environment (private or public cloud), configure the VIP, and select the pool members. An intelligent controller then spins up micro-ADCs, places them in the appropriate network, and configures the policies automatically. Watch this two-minute video to see for yourself:
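The three inputs above (deployment environment, VIP, and pool members) map naturally onto a small declarative spec. As a rough sketch only, with hypothetical field names and a helper of our own invention (the actual controller API may differ), assembling that spec could look like this:

```python
import json

def build_vip_request(cloud, vip_address, pool_members, port=80):
    """Assemble a declarative load-balancer spec as a plain dict.

    The operator supplies only the environment, the VIP, and the pool
    members; in the workflow described above, the controller takes it
    from there (placing micro-ADCs and configuring policies).
    """
    return {
        "cloud": cloud,                 # e.g. "private" or a public cloud
        "virtual_service": {
            "vip": vip_address,         # the address clients connect to
            "port": port,
        },
        "pool": {
            # each member is an (ip, port) pair of a backend server
            "members": [{"ip": ip, "port": p} for ip, p in pool_members],
        },
    }

spec = build_vip_request(
    cloud="private",
    vip_address="10.0.0.100",
    pool_members=[("10.0.1.11", 8080), ("10.0.1.12", 8080)],
)
print(json.dumps(spec, indent=2))
```

The point of the sketch is the shape of the request: three operator decisions, everything else automated.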


If you are interested in learning more about Avi Networks, I encourage you to visit our resource center or join us on our weekly demo.


Topics: Load Balancing, autoscale

