When you enter a query into Google Search, thousands of containers distributed across a global network spring into action to return results instantly. YouTube and Gmail work the same way.
Google services are powered by a data fabric — an always-on, fully elastic, fully automated system designed to give you the performance and experience you’ve come to expect from Google.
How many data centers, containers, and application services are used to power Google’s products? Just the right amount. I’m not being glib; that’s the actual answer. The teams at Google don’t think in terms of the number of servers, containers, or load balancers they need, only whether their applications get exactly what they need when they need it. That’s how a fabric works: it is a software-defined system that automates decision making to spin up, spin down, and scale out resources based on context.
With a software-defined fabric architecture, Google doesn’t have to go through the mental gymnastics we’re used to in traditional IT, where you constantly monitor the capacity of appliances and manually react to changes. If search traffic surges, Google’s backend automatically spins up the necessary resources in real time. If there is an outage in a specific data center, Google’s system either self-heals or fails over to another available resource before users even notice.
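The core of that behavior is an ordinary control loop: measure demand, compute the capacity needed to serve it with headroom, and reconcile the running pool toward that number. Here is a minimal sketch of such a loop in Python; the function name, the requests-per-second figures, and the 60% target utilization are all illustrative assumptions, not Google internals.

```python
import math

def desired_instances(current_rps: float, rps_per_instance: float,
                      target_utilization: float = 0.6,
                      min_instances: int = 2) -> int:
    """Size the pool so each instance runs at `target_utilization`
    of its capacity, leaving headroom for sudden surges.
    Hypothetical numbers; this illustrates the control loop only."""
    needed = current_rps / (rps_per_instance * target_utilization)
    # Never drop below a floor, so a failover target always exists.
    return max(min_instances, math.ceil(needed))

# A traffic surge: the loop recomputes on every tick and scales out.
print(desired_instances(1_000, rps_per_instance=500))   # 4
print(desired_instances(10_000, rps_per_instance=500))  # 34
```

A real fabric runs this reconciliation continuously, which is why nobody has to know the answer to "how many instances?" in advance.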
“Fabric” architecture is the answer to the challenges enterprises face with growing demand for IT services. And we’re seeing new platforms and services that leverage fabric architecture. Software-defined data centers and Infrastructure-as-a-Service in the public cloud offer virtualized environments that can automatically scale compute and storage to meet demand. However, a fabric is only as effective as the weakest link in your technology chain.
Google doesn’t use appliance-based application services in its virtualized environments. Each component is modern and software-defined. As Google’s applications scale, so do the underlying infrastructure and application services, like traffic management.
Today, enterprises have infrastructure that can scale with demand, yet traffic management, for the most part, is still delivered by hardware or software appliances. IT teams have to over-provision these load balancers so that enough resources are available at peak demand, or the application will topple over. If Google used appliances, it would suffer outages lasting minutes until additional software appliances were deployed, or days until more hardware appliances were deployed. Appliance-based services put enterprises in a catch-22: overpay and over-provision to have enough resources to meet demand, or risk the application’s availability.
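The cost of that catch-22 is easy to put in numbers. The sketch below compares a static appliance sized for the daily peak against an elastic pool that follows the load; the traffic profile and per-unit capacity are made-up figures for illustration.

```python
# Hypothetical one-day traffic profile in 2-hour buckets (requests/sec),
# and the capacity one load-balancer unit can handle.
hourly_rps = [200, 150, 100, 300, 900, 2_000, 5_000, 4_000,
              1_500, 800, 400, 250]
capacity_per_unit = 1_000

# Appliance model: provision for the peak, and pay for it all day.
static_units = -(-max(hourly_rps) // capacity_per_unit)   # ceiling division
static_unit_hours = static_units * len(hourly_rps)

# Fabric model: run only what each bucket actually needs.
elastic_unit_hours = sum(-(-rps // capacity_per_unit) for rps in hourly_rps)

print(static_unit_hours, elastic_unit_hours)   # 60 21
```

With these example numbers, static provisioning pays for almost three times the capacity-hours of an elastic fabric, and it still fails the moment traffic exceeds the peak it was sized for.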
The load balancers behind Google Cloud and AWS, and the traffic management stacks at Facebook and Netflix, are no longer appliance-based. In fact, hundreds of enterprises are saying goodbye to hardware and software load balancing appliances. And you should too.
Avi Networks was designed as a load balancing fabric. The Avi Controller serves as the brain for all of your load balancing deployments across data centers and clouds, using real-time analytics to spin Service Engines (our load balancers) up and down so you always have just the right amount. We highlighted this in a test a couple of years ago, when we scaled from 0 to 1 million transactions per second in under 10 minutes without breaking a sweat. Avi automatically scaled services with its load balancing fabric.
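An analytics-driven controller can key its decisions off observed application health rather than raw throughput. The rule below is a hypothetical sketch in the spirit of that loop, not Avi's actual algorithm: scale out when tail latency breaches a service-level objective, scale back in when there is comfortable headroom. The function name, the 50 ms SLO, and the thresholds are all assumptions.

```python
def scale_decision(latencies_ms: list[float], engines: int,
                   slo_ms: float = 50.0, min_engines: int = 1) -> int:
    """Return the number of load-balancing engines to run next,
    based on a crude 95th-percentile latency from recent samples.
    Illustrative rule only; thresholds are invented for the example."""
    p95 = sorted(latencies_ms)[int(0.95 * len(latencies_ms))]
    if p95 > slo_ms:                                  # SLO breached: add capacity
        return engines + 1
    if p95 < 0.5 * slo_ms and engines > min_engines:  # lots of headroom: shrink
        return engines - 1
    return engines                                    # within band: hold steady

# One slow tail sample pushes p95 past the SLO, triggering scale-out.
samples = [10.0] * 19 + [80.0]
print(scale_decision(samples, engines=2))   # 3
```

Because the decision runs continuously against live telemetry, the pool tracks demand in both directions, which is what "just the right amount" means in practice.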
Hundreds of the world’s largest enterprises are switching to Avi Networks load balancing fabric. How many load balancers do they use? Just the right amount. Just like Google.