Application Delivery Blog

The Honest Avi Networks and NGINX Comparison

Chris Heggem
Posted on Sep 3, 2019 9:17:31 AM

Avi Networks was acquired by VMware in July 2019, and the Avi Vantage Platform is now known as NSX Advanced Load Balancer (ALB). But that hasn’t stopped people from talking about Avi Networks, especially competitor F5 Networks and its recent open-source acquisition, NGINX. They have been known to spread misinformation when comparing the features and capabilities of Avi Networks and NGINX.
Read More

Topics: load balancer, Application Delivery Controller, software-defined load balancing, software load balancing, Multi-cloud Load Balancing

Three Lessons Application Managers Can Learn from the Electric Scooter Invasion

Chris Heggem
Posted on Apr 26, 2018 10:32:22 AM

San Francisco is in an uproar over electric scooters. Thanks to startups like Bird, LimeBike, and Spin, the motorized two-wheelers invaded San Francisco’s streets seemingly overnight and quickly became the Tech Bus issue of 2018. While commuters and tourists enjoy the environmentally friendly transport, residents bemoan broken toes, near-miss collisions, and abandoned scooters littering the sidewalks. Meanwhile, city legislators are struggling with how to handle the new and high-profile roll-out, which apparently has no formal permits.
Read More

Topics: Application Delivery Controller, Load Balancing, Software Load Balancer, Hardware, WAF

Avi Networks Saves Companies Millions In Application Delivery

Chandra Sekar
Posted on Apr 11, 2018 10:32:24 AM

We know companies that use our intent-based application services are happy they chose Avi Networks. But we wanted to find out exactly how much benefit comes from using Avi Networks versus physical or virtual application delivery controllers.
Read More

Topics: Application Delivery Controller, Software Load Balancer, software-defined load balancing, software load balancing, Hardware, Research

2018 is the Year of #NoHardware

Chris Heggem
Posted on Dec 7, 2017 10:18:24 AM

Hardware used to be the foundation of IT. You couldn’t deploy an application without configuring dozens of servers and hardware appliances. But modern enterprises can’t be held back by walls of knobs any longer. Proprietary physical hardware is expensive, slow, and works counter to your digital transformation.
Read More

Topics: Application Delivery Controller, Software Load Balancer, Hardware

Application Delivery | The Importance of Design

Ted Ranft
Posted on Sep 12, 2017 12:56:40 PM

This week I came across my still-functioning iPod Classic with its “click wheel”. It was my exercise companion for many years, and I still marvel at the engineering innovation (holds thousands of songs!) and the simple elegance of its intuitive user interface (the click wheel!). In today’s consumer society we expect our electronics to be intuitive. When it was announced in 2003, the click wheel was years ahead of its time, when most electronics still came with detailed instruction manuals. Launching a consumer product with a click wheel was a radical approach, pushing the end-user experience to the very limits of engineering.
Read More

Topics: Architecture, Application Delivery Controller, Load Balancing, Software Load Balancer, Application Architectures, Elastic Load Balancing

Application Routing Engine | The Speed and Flexibility of Avi

Abhi Joglekar
Posted on Dec 7, 2016 8:01:00 AM

In an earlier blog, “Avi Vantage: A Cloud-Scale Distributed Software Load Balancer For Everyone”, we described the high-level architecture of the Avi Vantage platform. The platform consists of a clustered, centralized Controller; a scale-out, distributed Layer-7 reverse-proxy data path called the Service Engine (SE); a visibility/analytics engine; a RESTful interface to the Controller that enables integration with external orchestration engines; and a fast, responsive HTML5 UI. It is designed from the ground up to be a modern cloud-scale Application Delivery Controller (ADC) that enables application deployment across any cloud: private, public, or hybrid.
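As a rough illustration of that RESTful interface, here is a minimal sketch that queries a Controller for its configured virtual services. The Controller address, credentials, and response field names are assumptions made for the example, not an exact recipe; check the Avi API documentation for version headers, authentication, and TLS settings in a real deployment.

```python
import requests
from requests.auth import HTTPBasicAuth

# Hypothetical Controller address and credentials, for illustration only.
CONTROLLER = "https://controller.example.com"

resp = requests.get(
    f"{CONTROLLER}/api/virtualservice",      # Controller REST collection of virtual services
    auth=HTTPBasicAuth("admin", "secret"),
    verify=False,                            # lab only; use the Controller's CA certificate in production
    timeout=10,
)
resp.raise_for_status()

# Assumes the collection response carries its items under "results".
for vs in resp.json().get("results", []):
    print(vs.get("name"), vs.get("uuid"))
```

The same interface that an operator’s script can call is what external orchestration engines use, which is why the Controller, rather than any individual SE, is the integration point.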
Read More

Topics: cloud, load balancer, ADC, application delivery, Application Delivery Controller, Load Balancing, Application Services, networking, HTTP, Routing, Lua, Scripts, Application Routing, Content-Based Routing, Content Routing

Software Load Balancers and Cloud Environments | Avi Networks

Abhi Joglekar
Posted on Apr 5, 2016 6:30:00 AM

The Hardware Load Balancer Brick Wall

Last month at the Networked Systems Design and Implementation (NSDI) conference, Google lifted the covers off Maglev, their distributed network software load balancer (LB) [1]. Since 2008, Maglev has been handling traffic for core Google services like Search and Gmail. Not surprisingly, it's also the load balancer that powers Google Compute Engine and enables it to serve a million requests per second without any cache pre-warming [2]. Impressive? Absolutely!

If you have been following application delivery in the era of cloud, say over the last 6 years, you would have noticed another significant announcement at Sigcomm ‘13 by the Microsoft Azure networking team. Azure runs critical services such as blob, table, and relational storage on Ananta [3], its home-grown cloud-scale software load balancer on commodity x86, instead of running them on more traditional hardware load balancers.

Both Google and Microsoft ran headlong into what can best be described as “the hardware LB brick wall”, albeit at different times and along different paths in their cloud evolution. For Google, it started circa 2008, when the traffic and flexibility needs of their exponentially growing services and applications went beyond the capability of hardware LBs. For Azure, it was circa 2011, when the exponential growth of their public cloud led to the realization that hardware LBs do not scale and forced them to build their own software variant. So, what is this “hardware LB brick wall” that these web-scale companies ran into?
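Maglev’s and Ananta’s actual packet-processing designs are described in their respective papers; purely as a hedged illustration of the scale-out idea, the sketch below uses a generic consistent-hash ring (not Maglev’s own hashing scheme) to pin each flow to a backend, so capacity grows by adding commodity servers and only a small fraction of flows remap when a server joins or leaves. The backend addresses and flow-key format are made up for the example.

```python
import bisect
import hashlib

class HashRing:
    """Generic consistent-hash ring: maps each flow key to one backend and
    remaps only a small fraction of flows when the backend set changes."""

    def __init__(self, backends, vnodes=100):
        # Place several virtual nodes per backend on the ring for a smoother spread.
        self._ring = sorted(
            (self._hash(f"{b}#{i}"), b) for b in backends for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def pick(self, flow_key):
        """Return the backend responsible for this flow's 5-tuple key."""
        idx = bisect.bisect(self._keys, self._hash(flow_key)) % len(self._keys)
        return self._ring[idx][1]

# Made-up backend addresses and flow key, purely for illustration.
ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(ring.pick("198.51.100.7:52345->203.0.113.10:443/tcp"))
```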
Read More

Topics: ADC, SDN, Closed-Loop Application Delivery, Architecture, SSL, Analytics, Application Delivery Controller, Microservices, metrics, Software Load Balancer

Application Health | Avi Health Score Defines Your App Health

Gaurav Rastogi
Posted on Mar 16, 2016 6:00:00 AM

Whether it is a water-cooler conversation about the latest wearable health monitor or the current cautions from the CDC about the Zika virus, health may easily rank as one of the most talked-about topics in our daily lives. As a technologist, I am part of a number of conversations about a different kind of health: application health, which is a top-of-mind concern for enterprise application developers and administrators.

The discussion of human health always evokes passionate debates, and it turns out this was no different with application health. That was the case at Avi Networks when we asked a simple question: how do admins know that applications are in "good" health? I don't believe we had more meetings and debates about any other topic than we had about application health. In this blog post, I will take you through some of those passionate yet fascinating discussions that led to the creation of the Avi Health Score, a key capability of the Avi Vantage Platform.

The team had people with diverse backgrounds, so we asked everyone the same question: "What does application health mean to you?" Here is a sample of the responses we received:

"Health is how much throughput my application can deliver. If it is doing 10Gbps, that means it is good."
"Health is bad when CPU and memory are above 100%."
"Health is good when latency is below 100ms."
"Health is good if the application is up and responding to the health checks."

In the real world, if I ask you, "Do you believe I am in good health if I ran 3 miles today?", then depending on who you are you will likely respond with "it depends"; "of course!"; "did you run just today, or do you run every day?"; or "what were your heart rate and vitals after the run?" You will have a whole lot of follow-up questions to dig into the details. To put this in perspective, tennis champ Roger Federer would likely win in straight sets against most people even if he were running a fever. Would that make him healthy? Of course not! As you can see, a simple data point of a 3-mile run is not enough for a doctor to give a certificate of good health.

Similarly, if you think you can determine a server's health based on the simple fact that it can handle a throughput of 10Gbps, you are probably wrong. It was hard for me to come to terms with this, especially given that I had spent most of my career before Avi Networks at a hardware company, where it was normal to consider networking hardware healthy when a link is up and pumping at a bandwidth of 10Gbps. Applying Lessons from Human Health
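To make the "a single data point is not enough" argument concrete, here is a minimal sketch of a composite score in the spirit of the Avi Health Score: a performance score reduced by penalties for resource pressure, anomalies, and security posture. The weights and field names are illustrative assumptions, not the product's actual formula.

```python
def health_score(performance, resource_penalty, anomaly_penalty, security_penalty):
    """Composite 0-100 score: one strong signal cannot mask the others.
    Hypothetical weighting for illustration, not Avi's actual formula."""
    score = performance - (resource_penalty + anomaly_penalty + security_penalty)
    return max(0, min(100, score))

# A server pushing 10 Gbps (high performance score) but with CPU pegged and
# anomalous latency spikes still lands at a mediocre overall health score.
print(health_score(performance=90, resource_penalty=25, anomaly_penalty=15, security_penalty=0))
```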
Read More

Topics: Application Delivery Controller, healthscore, metrics

Application Delivery Beyond Load Balancing | Stay Away From Hardware

Swarna Podila
Posted on Mar 4, 2016 11:07:46 AM

It’s been two weeks since I joined Avi, or four Silicon Valley weeks! If you imagine the proverbial “firehose”, I think it is safe to say Avi unleashes at least 10x more. So yes, these two weeks have been a whirlwind for me.
Read More

Topics: OpenStack, ADC, SDN, Avi Networks, Application Delivery Controller, Microservices

Load Balancing | Confessions of a Hardware ADC Salesman

John Huang
Posted on Nov 23, 2015 10:35:01 AM

INTRODUCTION

I have helped enterprises “deliver” their applications for a long time. And somewhere in every conversation with a prospective customer, I always asked, “What will your traffic be like in 3-5 years?” Yes, I was sincerely trying to size their network equipment to sell the right load balancers. But I was also doing my company’s bidding to sell enterprises more gear than they would likely need. I was as guilty of doing it as any of my counterparts at competing load balancing vendors. It was always good to play up the prospect’s optimism, especially in tech-heavy Silicon Valley.

However, the traffic growth question asked by hardware ADC vendors is impossible for mere mortals to answer, and most companies just capitulate and buy enough capacity for 4-5 years down the road. More often than not, when I revisited my customers in 12-24 months, I found that gear still running at single-digit capacity.

The tendency to oversell gear wasn’t an intentional desire to exploit customers; it was driven by the architectural limitations of an inelastic hardware model. Paying upfront for anticipated future growth has been an accepted norm in the IT industry for a long time. Disruptive forklift upgrades are par for the course in an IT world where nobody has a crystal ball for how traffic might outgrow existing equipment.
Read More

Topics: Application Delivery Controller, Load Balancing

  