Software-Defined Application Services Tech Blog

Kubernetes and OpenShift Networking Primer

Roberto Casula
Posted on Dec 20, 2017 1:39:30 PM

Networking in Docker

Docker's default networking model on Linux is based on local host bridging via a native Linux bridge (usually called docker0). Each container gets a virtual Ethernet (veth) pair: one end is attached to the bridge on the host, while the other end appears, via Linux network namespaces, as a local eth0 interface inside the container and is assigned an IP address from the bridge's subnet.
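On a Linux host with Docker installed, this wiring can be observed directly. The names below are Docker's defaults (the bridge network, docker0) and may differ on a customized host:

```shell
# Subnet that the default bridge network hands out addresses from
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'

# The docker0 bridge and the host-side veth peers attached to it
ip addr show docker0
ip link show type veth

# Inside a container, the same wiring appears as a plain eth0
docker run --rm alpine ip addr show eth0
```

The host-side veth interfaces and the in-container eth0 are the two ends of the same virtual Ethernet pair.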

Read More

Topics: kubernetes, openshift

Application Routing Engine | The Speed and Flexibility of Avi

Abhi Joglekar
Posted on Dec 7, 2016 8:01:00 AM

In an earlier blog, “Avi Vantage: A Cloud-Scale Distributed Software Load Balancer For Everyone”, we described the high-level architecture of the Avi Vantage platform. The platform consists of a clustered, centralized Controller; a scale-out, distributed Layer-7 reverse-proxy data path called the Service Engine (SE); a Visibility/Analytics engine; a RESTful interface to the Controller that enables integration with external orchestration engines; and a fast, responsive HTML5 UI. It is designed from the ground up to be a modern cloud-scale Application Delivery Controller (ADC) that enables application deployment across any cloud - private, public, or hybrid.

Read More

Topics: ADC, cloud, networking, HTTP, Routing, Application Services, application delivery, Lua, Scripts, load balancer, Load Balancing, Application Delivery Controller, Application Routing, Content Routing, Content-Based Routing

Intelligent Autoscaling | Application Performance Monitoring with Avi

Gaurav Rastogi
Posted on Nov 23, 2016 8:22:00 AM

Not very long ago, one of our co-founders wrote a post on the million-dollar question in the enterprise networking world.  In that post, Ranga discussed how hardware load balancers cannot scale elastically, which is why even web-scale companies such as Facebook and Google leverage software load balancers for elastic autoscaling to match traffic requirements. 

Read More

Topics: autoscale, Autoscaling, Application Performance Monitoring, metrics, APM

Hardware Security Modules | Integrating with Avi Vantage Platform

Chintan Thakker
Posted on Oct 4, 2016 9:05:00 AM

Secure communication is central to today’s web applications. Communication is secured by encrypting the data that flows over the network. Because symmetric ciphers are far faster than asymmetric ones, the bulk of the traffic is encrypted and decrypted using the same shared key to ensure adequate performance. This is called symmetric key encryption.
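As a toy illustration of that defining property (the same key both encrypts and decrypts), here is a deliberately insecure XOR cipher. Production traffic of course uses vetted ciphers such as AES, negotiated via TLS:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte against the repeating key.

    Illustrative only, NOT secure. The point is that encryption and
    decryption are the same operation with the same shared key.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)                       # shared secret known to both peers
plaintext = b"confidential payload"
ciphertext = xor_cipher(plaintext, key)    # encrypt
recovered = xor_cipher(ciphertext, key)    # decrypt with the same key
assert recovered == plaintext
```

Asymmetric (public-key) cryptography is still used during the TLS handshake, but only to agree on this shared symmetric key.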

Read More

Topics: Security, HSM, Hardware Security Module, SSL

Python Best Practices | Reduce Network Vulnerabilities with Avi

Sandeep Yadav
Posted on Sep 23, 2016 11:14:39 AM

At Avi Networks, scalability, security, automation, and self-service are part of our core objectives to develop a world-class product that stands up to the requirements of the most demanding production environments. As with any service exposed to the Internet, network attacks exploiting vulnerabilities can put proxied assets at enormous risk. Such risks include, but are not limited to, the attacker taking full control of the victim network, accessing intellectual property, taking over resident hosts as zombies for Distributed Denial of Service (DDoS) attacks, and more.
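One vulnerability class that routinely bites Python services is shell command injection. A minimal sketch (the helper names are illustrative, and a POSIX ls is assumed) of the unsafe and safe patterns with the standard library's subprocess module:

```python
import subprocess

def list_directory_unsafe(path: str) -> str:
    # DANGEROUS: with shell=True, attacker-controlled input such as
    # "; rm -rf /" is interpreted by the shell -- command injection.
    return subprocess.run("ls " + path, shell=True,
                          capture_output=True, text=True).stdout

def list_directory_safe(path: str) -> str:
    # Passing an argument list (no shell) treats `path` as one literal
    # argument, so shell metacharacters are never interpreted; the `--`
    # also stops ls from parsing the value as an option.
    return subprocess.run(["ls", "--", path],
                          capture_output=True, text=True).stdout
```

With a hostile input like `". ; echo pwned"`, the unsafe variant happily runs the injected `echo`, while the safe variant merely reports that no such file exists.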

Read More

Topics: python, python best practices, Shell, Security

Multi Cloud Deployment | Cloudbursting and Hybrid Clouds

Jason Price
Posted on May 10, 2016 8:30:00 AM

How many times do you hear the word “cloud” on a daily basis? What does it mean to you? Despite becoming a predominant buzzword, “cloud” means different things to different people, leading to confusion in many conversations. In talking with a multitude of customers, it’s becoming clear that many people view cloud computing as something that happens solely off-site, residing entirely at an external hosting provider. This assumption needs to be corrected, as cloud computing describes a genre of architecture and operations rather than just a location. The operational model is centered around consumption-based on-demand resources, automated workloads, and self-service provisioning - all of which can live entirely off-premises (public cloud), entirely on-premises (private cloud), or a combination thereof (hybrid cloud). Avi Networks brings these concepts to the load balancing space, easing the transition to cloud-like environments.

Read More

Topics: autoscale, multicloud, scaleout

Avi Networks Architecture: Virtual ADC Benefits

Marius Sandbu
Posted on Apr 27, 2016 8:30:00 AM

This is a guest blog post by Marius Sandbu, Senior Systems Engineer at Exclusive Networks.  This post was originally published on

Read More

Topics: Avi Networks, Architecture, autoscale, Nutanix Acropolis

Screaming Metrics | Application Performance Monitoring by Avi

Gaurav Rastogi
Posted on Apr 12, 2016 6:30:00 AM

As Avi Networks set out to build the next generation of software load balancers, we wanted them to be optimized and smart.  An important aspect that we considered was to use multiple analyses to understand and automate critical decisions that are usually manual, and often made without enough data. 

Read More

Topics: Analytics, End User Experience, Application Performance Monitoring, metrics, Application Insights

Software Load Balancers and Cloud Environments | Avi Networks

Abhi Joglekar
Posted on Apr 5, 2016 6:30:00 AM

The Hardware Load Balancer Brick Wall

Last month at the Networked Systems Design and Implementation (NSDI) conference, Google lifted the covers off Maglev, its distributed network software load balancer (LB) [1]. Since 2008, Maglev has been handling traffic for core Google services like Search and Gmail. Not surprisingly, it is also the load balancer that powers Google Compute Engine and enables it to serve a million requests per second without any cache pre-warming [2]. Impressive? Absolutely! If you have been following application delivery in the era of the cloud, say over the last six years, you would have noticed another significant announcement, at SIGCOMM ’13, by the Microsoft Azure networking team. Azure runs critical services such as blob, table, and relational storage on Ananta [3], its home-grown cloud-scale software load balancer running on commodity x86, instead of on more traditional hardware load balancers. Both Google and Microsoft ran headlong into what can best be described as the hardware LB brick wall, albeit at different times and along different paths in their cloud evolution. For Google, it started circa 2008, when the traffic and flexibility needs of their exponentially growing services and applications went beyond the capability of hardware LBs. For Azure, it was circa 2011, when the exponential growth of their public cloud led to the realization that hardware LBs do not scale and forced them to build their own software variant.
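The heart of Maglev is its consistent-hashing lookup table, which spreads flows evenly across backends while minimizing reshuffling when a backend changes. A simplified sketch of the table-population algorithm from the paper; the hash functions and the tiny table size here are illustrative, not Google's production choices (the paper uses prime sizes like 65537):

```python
import hashlib

def _h(name: str, salt: str) -> int:
    # Illustrative hash; any well-distributed hash function works here.
    return int(hashlib.md5((salt + name).encode()).hexdigest(), 16)

def maglev_table(backends, table_size=13):
    """Build a Maglev-style lookup table (table_size must be prime)."""
    n = len(backends)
    # Each backend gets its own permutation of table slots, defined by
    # an offset and a skip derived from two independent hashes.
    offsets = [_h(b, "offset") % table_size for b in backends]
    skips = [_h(b, "skip") % (table_size - 1) + 1 for b in backends]
    table = [None] * table_size
    next_idx = [0] * n
    filled = 0
    while filled < table_size:
        for i in range(n):
            # Walk backend i's permutation to its next unclaimed slot.
            while True:
                slot = (offsets[i] + next_idx[i] * skips[i]) % table_size
                next_idx[i] += 1
                if table[slot] is None:
                    break
            table[slot] = backends[i]
            filled += 1
            if filled == table_size:
                break
    return table

def pick_backend(table, flow_key: str):
    # A flow (e.g. a 5-tuple) hashes to a fixed slot in the table.
    return table[_h(flow_key, "flow") % len(table)]
```

Because backends claim slots round-robin from their own permutations, each backend ends up with a nearly equal share of the table, and removing one backend disturbs only the slots it owned.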

So, what is this “hardware LB brick wall” that these web-scale companies ran into?

Read More

Topics: ADC, SDN, Closed-Loop Application Delivery, Architecture, SSL, Analytics, Application Delivery Controller, Microservices, metrics, Software Load Balancer

Application Health | Avi Health Score Defines Your App Health

Gaurav Rastogi
Posted on Mar 16, 2016 6:00:00 AM

Whether it is a water-cooler conversation about the latest wearable health monitor or the current cautions from the CDC about the Zika virus, health may easily rank as one of the most talked-about topics in our daily lives. As a technologist, I am part of many conversations about a different kind of health - application health - which is a top-of-mind concern for enterprise application developers and administrators. The discussion of human health always evokes passionate debates - it turns out that this was no different with application health.

This was the case at Avi Networks when we asked a simple question: how do admins know that applications are in "good" health? I don't believe any other topic generated as many meetings and debates as application health did. In this blog post, I will take you through some of those passionate yet fascinating discussions that led to the creation of the Avi Health Score - a key capability of the Avi Vantage Platform.

The team had people with diverse backgrounds, so we asked everyone the same question: "What does application health mean to you?" Here is a sample of the responses we received:

"Health is how much throughput my application can deliver. If it is doing 10Gbps that means it is good"

"Health is bad when CPU and memory are above 100%."

"Health is good when latency is below 100ms."

"Health is good if the application is up and responding to the health checks."

In the real world, if I ask you, "Do you believe I am in good health if I ran 3 miles today?", depending upon who you are, you will likely respond with "it depends"; "of course!"; "did you run just today or do you run every day?"; or "what was your heart rate and vitals after the run?" You will have a whole lot of follow-up questions to dig into the details. To put this in perspective, tennis champ Roger Federer would likely win in straight sets against most people even if he were running a fever. Would that make him healthy? Of course not!

As you can see, a simple data point like a 3-mile run is not enough for a doctor to issue a certificate of good health. Similarly, if you think you can determine a server's health from the simple fact that it can handle a throughput of 10Gbps, you are probably wrong. It was hard for me to come to terms with this, especially since I had spent most of my career prior to Avi Networks at a hardware company, where it was normal to consider networking hardware healthy when a link is up and pumping at a bandwidth of 10Gbps.
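The same multi-signal idea can be made concrete in code. The sketch below is purely hypothetical - it is not Avi's actual Health Score formula - and simply shows how several dimensions (latency, errors, saturation, liveness) can be folded into one 0-100 score that no single metric could produce alone:

```python
def health_score(metrics: dict) -> float:
    """Hypothetical composite health score on a 0-100 scale.

    NOT Avi's real formula; an illustration that health must combine
    several signals rather than rely on any one of them.
    """
    score = 100.0
    # Latency: penalize tail latency above a 100 ms budget.
    if metrics["p99_latency_ms"] > 100:
        score -= min(30, (metrics["p99_latency_ms"] - 100) / 10)
    # Errors: even a small error rate hurts quickly.
    score -= min(30, metrics["error_rate_pct"] * 3)
    # Saturation: CPU above 80% signals risk of degradation.
    if metrics["cpu_pct"] > 80:
        score -= min(20, metrics["cpu_pct"] - 80)
    # Liveness: a failed health check is a hard zero.
    if not metrics["health_check_up"]:
        score = 0.0
    return max(0.0, score)
```

Note how a server pushing 10Gbps could still score poorly here if its tail latency or error rate is bad - exactly the point of the 3-mile-run analogy above.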

Applying Lessons from Human Health

Read More

Topics: Application Delivery Controller, healthscore, metrics
