Organizations are adopting an app-centric approach to computing in their data centers and clouds. App-centric enterprises increasingly use microservices architectures to achieve continuous development and delivery, scaling, and isolation through independent services. While microservices applications offer several advantages over monolithic applications, challenges in supporting application services remain. For example, traditional appliance-based application delivery and services solutions cannot support the high volume of east-west traffic between services and offer no visibility into application components and their interactions. These application delivery controllers (ADCs) were not designed for dynamic environments where change is constant, automation is a must, and self-service for developers is expected. Application developers require two main capabilities: (a) the flexibility and programmability to develop, test, and deploy their apps quickly; and (b) visibility into application interactions so they can enforce the required security posture and pinpoint the specific service that caused an application outage.
Achieving agility is of paramount importance for enterprise IT today, and one key way many companies pursue it is by deploying higher degrees of network automation. This has led to the rise of software-defined networking (SDN), which is designed to drive automation and programmability network-wide. Cisco, the undisputed leader in networking today, has seized on this important market transition with its unique application-centric approach, appropriately named Cisco Application Centric Infrastructure (ACI). Cisco ACI has seen significant traction, spurring our joint (Cisco and Avi Networks) customers to request deeper levels of integration between our respective solutions.
Load balancers and application delivery controllers have one critical job. No, it's not distributing clients across servers, though that is an important aspect of what they do. At its core, a load balancer's task is to reduce risk. One of the most common ways risk is introduced is through the complexity of a system, such as a legacy load balancer.
Take the example of a jet engine. It is composed of numerous components, each adding its own complexity and potential for failure. By taking advantage of new technologies such as 3D printing, GE was able to reduce its jet engine's 25-part fuel injection nozzle down to a single part. This reduces cost, time to market, and complexity in the overall system, which in turn improves the engine's reliability. At its core, it reduces risk. In the context of application delivery and load balancing, what if a single button could guarantee optimal SSL security settings or maximize application acceleration?
As some of you may be aware, a major security breach was reported at a well-known multinational company - we'll refer to them as Company X - on November 24, 2014. In the breach, the private keys and SSH keys of their servers were stolen. Among other things, the attacker(s) can use the stolen keys to attempt to decrypt confidential traffic they may have captured in the past. This thought, and my professional instinct, led me to take a close look at some of their secure websites.
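Whether a stolen server key actually enables that kind of retrospective decryption hinges on forward secrecy: if the server negotiates an ephemeral key exchange (an ECDHE or DHE suite, or any TLS 1.3 suite), captured traffic stays safe even after the private key leaks. As a minimal sketch of the kind of check I mean (the helper names here are my own, not from any particular tool), Python's standard `ssl` module can report what a site negotiates:

```python
import socket
import ssl

def provides_forward_secrecy(cipher_name: str) -> bool:
    """True if the negotiated suite uses ephemeral key exchange.

    ECDHE-/DHE- suites (TLS 1.2 and earlier) and all TLS 1.3 suites
    (named TLS_* in OpenSSL) provide forward secrecy; static-RSA
    suites such as AES256-SHA do not.
    """
    return cipher_name.startswith(("ECDHE-", "DHE-", "TLS_"))

def check_site(host: str, port: int = 443) -> tuple:
    """Connect to host and return (negotiated cipher name, FS status)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, _protocol, _bits = tls.cipher()
            return name, provides_forward_secrecy(name)
```

A server that lacks forward secrecy negotiates a static-RSA suite such as AES256-SHA, which is exactly the configuration that makes a key theft like Company X's retroactively damaging.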
With any disruptive innovation, there will always be innovators and early adopters who eagerly jump on the bandwagon. Today, we’re seeing more and more businesses move to the cloud as users take advantage of its utility-based model and associated economic benefits. So why isn’t everyone “crossing the chasm” and what will it take to persuade the late majority and laggards to move to the cloud?
There is no doubt that on-demand cloud services are gaining ground among corporate users of all sizes. Despite the obvious promise of greater flexibility and efficiency, the transition to the cloud will remain incomplete until a few thorny issues are solved.