My colleague wrote a blog post asking "What Is 'Intent-Based' Anyway?" Seeing is believing, and that's exactly what this post "intends" to do: put your fingers on something tangible while catching you up on the next big thing, "intent-based networking".
Who makes the best tester for an "intent-based" system? Someone who has an interest in the "outcome" the system delivers, but not necessarily in the detailed steps or manual configuration needed to constantly care for it. To some degree, they have neither the time nor the skill set to learn the ins and outs of how the system works; they care about what it can deliver in a highly automated fashion. So we think DevOps teams might be a perfect example.
This video describes a typical "ticket-free" workflow between the infrastructure and application teams. It starts from a portal where IT admins can grant access (via RBAC) to the DevOps teams so they can provision virtual services on their own instead of consuming manual cycles. Notice that the information requested is very "intent-based": end users don't need to provide detailed configuration parameters such as load balancer sizing or network properties like IPAM, DNS, and SSL certificates. Behind the scenes, a JSON file is automatically sent to the controller, which instantiates the virtual services through a set of cleanly defined RESTful APIs.
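To make that concrete, here is a minimal sketch of what such an intent-style payload might look like. The field names and the shape of the request are assumptions for illustration only, not the controller's actual API schema:

```python
import json

# Hypothetical sketch: the field names below are assumptions for
# illustration, not the controller's real API schema.
def build_intent_payload(app_name, fqdn, cloud):
    """Capture only the user's intent; load balancer sizing, IPAM,
    DNS records and certificates are left for the controller to resolve."""
    return {
        "virtual_service": {
            "name": app_name,    # what the app is called
            "fqdn": fqdn,        # where it should be reachable
            "cloud_ref": cloud,  # which environment, not its details
        }
    }

body = json.dumps(build_intent_payload("storefront", "shop.example.com", "vmware"))

# The portal would then POST `body` to the controller's REST endpoint
# (not executed here).
print(body)
```

The point is what is *absent*: nothing about instance sizes, IP pools, or certificates ever reaches the end user.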
Great, it looks powerful and promising. But the real test is when you put the system under stress. Will it really be robust and intelligent enough to respond under the toughest conditions? Is "intent-based" automation really better than intensive manual care? Let's put it to the test!
Avi has a software-defined architecture that separates the control plane (Avi Controller) from the data plane (Avi Service Engines). This allows Avi to scale out easily on demand as traffic grows. It can be deployed in two modes:
- Auto: the Avi Controller automatically detects a CPU surge or other metric bottlenecks and increases capacity by autoscaling Avi Service Engines. The system decides how many are needed, what size, and when to scale back, so you don't need to overprovision (read: overspend on) idle load balancers. This is very useful for scenarios like Black Friday, NFL ticket sales, or seasonal business spikes.
- Manual: you can hit one button (yes, only one, sorry) to scale out, and the system automatically spins up a new Service Engine for you in a few minutes. Although it's called manual (more accurately, it's an authorized mode), tons of automation happens in the background. Your focus stays on the applications, for example on SLAs monitored via the health scores. You simply express an "intent", then sit back and watch the "outcome"!
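The auto mode above boils down to a scaling decision loop. As a hedged sketch, assuming average CPU is the only metric (a real controller weighs many signals, and these names are illustrative, not Avi's internals):

```python
# Simplified autoscaling decision, assuming average CPU is the only
# signal. Thresholds and names are illustrative, not Avi's internals.
def desired_engine_count(current, avg_cpu,
                         scale_out_at=80.0, scale_in_at=30.0,
                         minimum=1, maximum=10):
    if avg_cpu >= scale_out_at:
        return min(current + 1, maximum)   # add a Service Engine
    if avg_cpu <= scale_in_at and current > minimum:
        return current - 1                 # reclaim idle capacity
    return current                         # hold steady

print(desired_engine_count(2, 92.0))  # traffic spike: grow to 3
print(desired_engine_count(3, 12.0))  # quiet period: shrink to 2
```

The `minimum`/`maximum` bounds are what keep "scale back" from ever tearing down the last engine or a spike from growing capacity without limit.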
It's difficult to do multi-cloud right, so why do we need it? Because you want choice and flexibility (and you may want to hedge your risks by not putting all your eggs in one basket). Ideally you don't want siloed solutions intended for just one environment; you want to manage everything centrally. Your applications now span multiple data centers, multiple regions within a cloud, and even multiple clouds. Application architecture has evolved to run on different instance types: bare metal, virtual machines, and containers. You need a flexible load balancing solution that lets you focus on the applications, not on where and how they are deployed. Great, but how do you do that?
The multi-cloud requirement itself calls for a solution that is infrastructure agnostic. This can be achieved with central control and a distributed data plane that can be instantiated anywhere. However, that alone is not sustainable if you still have to handle the details of how each environment operates, or reimplement your solution every time you move into a new environment. That's where "intent-based" systems come in. They create a layer of abstraction that lets you maintain the higher-level workflows enabling multi-cloud automation, yet handles the differences behind the scenes. They provide declarative policies through which IT generalists, DevOps engineers, or application owners can specify their "intent".
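Here is one way to picture that abstraction layer as code. Every name and step below is invented for this sketch (not an actual Avi policy or API): the intent declares *what* the owner wants, and a per-cloud table, maintained once inside the layer, supplies the *how*:

```python
# Hedged illustration of the abstraction layer; all names and steps
# are invented for this sketch, not an actual Avi policy or API.
INTENT = {
    "app": "checkout",
    "wants": ["load_balancing", "tls", "dns_record"],  # the declared "what"
}

# The per-cloud "how", kept inside the abstraction layer so the intent
# above never changes when a new environment is added.
HOW = {
    ("load_balancing", "vmware"): "deploy a service engine via vCenter",
    ("load_balancing", "aws"):    "launch a service engine EC2 instance",
    ("tls", "vmware"):            "bind a certificate from the local store",
    ("tls", "aws"):               "bind a certificate from ACM",
    ("dns_record", "vmware"):     "register with the corporate DNS",
    ("dns_record", "aws"):        "create a Route 53 record",
}

def realize(intent, cloud):
    """Expand one declarative intent into cloud-specific steps."""
    return [HOW[(want, cloud)] for want in intent["wants"]]

print(realize(INTENT, "aws"))
```

Moving `checkout` to a new cloud means adding rows to `HOW`, not rewriting the intent, which is exactly the sustainability argument above.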
In this video, you simply select which cloud types (e.g. VMware, AWS, Azure) you want the applications deployed to. The next level of detail, such as which servers are needed, is handled automatically by the controller talking to the respective environments (e.g. vCenter, the Kubernetes/OpenShift primary, Azure Resource Groups). Yes, it's a paradigm shift: multi-cloud lets you focus on the application itself, while where it happens to be deployed becomes a secondary consideration.
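The workflow in the video can be sketched as a tiny dispatcher: the user supplies only the app and the chosen cloud types, and a connector per environment handles the rest. The connectors below are placeholders standing in for real integrations:

```python
# Placeholder connectors; a real controller would call vCenter, the
# Kubernetes/OpenShift API server, or Azure Resource Manager here.
CONNECTORS = {
    "vmware": lambda app: f"vCenter: provision VMs for {app}",
    "aws":    lambda app: f"AWS: launch instances for {app}",
    "azure":  lambda app: f"Azure Resource Groups: create resources for {app}",
}

def deploy(app, selected_clouds):
    """The user's entire input is the app plus the cloud types."""
    return [CONNECTORS[cloud](app) for cloud in selected_clouds]

print(deploy("storefront", ["vmware", "azure"]))
```

Note that `deploy` takes no server counts, sizes, or credentials; those belong to the connectors, which is the paradigm shift in miniature.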