Application Delivery Blog

Configuring Avi Cloud Using Kubernetes Job

Bhushan Pai
Posted on Nov 14, 2018 11:38:13 AM

Avi Vantage provides enterprise-grade load balancing for container ecosystems like Kubernetes. To install Avi on Kubernetes, a Cloud needs to be configured on the Avi Controller, which allows it to make API calls to the Kubernetes primary to automate service discovery and load balancing.

This can be a one-time, manual step if the Kubernetes cluster is long-lived. But when setups are tied into a CI/CD framework, whole clusters may be created and destroyed frequently along with the apps being tested. This requires an automated way to configure Avi every time a Kubernetes cluster is created.

Avi provides a suite of automation options, like a Python SDK and Ansible modules, to enable developers to add Avi configuration to their CI/CD pipelines. This article explores one such option: running an Ansible playbook as a Kubernetes Job that gathers details about the cluster it's running on and configures a Kubernetes Cloud on a designated Avi Controller to manage it.

The Kubernetes object specs can be found in the avinetworks/devops GitHub repository.

Components and Workings of the Job



As shown above, the task of configuring a Cloud on Avi is achieved by creating a Kubernetes Job and providing it with the necessary configuration. The Job creates a Pod from the avinetworks/avitools image, which has Ansible and all other dependencies installed.
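For illustration, a minimal Job spec along these lines might look roughly as follows (all field values and names here are assumptions; the actual spec lives in the avinetworks/devops repo as cloudinit-job.yml):

```yaml
# Illustrative sketch only -- names, paths, and tag are assumptions
apiVersion: batch/v1
kind: Job
metadata:
  name: avi-cloudinit
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: avitools
        image: avinetworks/avitools:latest
        # Run the mounted playbook once; path is an assumption
        command: ["ansible-playbook", "/ansible/main.yml"]
```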

The Ansible playbooks are mounted to the Pod by adding ConfigMaps to the Job. The cloudinit-configmap contains the Ansible tasks, while the aviansible-configmap contains setup specific variables and Avi objects (networks, IPAM/DNS profiles, cloud config, etc).
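In the Job's Pod template, the ConfigMap mounts might be wired up like this fragment (volume names and mount paths are assumptions; Kubernetes allows the nested mount shown here):

```yaml
# Fragment of a Pod template spec -- volume names and paths are illustrative
      containers:
      - name: avitools
        image: avinetworks/avitools:latest
        volumeMounts:
        - name: aviansible          # playbook files at the root
          mountPath: /ansible
        - name: cloudinit           # variable files nested under conf/
          mountPath: /ansible/conf
      volumes:
      - name: aviansible
        configMap:
          name: aviansible-configmap
      - name: cloudinit
        configMap:
          name: cloudinit-configmap
```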

The username and password for the Avi Controller REST API are passed as environment variables using avicontroller-secret.
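A common way to do this is with `secretKeyRef` entries in the container spec; the environment variable and Secret key names below are assumptions:

```yaml
# Fragment of the container spec -- env var and key names are assumptions
        env:
        - name: AVI_USERNAME
          valueFrom:
            secretKeyRef:
              name: avicontroller-secret
              key: username
        - name: AVI_PASSWORD
          valueFrom:
            secretKeyRef:
              name: avicontroller-secret
              key: password
```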

The Avi ServiceAccount is added to the Job definition so that the service account token associated with it gets mounted on the Pod and can be used for the cloud configuration. The avirole ClusterRole defines the level of access Avi Controllers get on the Kubernetes cluster; check this KB for more details.
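Structurally, avi_sa.yml bundles a ServiceAccount, a ClusterRole, and a binding; the sketch below shows the shape only, with illustrative rules (the actual permissions required by the cloud connector are defined in the repo and KB):

```yaml
# Illustrative sketch of avi_sa.yml -- the rules here are NOT the real
# permission set; use the manifest from the avinetworks/devops repo
apiVersion: v1
kind: ServiceAccount
metadata:
  name: avi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: avirole
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: avirolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: avirole
subjects:
- kind: ServiceAccount
  name: avi
  namespace: default
```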

Steps to create the Job and related objects

* Run the script to create the Avi Controller Secret YAML. It encodes the cleartext password to base64:

./ <username> <password>
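As a sketch, such a script could do something like the following (the Secret's key names and overall structure here are assumptions; use the actual script from the repo):

```shell
#!/bin/sh
# Sketch: base64-encode the credentials passed as arguments and emit a
# Kubernetes Secret manifest. Key names (username/password) are assumptions.
USER_B64=$(printf '%s' "$1" | base64)
PASS_B64=$(printf '%s' "$2" | base64)
cat <<EOF > avicontroller-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: avicontroller-secret
type: Opaque
data:
  username: ${USER_B64}
  password: ${PASS_B64}
EOF
```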


* Create the secret on Kubernetes:

kubectl create -f avicontroller-secret.yml


* Edit cloudinit-configmap.yml and aviansible-configmap.yml


* Create the configmap objects on Kubernetes:

kubectl create -f aviansible-configmap.yml

kubectl create -f cloudinit-configmap.yml


* Create the Avi ServiceAccount, ClusterRole, and role binding. These are used by the Avi cloud connector:

kubectl create -f avi_sa.yml


* Create the cloud-init job:

kubectl create -f cloudinit-job.yml
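Once created, the Job's progress and the playbook output can be checked with standard kubectl commands, for example:

```shell
kubectl get jobs
kubectl logs job/<job-name>   # use the Job name defined in cloudinit-job.yml
```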

Details about the Ansible playbook

The cloudinit-configmap.yml file contains two files, avi_vars.yml and aviconfig.yml. The aviansible-configmap.yml file contains one file, main.yml. These Ansible playbook files get mounted on the avitools Pod. The file hierarchy looks as shown:



|-- conf
|   |-- avi_vars.yml
|   `-- aviconfig.yml
|-- main.yml

The avi_vars.yml file contains variables (IP addresses, object IDs, etc.) which need to be updated to match the setup before the ConfigMap is created. The variables and their descriptions are given below:


Variable           Description
cloud_name         Avi cloud name (unique per cluster)
avi_controller     Avi Controller IP address
version            Avi Controller version
registry_url       Registry URL to pull the SE image
license_type       License type (e.g., LIC_HOSTS)
ns_subnet_id       AWS subnet ID on which the k8s nodes are deployed
region             AWS region ID where the k8s cluster is created
vpc_id             AWS VPC ID in which the k8s cluster is created
availability_zone  AWS AZ ID in which the k8s cluster is created
iam_assume_role    ID of the assume role if the controller is in a separate VPC
proxy_host         Settings to reach the AWS API through a proxy
master_nodes       K8s API endpoints (list multiple if the primaries are not load balanced)
subdomain          Subdomain for assigning application FQDNs
ns_attribute_key   Key-value pair used to identify Ingress nodes


If the tasks and configurations remain the same, only the variables in this file need to change for each cluster; the other files remain unchanged.
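As an illustration, a trimmed-down avi_vars.yml for a hypothetical AWS-hosted cluster might look like this (every value below is a placeholder, not a working configuration):

```yaml
# All values are placeholders for a hypothetical setup
cloud_name: k8s-ci-cluster-01
avi_controller: 10.10.0.50
version: 18.1.2
registry_url: docker.io/avinetworks/se
license_type: LIC_HOSTS
region: us-west-2
subdomain: apps.example.com
master_nodes:
  - https://10.10.0.10:6443
```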

The aviconfig.yml file contains the configuration for various objects in Avi, like networks, IPAM and DNS profiles, and the cloud itself. Here we leverage the aviconfig Ansible role to specify all objects in the same YAML, which is consumed by the task in main.yml. The role handles the order in which these objects are created on Avi. For more information on the aviconfig role, check the GitHub repo.
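Conceptually, aviconfig.yml holds a single nested structure that the role walks through in dependency order; a fragment might look like this (object types and fields are illustrative, not the exact schema -- see the role's documentation):

```yaml
# Illustrative fragment -- consult the aviconfig role docs for the real schema
avi_config:
  cloud:
    - name: "{{ cloud_name }}"
      vtype: CLOUD_OSHIFT_K8S
  ipamdnsproviderprofile:
    - name: k8s-dns
      type: IPAMDNS_TYPE_INTERNAL_DNS
```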

The main.yml file contains the main Ansible playbook which gets executed in the Pod. It references the variables from avi_vars.yml and the configuration from aviconfig.yml.
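The playbook itself can be a single local play that loads both variable files and invokes the role, along these lines (a sketch; the exact role reference is an assumption):

```yaml
# Sketch of main.yml -- the role name/reference is an assumption
- hosts: localhost
  connection: local
  vars_files:
    - conf/avi_vars.yml
    - conf/aviconfig.yml
  roles:
    - role: avinetworks.aviconfig
```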
