
Application Services 101 | Dodging Microservices Pitfalls

Ranga Rajagopalan
Posted on Apr 14, 2016 8:57:30 AM

Traditional applications were built as static monoliths that were deployed and managed by IT. When a new application had to be deployed, IT would create a DNS entry for it, allocate a virtual IP (VIP), and configure that VIP on a load balancer so that clients could discover the application. In the best-case scenario, this process took about 4-6 weeks. Enterprises have collectively recognized the inefficiencies of this throw-it-over-the-wall hand-off from developers to IT operations, but there was no alternative in a data center dominated by purpose-built appliances owned by IT.

In today’s DevOps-driven world, application teams build dynamic microservices and deliver application updates every 1-2 weeks (or even more frequently). Monolithic and n-tier applications have given way to tens or hundreds of microservices, typically built by different teams using different languages and leveraging different frameworks. The goal of microservices and container architectures is to automate resource management and achieve continuous integration and continuous deployment (CI/CD).

In a container cluster, you have a group of hosts, all of which work together to provide compute and memory for your jobs. The orchestration layer manages the entire cluster as though it were a single operating system and enables applications to run anywhere in the cluster.

Tier-level Decomposition

In this approach, you break down the tiers of a monolithic application into multiple smaller container instances. For example, one web server may actually run as five instances of 1- or 2-core web servers with 4GB or 8GB of RAM each. Each instance needs only a few cores and a small amount of memory, but the application as a whole still requires larger capacity, so you run multiple instances of the application and add or remove instances based on the capacity requirements of that service.
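
As a rough illustration, here is a minimal Python sketch of that sizing logic; the per-instance throughput figure is an assumption for the example, not a benchmark:

```python
import math

# Minimal sketch of tier-level sizing: how many small instances do we need?
# The capacity number below is an illustrative assumption, not a measured value.
PER_INSTANCE_RPS = 500  # assumed throughput of one 2-core / 4GB web server instance

def instances_needed(expected_rps: int) -> int:
    """Return the number of small instances required for the expected load."""
    return max(1, math.ceil(expected_rps / PER_INSTANCE_RPS))

print(instances_needed(2200))  # -> 5 instances for 2,200 requests/sec
```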

Service-level Decomposition

In this approach, you break up the application by functionality into smaller services, with each service providing a specific piece of functionality. These clusters of services are then deployed and iterated on independently. This approach to deploying services is called microservices and has gained popularity among developers and application owners.

As enterprises adopt microservices architectures, several application services need to be considered and addressed when deploying these clusters. Think of it as strengthening the armor, abilities, and strategies of a team of "superheroesque" application services that work in concert to tackle your microservices application delivery challenges.

Service Discovery 

Once you have multiple instances of a microservices application, the primary question to answer is "How do I connect to a particular microservice?" or "How does another service consume the service a particular microservice provides?" This challenge is commonly known as Service Discovery. There are multiple point products available today to enable Service Discovery; for example, Apache ZooKeeper (using its key-value store to define a custom protocol for key lookup).
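
As a concrete (if simplified) illustration, here is a sketch of ZooKeeper-based registration and lookup using the open-source kazoo client; the paths, ensemble address, and host:port payload format are assumptions for the example, since real deployments define their own conventions:

```python
from kazoo.client import KazooClient

# Connect to a (hypothetical) ZooKeeper ensemble.
zk = KazooClient(hosts="zk1:2181,zk2:2181")
zk.start()

# Register this instance: an ephemeral node disappears automatically
# if the instance dies, keeping the catalog honest.
zk.ensure_path("/services/cart")
zk.create("/services/cart/instance-", b"10.0.1.17:8080",
          ephemeral=True, sequence=True)

# Discover: another service lists the live instances of "cart".
for child in zk.get_children("/services/cart"):
    data, _ = zk.get(f"/services/cart/{child}")
    print("cart instance at", data.decode())
```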

Setting Up for Service Discovery

After the microservices catalog information is initially populated, the next challenge is to ensure that it derives its updates and data from the right sources. With the aforementioned open-source technologies, you need to develop custom logic that ties into an orchestration system such as Mesos/Marathon, Kubernetes, or Docker to populate the necessary information for service discovery.
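
A sketch of that glue logic might look like the following, polling Marathon's REST API for an app's live tasks; the Marathon URL and app id are hypothetical, and error handling is omitted:

```python
import requests

MARATHON = "http://marathon.example.com:8080"  # assumed Marathon endpoint

def live_endpoints(app_id: str) -> list[str]:
    """Ask Marathon for the currently running tasks of an app."""
    tasks = requests.get(f"{MARATHON}/v2/apps/{app_id}/tasks").json()["tasks"]
    return [f"{t['host']}:{t['ports'][0]}" for t in tasks]

# On every poll, push these endpoints into the discovery store
# (e.g., the ZooKeeper paths in the earlier sketch).
print(live_endpoints("cart"))
```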

Load Balancing and Service Proxy

The next major challenge is load balancing application traffic for these microservices applications, which generate a large volume of east-west traffic. Given the open-source pedigree of many microservices applications, a common load balancer recommendation is an open-source solution like HAProxy. As you configure the instances that constitute the microservice, you will realize that you need to rely on open-source scripts to build, populate, and update the configuration. This architecture has certain drawbacks, however:

(a) You must ensure these scripts stay in sync with your connectors so that HAProxy can scale the server pool up (or down) as instances of the microservice application come and go.

(b) You end up with distributed HAProxy instances performing distributed application health checks, as opposed to a central controller that performs health checks once and shares that information with the distributed load balancers.

(c) Each HAProxy instance load balances the east-west traffic independently of the other HAProxy instances, because there is no shared state across them.
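
The configuration scripts mentioned above might look something like this sketch, which renders an HAProxy backend from the discovered instances and reloads the proxy; file paths and the reload command vary by distribution and are assumptions here:

```python
import subprocess

def render_backend(name: str, endpoints: list[str]) -> str:
    """Render an HAProxy backend stanza for the given service instances."""
    lines = [f"backend {name}", "    balance roundrobin"]
    lines += [f"    server {name}-{i} {ep} check"
              for i, ep in enumerate(endpoints)]
    return "\n".join(lines) + "\n"

# Simplified: real scripts regenerate the whole config file from a template.
with open("/etc/haproxy/haproxy.cfg", "a") as cfg:
    cfg.write(render_backend("cart", ["10.0.1.17:8080", "10.0.1.18:8080"]))

subprocess.run(["systemctl", "reload", "haproxy"], check=True)
```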

Traffic Handling

You have configured load balancers for the application traffic within the microservices cluster. Now you must also handle routing for the north-south traffic entering the cluster from outside. You must work with your networking team to set up routing rules that forward traffic from public IPs to HAProxy instances and ensure that these routes always point to the right HAProxy instance. If an HAProxy instance fails over in a high-availability setup, you need to update the routing rules manually, or write a script that changes the next hop from one host to the other.
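
Such a failover script might look like this sketch: health-check the active HAProxy host and repoint the route at the standby when it goes down. The IPs, subnet, and use of `ip route` are illustrative assumptions:

```python
import socket
import subprocess

ACTIVE, STANDBY = "10.0.0.10", "10.0.0.11"  # hypothetical HAProxy hosts
VIP_SUBNET = "203.0.113.0/24"               # hypothetical public-facing range

def is_up(host: str, port: int = 80, timeout: float = 2.0) -> bool:
    """Crude TCP health check against the load balancer."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

if not is_up(ACTIVE):
    # Repoint the next hop for north-south traffic at the standby instance.
    subprocess.run(["ip", "route", "replace", VIP_SUBNET, "via", STANDBY],
                   check=True)
```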


Visibility

Once you have set up all the application services and deployed your microservices applications, the next step is to monitor application traffic and performance. Some of the metrics you need to measure are the number of connections, transactions per second, user types, and user behavior. One combination to consider is Nagios with Graphite/Grafana: build an adapter that collects metrics, statistics, and other such information from HAProxy and populates it in these tools for visual, actionable insights.
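
An adapter of that kind might be sketched as follows, scraping HAProxy's CSV stats export and forwarding one metric to Graphite's plaintext listener; the stats URL, Graphite host, and metric naming scheme are assumptions:

```python
import csv, io, socket, time
import requests

STATS_URL = "http://lb.example.com:9000/haproxy?stats;csv"  # assumed stats page
GRAPHITE = ("graphite.example.com", 2003)                   # plaintext protocol port

raw = requests.get(STATS_URL).text
rows = csv.DictReader(io.StringIO(raw.lstrip("# ")))  # strip the leading "# "

with socket.create_connection(GRAPHITE) as sock:
    now = int(time.time())
    for row in rows:
        # scur = current sessions; one Graphite datapoint per proxy/server pair.
        metric = f"haproxy.{row['pxname']}.{row['svname']}.scur"
        sock.sendall(f"{metric} {row['scur']} {now}\n".encode())
```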

Autoscaling


Containers, microservices, and CI/CD initiatives gained popularity because they automate application and service delivery. Scaling a microservices application should therefore also be automatic and should not require manual intervention. You need insight into traffic patterns to define autoscaling triggers for microservice instances or load balancers and to plan for peak usage. Organizations with webscale IT requirements, such as Google, Facebook, and Netflix, have built custom, in-house resource schedulers for autoscaling (up and down). However, not every enterprise is equipped to dedicate internal IT resources to such custom development.
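
A bare-bones trigger, under assumed thresholds and a hypothetical Marathon endpoint, could look like this sketch: when connections per instance cross the threshold, ask the orchestrator for one more instance:

```python
import requests

MARATHON = "http://marathon.example.com:8080"  # assumed Marathon endpoint
MAX_CONNS_PER_INSTANCE = 400                   # assumed capacity target

def autoscale(app_id: str, current_conns: int) -> None:
    """Scale the app up by one instance if it is running too hot."""
    app = requests.get(f"{MARATHON}/v2/apps/{app_id}").json()["app"]
    instances = app["instances"]
    if current_conns / instances > MAX_CONNS_PER_INSTANCE:
        requests.put(f"{MARATHON}/v2/apps/{app_id}",
                     json={"instances": instances + 1})

autoscale("cart", current_conns=2500)
```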

Security

Securing microservices applications deserves thoughtful consideration, since conventional security technologies do not directly address their needs. Consider a set of microservices such as Catalog, Shopping Cart, Credit Card Info, and Customer Reviews. It is only logical for the Catalog, Shopping Cart, and Customer Reviews microservices to interact with each other; however, the Customer Reviews microservice has no need to interact with the Credit Card Info service. You must embed such security rules to set up the correct inter-service interactions. When a monolithic application is split into tens or hundreds of microservices, security quickly gets complicated (and can easily spiral out of control). You must either rely on your security team or custom-build an adapter for Linux iptables on the HAProxy hosts to configure and manage security.
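
Such an iptables adapter might be sketched as follows: whitelist only the legitimate service-to-service flows and drop everything else. The subnets and ports are hypothetical; in practice the allowed flows would be derived from the service discovery data:

```python
import subprocess

ALLOWED_FLOWS = [
    ("10.0.1.0/24", "10.0.2.0/24", "8080"),  # Catalog       -> Shopping Cart
    ("10.0.2.0/24", "10.0.3.0/24", "8443"),  # Shopping Cart -> Credit Card Info
    # Note: no rule from Customer Reviews to Credit Card Info.
]

for src, dst, port in ALLOWED_FLOWS:
    subprocess.run(["iptables", "-A", "FORWARD", "-s", src, "-d", dst,
                    "-p", "tcp", "--dport", port, "-j", "ACCEPT"], check=True)

# Default-deny anything not explicitly allowed between service subnets.
subprocess.run(["iptables", "-A", "FORWARD", "-j", "DROP"], check=True)
```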

Troubleshooting


Automation objectives must permeate troubleshooting mechanisms as well. Application owners need granular visibility to pinpoint the specific microservice or application service that caused an outage. Here again, you will need to rely on the combination of Nagios with Graphite/Grafana.


You have just finished developing your application, but to provision and deploy it you must begin assembling this "army" of services. There are many open-source solutions and DIY options out there (that probably look like the Lego bins shown below). Webscale IT companies such as Google, Facebook, and Netflix have embraced open-source technologies and have dedicated teams to customize and assemble the blocks. However, not every organization can dedicate its IT team’s resources to custom-developing an army of application services.

[Image: bins of assorted Lego bricks]

Avi Networks saw an opportunity to develop a framework for application services that collectively addresses these needs. We had already created an architecture that decomposes service delivery into a separate distributed data plane while keeping the control point central, and that architectural parity made it natural to extend the same capability to microservices applications. If you cannot afford to spend your IT team’s time on custom development, or if you would rather have one solution that simplifies your microservices deployment, try Avi today!


Avi provides 1-click installation, 1-click upgrade, granular visibility into application traffic and performance, scalability, predictive autoscaling, L4-L7 security, and distributed load balancers with central management. We have also created an easy way for you to get started with your container-based application on Amazon AWS along with an online knowledge base to help you and a forum (monitored by our engineers) to ask questions.

Have you deployed microservices applications in your organization? What are your experiences? Do you have any gotchas to share? Tweet us @AviNetworks!

PS: Avi offers free licenses for dev/test environments.

PS2: Avi offers free dev/test/prod licenses for startups with <100 employees.
