In the last couple of years, we have seen a rapid shift in application architecture toward distributed microservices—where monolithic, bulky applications are decomposed into smaller individual services that can be independently changed, built, deployed, and managed. The main advantages of this model are simplicity and speed, along with ease of upgrade and independent scaling with minimal or no dependencies on other services. This fits in well with the Agile and DevOps methodologies that have been successfully adopted by many web-scale companies. Most of these companies have had this model working fairly well for several years, but the two biggest initiatives that brought it to the mass market in the last few years have to be Docker and Kubernetes. Docker simplified building these microservices as Linux containers, and Kubernetes helped deploy, manage, and scale them in a resource-optimized manner.
Application Architecture Evolution
In this blog, we aren’t going to spend much time discussing the advantages of microservices architecture. Instead, I'll focus on the major shift toward cloud native architecture that uses microservices as a building block.
While microservices architecture offers flexibility, it also comes with complexity. Kubernetes does a good job of deploying and managing these microservices, but you need much more than that to operate a production-grade cloud native application: service discovery, security, traffic management, and deeper insights. Insight is especially critical in this complex world, where hundreds or even thousands of services talk to each other and are frequently spawned, scaled, updated, and deleted.
Challenges in Microservices Architecture
This scale and dynamism brought a huge shift in how the infrastructure running monolithic appliances and applications was managed. This new generation of architecture needed a whole set of new technologies in its ecosystem to support such a dynamic environment. It seemed like we needed multiple solutions at every level of the infrastructure stack to deliver on all use cases. Depending on the need, infrastructure teams started integrating these technologies into the platform, which also meant an additional burden on application developers to support them.
Infrastructure Stack Level View
This is NOT what folks had signed up for, and definitely not the agility and ease of development and deployment that microservices architecture had promised.
Then came the concept of service mesh, something which Avi Networks has been focused on delivering to its customers since even before the term was coined and made a de facto standard by open source projects like Istio and Linkerd. We are very pleased to see how service mesh has been embraced by the community and is seen as a MUST if you are running a microservices infrastructure.
So what is a “Service Mesh” and how does it help solve these issues? It essentially provides the services of multiple layers of the stack, as shown above, in a single infrastructure layer, without application developers needing to integrate anything or modify their code to use those services. Not only does a service mesh make communication between services reliable and fast, it also provides granular traffic management, failure recovery, security (encryption, authorization, and authentication), and observability (e.g. tracing, logging, monitoring). All of this is abstracted from developers using an architecture in which all communication between services happens through a sidecar proxy that sits alongside each service—hence creating a service mesh. These sidecars are managed and configured by a centralized control plane for traffic routing and policy enforcement. Even though running as many additional sidecar containers as there are application containers has been a point of debate, the benefits and capabilities of a service mesh seem to outweigh the operational overhead.
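To make this concrete, granular traffic management in a sidecar-based mesh is typically expressed as declarative configuration handed to the control plane, which then pushes the resulting routing rules to every sidecar—no application code changes required. A minimal Istio-style sketch is shown below; the service name `reviews` and the `v1`/`v2` subsets are hypothetical placeholders for illustration:

```yaml
# Hypothetical Istio VirtualService: the control plane configures the
# sidecars to send 90% of traffic to subset v1 and 10% to subset v2
# of a "reviews" service (names are illustrative, not from a real app).
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Because the weights live in configuration rather than in application code, shifting traffic for a canary release becomes an operational change applied through the control plane instead of a redeploy.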
For the rest of this blog series, I would like to deep-dive into how a service mesh is implemented, taking you through the journey using Istio's reference architecture, as it is one of the most widely used and best-known service mesh solutions available today. But does Istio solve everything, and is it, in its current form, complete in terms of handling the important use cases that exist in the microservices world today? We will dig into all of that in the coming parts of this series. Stay tuned!