The rise of cloud computing and containerized microservices has drastically changed the way applications are designed and deployed. Unfortunately, these changes have also made managing and orchestrating applications more complex. Service meshes were developed to ease this complexity.
In this article, you’ll learn what microservices are and how meshes can enhance your cloud-native applications. You’ll also learn some key information for successfully adopting a service mesh for your application deployments.
What Are Service Meshes?
Service meshes are specialized communication layers you can use to manage communications between your services. You run a service mesh on top of your request/response layer and any Container Networking Interface (CNI) you may be using. The service mesh adds functionality but is not a replacement for your CNI or your orchestration platform.
Service meshes were designed to improve the orchestration of containerized microservices. You can use a mesh with any container system or orchestration tool, either by building a custom mesh or by adopting a pre-designed one. Pre-designed meshes tend to target Kubernetes, as it is the most popular orchestration tool.
How Service Meshes Can Enhance Cloud-Native Apps
There are three main aspects of your cloud-native applications you can enhance using service meshes.
Observability
Without a service mesh, you have limited visibility into the status of your various services and intercommunications. This limitation makes debugging or ensuring availability difficult or impossible.
With a mesh, you can connect the various distributed microservices that compose your applications. It can centralize communication between these services and enable you to easily route logging and metrics data to a unified source.
You can also use a mesh to implement distributed tracing, which allows you to track exactly where and when issues arise. Tracing enables you to evaluate requests and responses at the level of individual service calls, monitor protocol inconsistencies, and analyze communication performance.
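As a concrete sketch of how this is wired up, the snippet below shows one way tracing might be enabled in an Istio-based mesh using its Telemetry API. The resource name, namespace, and 10% sampling rate are illustrative assumptions, not prescriptions from this article:

```shell
# Hypothetical example: enable distributed tracing for an Istio mesh,
# sampling 10% of requests. Field names follow Istio's Telemetry API;
# adjust for your mesh and version.
cat > mesh-tracing.yaml <<'EOF'
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system   # applies mesh-wide when placed in the root namespace
spec:
  tracing:
  - randomSamplingPercentage: 10.0
EOF
# Apply with: kubectl apply -f mesh-tracing.yaml
```

Note that most meshes still require your application code to forward trace context headers on outbound calls so that individual spans can be stitched into one trace.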
Traffic Control
Managing traffic in cloud-native applications can be easier thanks to the elasticity and scalability of the cloud. However, these capabilities alone typically don’t allow granular control.
With a service mesh, you can easily create latency-aware load balancing policies. You can also control traffic request patterns like retries, timeouts, or rate-limiting. This fine-grained traffic control is useful for enforcing client traffic limits and moderating resource use.
Service meshes can also enable you to more easily perform A/B testing and shift traffic between release versions. You can define traffic policies that split traffic between versions or gradually move traffic from one version to another.
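In an Istio-based mesh, for instance, these policies are expressed as declarative routing rules. The sketch below is illustrative only (the service name, subsets, and numbers are assumptions): it retries failed requests, applies a timeout, and splits traffic 90/10 between two release versions:

```shell
# Hypothetical example: an Istio VirtualService that retries failed
# requests and canary-routes 10% of traffic to v2 of a "reviews" service.
# The v1/v2 subsets referenced here would be defined in a DestinationRule.
cat > reviews-routing.yaml <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - timeout: 10s            # overall per-request deadline
    retries:
      attempts: 3           # retry a failed call up to 3 times
      perTryTimeout: 2s     # deadline for each individual attempt
    route:
    - destination:
        host: reviews
        subset: v1
      weight: 90            # 90% of traffic stays on v1
    - destination:
        host: reviews
        subset: v2
      weight: 10            # 10% canaries to v2
EOF
# Apply with: kubectl apply -f reviews-routing.yaml
```

Gradually shifting traffic from one version to the other is then just a matter of updating the weights and re-applying the resource.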
Security
Managing security in containers can be challenging. Containers provide service isolation and granular access control, but dividing an application into microservices creates a larger attack surface.
A service mesh can reduce these risks by enabling zero-trust security practices. It does this through stronger authentication of requests, consistent enforcement of security policies, and encryption of inter-service traffic. A service mesh can also handle certificate management, including distribution, rotation, and revocation.
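As an illustration of encrypted inter-service traffic, Istio enforces mutual TLS through a PeerAuthentication policy. The namespace below is a placeholder; in STRICT mode, workloads in that namespace reject plaintext traffic:

```shell
# Hypothetical example: require mutual TLS for all workloads in the
# "production" namespace of an Istio mesh. Plaintext traffic is rejected.
cat > mtls-strict.yaml <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT
EOF
# Apply with: kubectl apply -f mtls-strict.yaml
```

Because the mesh terminates and originates the TLS connections in its proxies, services get encrypted traffic without any changes to application code.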
The combination of robust security controls with increased observability enables you to more quickly and accurately respond to security threats. You can further boost this security by taking advantage of traffic controls during resilience or penetration testing.
Including Service Meshes in Your Deployments
Adding a service mesh to your deployments takes a little planning. You need to understand what architecture options are available to you and how you can add a service mesh to your app deployments.
Architecture Options
There are three main architecture options for deploying a service mesh.
- Library—based on a library that is imported into your application code and runs from within your app. This method enables you to more easily allocate resources, because all work is contained within your host service. It works well for in-house services based on a single programming language.
- Node agent—based on a node agent or daemon that serves all containers on a host node. You must coordinate agents with your infrastructure processes to use this method, making it more complex to deploy. However, it is easy to scale since it enables resource sharing. The node agent method works well for both in-house and third-party applications.
- Sidecar proxy—based on sidecar proxies attached to your containers. Sidecars handle traffic into and out of the host container. This method does not require infrastructure coordination and enables easy scaling and zero-trust deployments. It is the preferred architecture for most implementations.
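With Istio, for example, the sidecar pattern is typically enabled by labeling a namespace so the mesh automatically attaches an Envoy proxy to every pod created there. The namespace name below is a placeholder:

```shell
# Hypothetical example: enable automatic Envoy sidecar injection for
# every pod in the "demo" namespace of an Istio-based cluster.
cat > demo-namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio-injection: enabled   # Istio's injection webhook watches this label
EOF
# Apply with: kubectl apply -f demo-namespace.yaml
```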
Service Mesh Integration
When adding a service mesh to your container deployment you need to evaluate whether a custom or pre-built mesh is more effective. Here are a few integration options to consider:
- Create custom builds using proxies like Envoy, NGINX, or Traefik.
- Use pre-built services such as Istio or Linkerd, which are based on these proxies.
- Use proprietary cloud services, such as AWS App Mesh and Azure Service Fabric Mesh.
The mesh that works best for you depends primarily on how your applications are hosted. If you are using a public cloud provider, you are often better off with that provider’s offering, since you know the infrastructure supports it. If you are using a self-hosted Linux environment, you can choose any configuration.
Regardless of the mesh you choose, you should be prepared to perform at least some manual configuration. You need to set up security controls, traffic flow, and telemetry. If you are using Kubernetes, this is typically done with kubectl.
Conclusion
Hopefully, this article helped you understand what service meshes are and how this technology can be used to enhance your cloud-native applications.
Service meshes are still actively evolving; new use cases and benefits continue to emerge as the technology gains adoption and popularity. To take advantage of these benefits, consider adopting a service mesh and see how you can customize it to your needs.
Author Bio
Gilad David Maayan is a technology writer who has worked with over 150 technology companies including SAP, Samsung NEXT, NetApp and Imperva, producing technical and thought leadership content that elucidates technical solutions for developers and IT leadership.