Containerized microservices are taking over the software world. These services provide developers a way to quickly create and deploy cloud-native applications. However, containers aren’t perfect. Orchestrating and managing container deployments can be a serious challenge.
To help ease this challenge, service meshes were developed. In this article, you’ll learn about service mesh components and architecture options. You’ll also learn how service meshes can improve your container deployments.
What Is a Service Mesh?
A service mesh is a dedicated communication layer that sits on top of your request/response layer. It works alongside a Container Network Interface (CNI), which connects your services to the network. A service mesh sits on top of your CNI and adds functionality such as service discovery and security. A service mesh is not a replacement for a CNI or for a container orchestration platform.
Service meshes are designed for use with containerized microservices. You can use this technology with any container runtime or container orchestration tool. However, the most active projects favor developing service meshes for Kubernetes.
Service Mesh Architectures
When deploying a service mesh, there are three architectures you can choose from.
Library
The library architecture employs a library that is imported into an application’s code and runs from inside the application. Because the mesh logic runs in-process, there is no call out to a remote proxy, which removes a network hop that would otherwise need to be secured.

Libraries also make it easier to allocate resources, since all of the work happens inside the microservice running the library. A library architecture works well for in-house applications written in a single language. However, it can be difficult or impossible to add the library to third-party applications.
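To make the library approach concrete, here is a minimal sketch of the kind of resilience logic a mesh library embeds directly in the application process. The decorator, function names, and failure simulation are all hypothetical, for illustration only; real mesh libraries bundle far more (discovery, TLS, metrics).

```python
import time

def retry(attempts=3, backoff=0.1):
    """Minimal in-process retry logic, the style of feature a
    library-architecture mesh adds inside the application itself."""
    def wrap(fn):
        def inner(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError:
                    if i == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(backoff * (2 ** i))  # exponential backoff
        return inner
    return wrap

calls = {"n": 0}

@retry(attempts=3, backoff=0)
def flaky_call():
    # Simulated downstream service that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "ok"

print(flaky_call())  # prints "ok" after two in-process retries
```

Because the retry loop lives inside the service, no separate agent or proxy process is needed, but every service (and every language) must carry its own copy of this logic.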
Node Agent
A node agent architecture uses a node agent or daemon that serves all containers on a particular node. A node agent architecture is language-agnostic since it doesn’t run from inside your microservice. It can be used with in-house and third-party applications. However, node agents must be coordinated with infrastructure processes, making this architecture more complex to deploy.
Node agents require less memory than sidecars since information can be shared across a node. Resources are shared dynamically among the node’s workloads, making this architecture easier to scale. The downside of sharing, however, is that it can be abused, with one service hogging all available memory.
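On Kubernetes, a node agent is typically deployed as a DaemonSet, which schedules exactly one agent pod on every node. The sketch below illustrates the pattern; the names, namespace, image, and port are hypothetical, not from any particular mesh.

```yaml
# Hypothetical node-agent deployment: a DaemonSet runs one
# mesh-agent pod per node, shared by all workloads on that node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mesh-node-agent
  namespace: mesh-system
spec:
  selector:
    matchLabels:
      app: mesh-node-agent
  template:
    metadata:
      labels:
        app: mesh-node-agent
    spec:
      containers:
        - name: agent
          image: example.com/mesh-agent:1.0   # hypothetical agent image
          ports:
            - containerPort: 15001            # local proxy port used by pods on this node
```

This is where the coordination cost shows up: every workload on the node has to be configured to send its traffic through the shared agent.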
Sidecar Proxy
A sidecar proxy architecture uses sidecars that run alongside your containers. The sidecar handles all traffic into and out of the container it is attached to. Sidecars do not require coordination with your infrastructure since one proxy is assigned to each container. This lack of coordination makes sidecars easier to set up than node agents.
Each sidecar serves only one pod and is not influenced by other pods. This isolation makes the architecture a better fit for zero-trust deployments, since each container is accountable for its own actions. Sidecars can be added to containers at any time, enabling you to easily scale processes.
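In Kubernetes terms, a sidecar is simply a second container in the same pod; because containers in a pod share a network namespace, the proxy can intercept all of the application’s traffic. The pod name and images below are hypothetical (in practice, meshes such as Istio inject the proxy container automatically rather than having you write it by hand).

```yaml
# Hypothetical sidecar pattern: the proxy container shares the pod's
# network namespace with the app, so it sees all inbound/outbound traffic.
apiVersion: v1
kind: Pod
metadata:
  name: orders
spec:
  containers:
    - name: app
      image: example.com/orders:1.0       # your application container
    - name: mesh-proxy                    # the sidecar proxy
      image: example.com/mesh-proxy:1.0   # hypothetical proxy image
      ports:
        - containerPort: 15001
```

Since each pod carries its own proxy, no node-level coordination is required, which is the setup advantage described above.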
Use Cases For Service Meshes
Service meshes are an emerging technology, with new use cases being actively developed. The three use cases introduced here are just the ones that are currently most common.
Improving Security
Containers are isolated environments and are generally considered more secure than traditional applications. However, containers still have vulnerabilities, such as the possible abuse of root access.
Service meshes can help address security concerns by providing:
- Zero-trust security—any communication, whether it originates inside or outside the cluster, requires verification. Verification prevents compromised containers from interfering with other microservices or your broader system. It restricts a container’s actions and communications to that container alone.
- Communication tracing—enables you to collect communications information and isolate errors or issues. If a container is communicating in an unexpected way or against protocol, you can identify it. Once identified, you can easily replace the container with an uncorrupted image or remedy the issue. Tracing enables you to pinpoint possible attacks, including where attackers came from and what they’re trying to accomplish.
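As one concrete example of zero-trust enforcement, Istio (a popular Kubernetes service mesh) lets you require mutual TLS for every workload in a namespace with a short policy; the `production` namespace here is illustrative.

```yaml
# Istio example: require mutual TLS for all workloads in a namespace.
# With STRICT mode, plaintext (unverified) traffic is rejected, so a
# compromised workload outside the mesh cannot talk to these services.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT
```

The mesh’s sidecars handle certificate issuance and rotation, so application code does not change to gain this verification.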
Improving Delivery
Containers already improve on application and service delivery by making it easier to release and patch services. Service meshes expand upon these capabilities by improving application and performance monitoring functions.
Service meshes can also help improve service delivery by enabling:
- A/B testing—provides advanced routing, enabling you to divide traffic between A and B versions. This routing makes it simpler to set up A/B testing. Additionally, tracing features enable you to gather and analyze user traffic and interactions.
- Rapid versioning—new versions can be released with zero downtime and reduced risk. Containers can be started and verified healthy before traffic is gradually shifted between versions. If issues arise, changes can be rolled back without system-wide consequences. Easier corrections enable you to relax delivery requirements with minimal risk.
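The routing behind both A/B testing and gradual version rollout can be expressed declaratively. For instance, an Istio VirtualService can split traffic between two subsets by weight; the `reviews` service and `v1`/`v2` subsets below are illustrative, and the subsets would be defined in a corresponding DestinationRule.

```yaml
# Istio example: send 90% of traffic to v1 and 10% to v2.
# Shifting the weights gradually rolls the new version out
# (or back) with zero downtime.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Rolling back is just editing the weights, which is what makes corrections low-risk.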
Improving Availability
Containers can provide improved availability, depending on your hosting configuration and resources. Service meshes can expand upon these abilities by improving load balancing, traffic management, and health monitoring functionality.
Service meshes can also help improve availability by enabling:
- Resilience testing—enables you to test how services respond to injected faults. Service meshes enable you to test your services’ fault tolerance. A better understanding of how faults affect resilience helps you identify and correct vulnerabilities. Greater service resilience leads to greater availability.
- Deployment and request shadowing—enables you to simulate anticipated traffic flow and see how services respond. Shadowing enables you to test the traffic and workload capacities of your services. With this information, you can determine whether you need to scale resources and connections. Proper scaling ensures that your traffic needs are met and prevents traffic bottlenecking and decreased availability.
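Both capabilities can be configured in a single routing rule in Istio; the example below injects a delay into a fraction of requests and mirrors live traffic to a shadow subset. The `ratings` service and the `v1-shadow` subset are hypothetical names for illustration.

```yaml
# Istio example: delay 10% of requests by 5s to test fault tolerance,
# and mirror live traffic to a shadow deployment. Mirrored requests
# are fire-and-forget: responses from the shadow are discarded, so
# users are unaffected while you observe the shadow's behavior.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
    - ratings
  http:
    - fault:
        delay:
          percentage:
            value: 10
          fixedDelay: 5s
      route:
        - destination:
            host: ratings
            subset: v1
      mirror:
        host: ratings
        subset: v1-shadow   # hypothetical shadow subset
```

Watching how callers behave under the injected delay, and how the shadow copes with real traffic volume, gives you the capacity and resilience data described above.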
Conclusion
As microservices become standard, teams are placing an increased focus on improving container performance and security. The development of service meshes is a result of this increased focus.
Hopefully, this article helped you understand what service meshes are and how this technology can help improve your container deployments. If you’re interested in seeing how a service mesh might help you, check out this tutorial explaining how to build a service mesh.