The adoption of Kubernetes as the enterprise orchestration tool of choice is growing rapidly. Consequently, the architectural considerations and operational requirements of supporting production environments have gained increasing attention. A major challenge when planning a scalable microservices architecture (MSA) orchestrated by Kubernetes is how to manage the heavy message traffic in the system.
After migrating from a monolithic architecture to microservices, or setting up a new MSA, each microservice holds its own data model and is decoupled from the rest of the system. At scale, a project can grow to hundreds of microservices or more, and in systems that generate millions of messages a day the messaging traffic can become enormous. Hence, a new, robust communication channel between the microservices needs to be established.
Using point-to-point connectivity such as REST to bridge the communication gap in Kubernetes can end up creating a complicated mesh of connections between the services, one that requires comprehensive maintenance by the development team each time service requirements change. This is where a message queue system comes in handy to take care of the messaging challenge in an MSA on Kubernetes: each service communicates with one focal point (the message broker's queue) in its own language, and the message queue system is responsible for delivering the messages to the services waiting for them.
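The decoupling described above can be sketched in-process with Python's standard queue module; the service names here are hypothetical, and a real broker would of course run as its own deployment rather than in one process:

```python
import queue

# A minimal in-process sketch of broker-style decoupling: producers and
# consumers share one focal queue instead of calling each other directly
# over point-to-point links.
broker = queue.Queue()

def order_service_publish(order):
    # The producer only knows the broker, not any consumer.
    broker.put({"type": "order.created", "payload": order})

def billing_service_poll():
    # The consumer pulls whatever message is waiting for it.
    return broker.get(timeout=1)

order_service_publish({"id": 42, "total": 9.99})
print(billing_service_poll()["type"])  # order.created
```

If the billing service is later replaced or scaled out, the order service is untouched: it still publishes to the same focal point.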
To build a well-managed messaging solution for Kubernetes, the message queue system should be native to Kubernetes, support the relevant messaging patterns, and be robust and secure. Being native to Kubernetes is a critical parameter for achieving a healthy system: it means a simple and fast deployment into the Kubernetes cluster (not outside of it), low DevOps maintenance, and good integration with the Kubernetes ecosystem for logging, monitoring and tracing.
A hybrid cloud solution enables enterprises to make use of both on-premises and public cloud services. An advantage of a hybrid cloud is the flexibility it offers, allowing workloads to alternate between the two environments when capacity and costs change, for example. Sensitive workloads and data can be hosted in the private cloud or on-premises while less critical workloads are hosted in a public cloud, and regulatory requirements for data handling and storage can be met in the private cloud. A commercial advantage of public cloud services is cost, both in terms of CapEx and OpEx, as the scalability they provide allows an organization to pay only for the resources it uses. Deploying a message queue in Kubernetes provides the technology that allows the hybrid cloud's environments to connect and interact smoothly and transparently.
There are many use cases for message queues. The message queue needs to support diverse messaging patterns, enabling the flexibility to build different use cases.
Here are some of the common use cases a message queue system should support in Kubernetes.
Pipeline
The synchronous pattern is used when messages must be processed in a coordinated way within a workflow. The pipeline use case processes messages in sequence between services: each service can be considered a stage, and the message passes through all of the stages in order. In cases where a message is lost or cannot be processed (for any reason), a dead-letter queue mechanism takes the message and processes it in a pre-defined way.
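The pipeline and dead-letter behavior can be illustrated with a small sketch; the three stage functions are invented for illustration, and a real broker would persist the dead-letter queue rather than hold it in memory:

```python
import queue

# Hypothetical three-stage pipeline: each stage is a function applied in
# sequence, and a message that raises an error is routed to a dead-letter
# queue instead of being silently lost.
dead_letter_queue = queue.Queue()

def validate(msg):
    if "user" not in msg:
        raise ValueError("missing user")
    return msg

def enrich(msg):
    return {**msg, "region": "eu-west"}

def store(msg):
    return {**msg, "stored": True}

STAGES = [validate, enrich, store]

def run_pipeline(msg):
    # Pass the message through every stage in sequence.
    for stage in STAGES:
        try:
            msg = stage(msg)
        except Exception as err:
            # The dead-letter queue captures the failed message for
            # pre-defined handling later (retry, alerting, inspection).
            dead_letter_queue.put({"message": msg, "error": str(err)})
            return None
    return msg

print(run_pipeline({"user": "ada"}))  # processed by every stage in order
run_pipeline({})                      # fails validation, lands in the DLQ
print(dead_letter_queue.qsize())      # 1
```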
Job/Task Distributed Queue
The asynchronous pattern is used by a few producers and many consumers when tasks/jobs need to be distributed between workers. In this use case there is no need to coordinate between tasks; each service pulls messages from the queue and processes them. The order in which messages are processed is not important, as each worker takes a task from the queue and processes it at its own pace.
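A minimal sketch of this worker pattern, using Python threads as stand-ins for worker services and squaring a number as a stand-in for real processing:

```python
import queue
import threading

# Task distribution sketch: several workers pull jobs from one shared queue.
# There is no coordination between tasks, and completion order is irrelevant.
tasks = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        job = tasks.get()
        if job is None:              # sentinel: no more work for this worker
            tasks.task_done()
            return
        with lock:
            results.append(job * job)  # stand-in for real processing
        tasks.task_done()

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for job in range(10):
    tasks.put(job)
for _ in workers:
    tasks.put(None)                  # one shutdown sentinel per worker
tasks.join()                         # wait until every job is processed
print(sorted(results))               # squares of 0..9, whatever the order
```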
Stream Aggregation
The asynchronous pattern is used when we need to stream data from many external sources (big data, IoT) and process it in a dedicated service such as a database, pipeline, machine-learning system or storage. This case typically aggregates many producers down to a small number of consumers. The order of the messages is not important, but the delivery guarantee is.
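The many-to-few fan-in can be sketched as follows; the sensor threads and readings are illustrative assumptions, with a single consumer standing in for the dedicated processing service:

```python
import queue
import threading

# Aggregation sketch: several producer threads (standing in for IoT sensors)
# push readings into one queue, and a single consumer drains it into a store.
# Arrival order is not guaranteed, but no reading is lost.
ingest = queue.Queue()

def sensor(sensor_id, readings):
    for value in readings:
        ingest.put((sensor_id, value))

producers = [
    threading.Thread(target=sensor, args=(i, [i * 10, i * 10 + 1]))
    for i in range(4)
]
for p in producers:
    p.start()
for p in producers:
    p.join()

store = {}
while not ingest.empty():            # the single consumer aggregates everything
    sensor_id, value = ingest.get()
    store.setdefault(sensor_id, []).append(value)

print(len(store))                    # 4: one entry per sensor
```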
Real-Time Pub/Sub
The asynchronous pattern is also suitable for cases where a small number of producers send real-time data to many consumers. A service acting as a publisher sends a message to a channel, and a set of services subscribed to that channel receive the messages.
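A minimal pub/sub sketch; the channel name and message shape are invented, and real subscribers would be separate services rather than in-process callbacks:

```python
from collections import defaultdict

# Pub/sub sketch: subscribers register callbacks on a channel, and every
# published message is fanned out to all of them.
subscribers = defaultdict(list)

def subscribe(channel, callback):
    subscribers[channel].append(callback)

def publish(channel, message):
    # Deliver a copy of the message to every subscriber of the channel.
    for callback in subscribers[channel]:
        callback(message)

received_a, received_b = [], []
subscribe("prices", received_a.append)
subscribe("prices", received_b.append)
publish("prices", {"symbol": "ACME", "px": 10.5})
print(received_a == received_b)      # True: both subscribers got the message
```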
RPC (Request/Reply)
This pattern covers connectivity between services such as edge, API, database and storage services. The message queue acts like a router; message order is not important, but timeouts and synchronization between producers and consumers are.
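The request/reply interaction with a timeout can be sketched like this; the echo responder is a stand-in for a real downstream service:

```python
import queue
import threading

# Request/reply sketch: the caller attaches a private reply queue to each
# request and waits on it with a timeout, which matters in this pattern.
requests = queue.Queue()

def responder():
    while True:
        req = requests.get()
        if req is None:
            return
        payload, reply_q = req
        reply_q.put({"echo": payload})   # stand-in for real service work

threading.Thread(target=responder, daemon=True).start()

def call(payload, timeout=1.0):
    reply_q = queue.Queue(maxsize=1)
    requests.put((payload, reply_q))
    try:
        return reply_q.get(timeout=timeout)
    except queue.Empty:                  # the responder did not answer in time
        return None

print(call("ping"))
```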
Ease of Use
One key benefit of the microservices architecture is its ease of use: deploying it saves your organization time and money by unifying development and operations workflows. The message queue system needs to be easy to use as well; its ease of use and DevOps-friendliness should minimize the need for dedicated experts while accelerating development and production cycles. At the same time, it is important to ensure that support for high-volume messaging with low latency, efficient memory usage and the fundamental messaging patterns (real-time pub/sub, request/reply and queue) is not compromised.
Gradual Migration to Kubernetes
When IT professionals evaluate the significant advantages of the Kubernetes environment, they cannot ignore the operational requirements of maintaining the revenue-generating monolith infrastructure. Legacy systems also contain the processes and data related to the ongoing business and are crucial for service continuity.
To support a gradual migration that keeps the business operating, the Kubernetes message queue must enable connectivity between the old and new environments. A bridge is required that connects the legacy messaging system, whether IBM MQ, TIBCO, MSMQ or Kafka (to mention a few popular enterprise solutions), to the Kubernetes message queue, allowing seamless, bi-directional transfer of messages between the services in the legacy monolith environment and the microservices-based deployment in the Kubernetes environment.
The bridge installed in the legacy environment "listens" to the MQ on behalf of the relevant services in the Kubernetes environment, ensuring the designated messages are transferred from the monolith to Kubernetes and vice versa. This bridging capability enables gradual migration through step-by-step replacement of components from the old environment, or through the creation of new services that can still connect to the legacy resources.
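The routing role of such a bridge can be sketched as below; the topic names, the routing table and the two in-memory queues are illustrative assumptions, not any specific product's API:

```python
import queue

# Toy bridge sketch: messages whose topics have already been migrated are
# forwarded to the Kubernetes-side queue; everything else stays on the
# legacy side, enabling step-by-step replacement of components.
legacy_mq = queue.Queue()
k8s_mq = queue.Queue()

MIGRATED_TOPICS = {"orders", "billing"}   # services already moved to Kubernetes

def bridge(topic, payload):
    # Route each incoming message to the environment that now owns its topic.
    target = k8s_mq if topic in MIGRATED_TOPICS else legacy_mq
    target.put((topic, payload))

bridge("orders", {"id": 1})     # goes to the Kubernetes-side queue
bridge("reports", {"id": 2})    # stays on the legacy side
print(k8s_mq.qsize(), legacy_mq.qsize())  # 1 1
```

As more components are migrated, topics move into the routing set and traffic shifts to the Kubernetes side without the producers changing.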
To learn more about containerized infrastructure and cloud native technologies, consider coming to KubeCon + CloudNativeCon NA, November 18-21 in San Diego.