DevOps is challenging. Some of its complexity comes from needing a platform that spans both SaaS and managed environments, runs on top of several cloud providers, and potentially on-premises as well. Is there a platform that connects applications and systems while supporting enterprises as they accelerate their digital transformation efforts? Done the traditional way, you have all the ingredients for an operations nightmare, unless you tackle the challenges with a cloud-native approach from day one.
Let’s explore three ways to tackle the operations complexity of DevOps.
Why Use Kubernetes?
Kubernetes is an open source system for automating the deployment, scaling, and management of containerized applications. If a main goal is to isolate every single integration that runs, it is an architectural component that gives you a head start.
A well-designed application based on microservices and running on containers can benefit heavily from technology such as Kubernetes in the following ways:
- Resilience & Fault Tolerance: These are key ingredients for today’s highly available, always-on systems. With the help of liveness and readiness probes, we can set up an environment where any misbehaving container is automatically recycled and a new instance made ready to use.
- Multizone Cluster: With a multizone cluster, it is possible to apply otherwise hard-to-achieve fault-tolerant architectures, where a container running in one zone is automatically rescheduled to another zone on failure. Couple that with GCP’s ability to replicate volumes across zones, and you can deliver a truly fault-tolerant architecture.
- Millisecond Recovery: A restarted container can be ready again in milliseconds, keeping any interruption to a minimum.
- Microservice Isolation: Misbehaving containers won’t affect others, as limits on CPU and memory usage are enforced.
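The probes and enforced limits above can be sketched in a single Pod spec. This is a minimal illustration, not a production manifest: the service name, image, ports, endpoint paths, and thresholds are all placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-service            # hypothetical microservice
spec:
  containers:
    - name: orders
      image: example.com/orders:1.0   # placeholder image
      ports:
        - containerPort: 8080
      # Recycle the container if it stops responding.
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 5
      # Only route traffic once the container reports ready.
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5
      # Enforced limits keep a misbehaving container from starving others.
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```

With this in place, a failing liveness probe triggers an automatic restart, the readiness probe gates traffic during recovery, and the resource limits cap the blast radius of a runaway container.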
Deployment as Code
GitOps is our mantra. GitOps is a way of implementing continuous deployment for cloud-native applications. It focuses on a developer-centric experience when operating infrastructure, using tools developers are already familiar with: Git and their existing continuous deployment tooling.
The technique here is to follow the regular Git Flow methodology and have automated “operators” apply changes to a Kubernetes cluster. These operators are regular containers running on a Kubernetes cluster that constantly monitor Git source repositories for changes. Once a change is detected, the operators automatically trigger an update.
For automated installations, use Helm as the deployment engine. When changes are committed to the Helm configuration in Git, the operators automatically apply those changes.
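One way to wire this up (an assumption for illustration, since the text doesn’t name a specific operator) is with Flux: a `GitRepository` resource tells the operator which repo to watch, and a `HelmRelease` tells it which Helm chart in that repo to keep applied. Repository URL, chart path, and names are placeholders.

```yaml
# Sketch of a GitOps setup, assuming Flux as the in-cluster operator.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m                 # how often to poll Git for changes
  url: https://example.com/org/app-config.git   # placeholder repo
  ref:
    branch: main
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: orders-service
  namespace: flux-system
spec:
  interval: 5m
  chart:
    spec:
      chart: ./charts/orders   # Helm chart path inside the repo
      sourceRef:
        kind: GitRepository
        name: app-config
```

Once a commit lands on `main`, the operator notices it on the next poll and upgrades the Helm release, with no manual `helm upgrade` step needed.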
This approach essentially inverts the classical imperative deployment model, where actions are performed against the environment, into a declarative one, where the desired state of the environment is defined by a set of rules and kept in sync by these operators.
A high level of governance can also be achieved across all clusters from a single source of truth. If a manual change is introduced in any of the environments, the operator detects it and reverts the environment back to the defined state.
Normalize Hybrid Environments
Kubernetes is powerful, but that power comes at a price. Maintaining different versions and distributions is very challenging, especially across different cloud providers and on-premises environments. Because of this, you need a technology that abstracts that complexity away and normalizes operations.
Consider Google Anthos for this: a full Kubernetes distribution based on Google Kubernetes Engine, available to Google Cloud Platform customers. Over the years, Google refined its Kubernetes distribution and eventually released it as a product on its cloud platform.
Main advantages of using a technology such as Anthos include:
- Standard Kubernetes installation for on-premises, VMware-based environments.
- Connectors for Kubernetes Engines in AWS and Azure cloud providers.
- Built-in configuration management capabilities allowing us to declare our clusters and keep them in sync.
- Remote-controlled Kubernetes environments from a single admin portal so all cluster resources are easily accessible.
- Monitoring metrics consolidated in Google’s Stackdriver.
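The configuration-management point above can be sketched with Anthos Config Management, which syncs clusters from a Git repository much like the GitOps operators described earlier. This is a minimal sketch; the repository URL, branch, and policy directory are placeholders.

```yaml
# Sketch of an Anthos Config Management resource that keeps a cluster
# in sync with a declared configuration held in Git.
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    syncRepo: https://example.com/org/cluster-config.git  # placeholder
    syncBranch: main
    policyDir: "clusters/prod"   # directory holding this cluster's config
    secretType: none             # assumes a public/unauthenticated repo
```

Applying the same declared configuration to every registered cluster is what makes the “single source of truth” governance model workable across hybrid environments.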
These three approaches go a long way toward achieving greater scalability, governance, and control of hybrid environments, addressing the operational complexity of DevOps. More than techniques, they represent a new mindset for application design and operations. I truly believe they can help companies deliver better software while recognizing the complexity and challenges of dealing with enterprise applications.