Best of 2021 – The Next Evolutions of DevOps

As we close out 2021, we at DevOps.com wanted to highlight the most popular articles of the year. Following is the seventh in our series of the Best of 2021.

As society continues to recover from the global COVID-19 pandemic and some semblance of normal starts to return, technology continues to move forward. The pandemic certainly changed what normal means, and people, processes and technologies have all adapted and evolved to meet those challenges. The narrative that the pandemic accelerated your organization’s digital transformation rings true for many practitioners. With 2022 just around the corner, DevOps is on the cusp of maturing previously bleeding-edge paradigms and sharpening its focus on engineering efficiency. Here are five DevOps evolutions you should keep an eye on going forward.

Engineering Efficiency is Front and Center

Engineering efficiency is a particularly broad umbrella term. At the foundational level, engineering efficiency is about making someone (for example, an engineer) more productive. A multitude of disciplines intersect here, from organizational design to engineering/developer experience. Engineering efficiency is increasingly critical for organizations today. The pandemic had two major impacts on the technology sector in terms of resources and staffing. The first, at the start of the pandemic, was the unprecedented and unpredictable availability of people; physical locations were shut down because of the medical severity of the pandemic and the “getting hit by a bus” factor was playing out in real life. Second, during the recovery period, the Great Resignation began, and with it, an increasingly intense fight for talent. Both have meant that resources, especially fully ramped resources, are scarce.

The Spotify model of tribes/guilds of engineering resources is a promising evolution; folks can move around frequently, which reduces toil and allows engineers not only to ramp up more quickly but also to stay more engaged, improving retention. The industry continues to march toward standardization in areas that were typically bespoke, such as the release process, and I think we’ll see that continue to evolve, as well.

Git is Ubiquitous

Leveraging a source code management (SCM) solution used to be reserved for software engineers. But as the proliferation of “something-as-code” touches multiple levels of the technology stack, from networking and storage to, ultimately, the development pipeline itself, the march of operations-focused engineers adopting the traits of software engineers continues. Iterating on your infrastructure stack with more ephemeral/disposable infrastructure is becoming the norm. Saving and versioning the multitude of “something-as-code” configurations in source control is a natural area of evolution.

Building off of source control are the package managers—your Docker registries, Helm chart repositories, etc.—for more deployable artifacts; that said, getting those artifacts to production is a process. Because Git is becoming ubiquitous across the technology landscape, the adoption of GitOps continues. Since Weaveworks wrote the seminal piece defining the GitOps paradigm in 2017, GitOps has come to be seen, five years later, as a viable paradigm for many organizations, especially those starting with greenfield initiatives.
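At the heart of the GitOps paradigm is a reconciliation loop: the desired state lives in Git, and an operator continuously converges the running system toward it. Here is a minimal, hypothetical sketch of that loop in Python, with in-memory dictionaries standing in for the Git repository and the cluster (a real operator such as Flux or Argo CD does this against the Kubernetes API):

```python
# Minimal sketch of a GitOps-style reconciliation loop.
# The dictionaries stand in for the Git repo (desired state)
# and the running cluster (actual state).

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to converge actual toward desired."""
    actions = {}
    for name, spec in desired.items():
        if name not in actual:
            actions[name] = ("create", spec)   # in Git, not in cluster
        elif actual[name] != spec:
            actions[name] = ("update", spec)   # drifted from Git
    for name in actual:
        if name not in desired:
            actions[name] = ("delete", None)   # not in Git: prune it
    return actions

desired_state = {"web": {"image": "web:1.2", "replicas": 3}}
actual_state = {"web": {"image": "web:1.1", "replicas": 3},
                "legacy": {"image": "legacy:0.9", "replicas": 1}}

print(reconcile(desired_state, actual_state))
# web has drifted and needs an update; legacy is not in Git, so it is pruned
```

The key property is that the loop is declarative and repeatable: running it again after convergence produces no actions, and any manual drift in the cluster is detected and corrected on the next pass.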

Kubernetes is No Longer Bleeding Edge

In 2022, Kubernetes will celebrate the eighth birthday of its first commits on GitHub. Taking a jog down technology memory lane, eight years before Kubernetes (in 2006), VMware was still a private company and it would be a few years before vSphere was even released (remember how it felt to run a workload on a virtual machine in 2014?). The Kubernetes ecosystem still moves quickly, but placing workloads on Kubernetes is not a novel concept anymore. As application infrastructure and architecture have adopted the Kubernetes way (in other words, being idempotent and ephemeral), there is maturity around running a suitable workload on Kubernetes.  

Kubernetes by design is highly pluggable. If you do not like the opinion or implementation of something inside Kubernetes, you can replace it. Don’t like how Kubernetes handles ingress traffic? Choose from one of the many ingress controllers available today. That is just one example of dozens of pluggable areas. Because of this, Kubernetes is best viewed as a dynamic resource rather than a static one. Using Kubernetes takes trial and error, and making your cluster robust, performant and reliable is a journey that requires ongoing iteration.

Authors Can Be the Enforcers

There is no question that in a modern software stack, we are inundated with data. The metrics soup and the continuing exploration of observability (metrics, traces, logs) give us a lot to work with. Making decisions, even automated ones, based on that data continues to be critical. With the rise of site reliability engineering (SRE) practices, reacting to or anticipating spikes in usage, failures and even security-related events is par for the course for modern teams.

These decisions can be authored into policies that specific platforms understand, such as autoscaling rules for a cloud vendor or something like Open Policy Agent in the Kubernetes ecosystem. With so many items shifting left toward the development team, finding the right resource, skill set or institutional knowledge to author these rules can involve the input of several teams. Because of the rise of “something-as-code,” regardless of whether the author of these rules is a software engineer, DevOps engineer, platform engineer or anyone in between, that author has the ability to enforce them.
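To make the idea concrete, here is a hypothetical policy-as-code sketch in Python. The specific rules (no root containers, required resource limits, an internal registry prefix) are illustrative assumptions; in practice, a tool like Open Policy Agent would express these as Rego policies evaluated by an admission controller:

```python
# Hypothetical policy-as-code check. The rules and the manifest
# shape are illustrative; real enforcement would typically live in
# Open Policy Agent (Rego) behind a Kubernetes admission controller.

def check_workload(manifest: dict) -> list:
    """Return a list of policy violations for a workload manifest."""
    violations = []
    if manifest.get("run_as_root", False):
        violations.append("containers must not run as root")
    if "resource_limits" not in manifest:
        violations.append("resource limits must be set")
    if not manifest.get("image", "").startswith("registry.internal/"):
        violations.append("images must come from the internal registry")
    return violations

workload = {"image": "registry.internal/web:1.2",
            "run_as_root": False}
print(check_workload(workload))  # ['resource limits must be set']
```

Whoever authors these checks, whatever their title, is effectively the enforcer: an empty list means the workload is admitted, and a non-empty one means it is rejected before it ever reaches the cluster.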

Shifting Left … But Complexity Shifts Left, Too

The old adage is that complexity is like an abacus: You can shift complexity around, but it never really goes away. As responsibility shifts left to development teams, the associated complexity shifts with it. Modern platform engineering teams provide the infrastructure (for example, compliant Kubernetes clusters), and any workload that runs on those clusters is up to the development team that owns it. Typically, development teams then focus on features and functionality. Managing the many non-functional requirements—and even core infrastructure requirements such as networking—can be a burden; think about how your organization would handle a service mesh.

If you are a DevOps or platform engineer, making your internal customers—your development teams—successful is a great goal to work toward. Crucial to this is disseminating expertise, which can take the form of automation and education. A common practice in the DevSecOps movement is to have some sort of scanning step as part of the build or deployment process, along with documentation on how the scan is performed, what happens when something is found and so on. Gaining internal adoption is a journey, and a good developer experience built around clarity and stability is important.
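A scanning step of this kind usually boils down to a gate: findings above a severity threshold block the build, while everything else surfaces as a warning. The sketch below is a hypothetical Python gate; the severity ranks, finding format and fail/warn split are assumptions, and in a real pipeline the findings would come from a scanner such as Trivy or Grype and the result would drive a CI exit code:

```python
# Hypothetical DevSecOps scan gate. Severity ranks, the finding
# format and the threshold are illustrative assumptions; a real
# pipeline would feed in scanner output and fail the CI job.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list, fail_at: str = "high") -> tuple:
    """Split findings into blockers and warnings by severity."""
    threshold = SEVERITY_RANK[fail_at]
    blockers = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    warnings = [f for f in findings
                if SEVERITY_RANK[f["severity"]] < threshold]
    return (len(blockers) == 0, blockers, warnings)

findings = [{"id": "CVE-2021-0001", "severity": "critical"},
            {"id": "CVE-2021-0002", "severity": "low"}]
passed, blockers, warnings = gate(findings)
print(passed)  # False: the critical finding blocks the build
```

Publishing the threshold and the blocker/warning split alongside the gate is exactly the kind of disseminated expertise that builds trust: developers can see why a build failed and what to fix, rather than hitting an opaque red X.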

Evolution in 2022

Even with all of the challenges, bumps and learnings the last year has given us, technology continues to evolve, advance and lower the barriers to entry, becoming ever more inclusive. Focusing on improving engineering efficiency and reducing toil will allow for even broader participation.

Ravi Lachhman

Ravi Lachhman is the Field CTO at Shipa, a cloud native application-as-code platform. Prior to Shipa, Ravi was an Evangelism Leader / Chief Architect at Harness. Ravi has held various sales and engineering roles at AppDynamics, Mesosphere, Red Hat, and IBM helping commercial and federal clients build the next generation of distributed systems. Ravi is obsessed with Korean BBQ and will travel for food.
