DevSecOps

Automation: The Human-Free Zone Approach to Software

Humans hate repetitive manual tasks. We suck at them: we make mistakes, lose concentration, get bored. Throughout history, humans have instinctively developed automation to keep us away from this hazard-prone, tedious work.

In software, we use configuration management, deployment tools and the lean/agile processes born on factory floors, along with the automation principles established there, to automate the software factory. Automation reduces software errors and increases reliability and productivity by allowing the same work to scale to hundreds and even thousands of systems.

As we’ve written scripts and functions to help ourselves, we’ve had to implement access control, least privilege, the two-person rule and other security principles to guard against the human threat and to prevent catastrophic loss through a malicious actor, leaked credentials or another type of compromise. This problem gets more complex and difficult the more automation is involved. But what if we harnessed the power of automation to solve our security risks, removing the human threat mostly or entirely from production by enforcing a single path to production via CI/CD pipelines, building and deploying signed artifacts and hardening systems, including the explicit removal of human/remote access?

With this, we can validate the chain of custody on all changes to the system while also proving that outside actors cannot make unauthorized changes. This is what we’re looking for when we talk about DevSecOps.

Automation to the Rescue

Here are the processes humans should be fully or at least partially removed from to optimize software effectiveness, security and efficiency:

  • Code and pipeline.
  • Artifact deployment.
  • Automated access.

Code and Pipeline (partial human interaction)

Let’s start on the left, with our code and pipeline. The rules here are simple and familiar. We start with source code management and the two-person rule: no one individual can write and commit code without approval from someone else. Combined with secured production access, this ensures that access control happens solely within the pipeline. We can add a full team review along with static code analysis, but the core rule stands: no single human can be responsible for making a change on their own.
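Stripped of any particular source control platform, the gate reduces to a simple check. This is a minimal sketch (the function and field names are hypothetical, not any real repository API): a change merges only when at least one person other than its author has approved it.

```python
def merge_allowed(author: str, approvals: set[str]) -> bool:
    """Two-person rule: a change merges only when someone
    other than its author has approved it."""
    return len(approvals - {author}) >= 1

# A change approved only by its own author is rejected.
assert not merge_allowed("alice", {"alice"})
# A second set of eyes satisfies the rule.
assert merge_allowed("alice", {"alice", "bob"})
```

In practice this check lives in the source code management system's branch protection settings rather than in code you write yourself, but the rule being enforced is the same.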

The code repository is watched by our pipeline with the task of testing, building and verifying a versioned artifact. This artifact is everything in our code at a moment in time. When we think about rollback scenarios, we’re thinking about deploying the previous artifact. This artifact is checksummed so we can attest to its integrity at any point, and we can validate the checksum by pulling the code and building it ourselves.
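That integrity attestation can be sketched in a few lines, assuming a SHA-256 checksum recorded by the pipeline at build time (the function names here are illustrative):

```python
import hashlib

def artifact_checksum(data: bytes) -> str:
    """SHA-256 digest the pipeline records when it builds the artifact."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, recorded: str) -> bool:
    """Attest integrity at any point: the artifact's current digest
    must match the one recorded at build time."""
    return artifact_checksum(data) == recorded

build_output = b"application artifact v1.2.3"
recorded = artifact_checksum(build_output)

assert verify_artifact(build_output, recorded)             # untouched artifact
assert not verify_artifact(build_output + b"x", recorded)  # tampered artifact
```

Any mutation of the artifact, however small, changes the digest, which is what lets us detect tampering anywhere between the build and the deploy.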

Artifact Deployment

We need an artifact not just for our application code from Development, but also for what the infrastructure looks like from Operations. This is part of the promise of DevSecOps: Ops writes code that gets checked into repositories just like Development does. This allows us to pair our application artifact with our operations artifact. Together, these artifacts are what we deploy: the system at a moment in time and the application at a moment in time. At this point, we’ve established that at least two humans have looked at the source code and that, once it was committed, our pipeline picked up the changes, performed static code analysis on them and created signed artifacts ready to be deployed to a test environment. Now that we have our artifacts, we need to deploy them.
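One way to pair the two artifacts is a release manifest that pins both checksums to a single version. This is a sketch under assumed conventions (the manifest fields are hypothetical, not a standard format):

```python
import hashlib
import json

def release_manifest(app_artifact: bytes, ops_artifact: bytes, version: str) -> str:
    """Pair the application artifact with the operations artifact:
    one manifest describes the whole system at a moment in time."""
    return json.dumps({
        "version": version,
        "app_sha256": hashlib.sha256(app_artifact).hexdigest(),
        "ops_sha256": hashlib.sha256(ops_artifact).hexdigest(),
    }, sort_keys=True)

manifest = release_manifest(b"app build output", b"infrastructure code", "1.4.0")
```

Signing this manifest (rather than each artifact separately) means a rollback or redeploy always moves the application and the infrastructure definition together, never one without the other.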

Again, the goal is to not use humans, so after our artifacts are built we must test deployment. We ensure security by having the pipeline push the artifacts to a secured artifact store and having our systems pull from there as we approve environments to receive the update. This accomplishes two important things: First, because the pipeline does the push, we maintain the chain of custody tracing back to source control. Second, we limit which environments are updated by having those systems subscribe to the appropriate artifact stores. The only work humans do here is performing testing and deciding when to promote; the automation takes care of the actual promotion. Finally, we have the ability to run our artifacts in production.
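The push-then-pull flow above can be sketched as a toy model (the store and function names are illustrative, standing in for a real artifact repository): the pipeline pushes to the test store, a human decision triggers promotion, and each environment only ever pulls from the store it subscribes to.

```python
# Toy artifact store keyed by environment: the pipeline pushes,
# systems only ever pull from the store they subscribe to.
store: dict[str, list[str]] = {"test": [], "prod": []}

def pipeline_push(version: str) -> None:
    """The pipeline pushes every built artifact to the test store first."""
    store["test"].append(version)

def promote(version: str) -> None:
    """Humans decide when to promote; automation performs the promotion."""
    assert version in store["test"], "only tested artifacts can be promoted"
    store["prod"].append(version)

def system_pull(environment: str) -> str:
    """Systems pull the newest approved artifact; nothing is pushed to them."""
    return store[environment][-1]

pipeline_push("1.4.0")
promote("1.4.0")
```

Note that `promote` refuses any version that never passed through the test store, which is the chain-of-custody property in miniature: there is exactly one path into production.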

Automated Access

The idea of a continuous delivery pipeline isn’t new and is practiced by many teams today. What we’re going to add to this is a production system that does not have inbound access.

Because in this design systems pull from the artifact store, the only firewall rule needed is an outbound rule from production to the artifact store. This means you can disable remote management entirely, trusting fully in the chain of custody established in the pipeline. That trust, along with disabled remote management, establishes the human-free pipeline. We have shifted the focus of user access into source code repositories and the management of the CI/CD pipeline. We have visibility into every stage of the process and have eliminated the need for users to hold login credentials to production systems. Credential management also becomes a function of pipeline management, drastically reducing its complexity.

This is what it means to practice DevSecOps: understanding how automation enables the security team to reduce the complexity of our security model and moves security out of the role of blocker and into enabling the business.

Galen Emery

Galen Emery is a CISSP-certified security professional focusing on audit, system hardening and security's role in DevSecOps pipelines and teams. He has an extensive background in building hardened systems and implementing efficient and secure development methodologies to ensure delivery of a continuous and evolving security posture. At Chef, Galen works as the Lead Compliance and Security Architect, working with the US Government, Dept of Defense, Fortune 500 companies and associated organizations to achieve secure and compliant infrastructure by design through DevSecOps practices.
