As software engineering grows more complex, it is becoming increasingly apparent that the communication processes relied on to manage DevOps workflows are fundamentally flawed. Every DevOps team has experienced instances where, for example, a software package needed to build or deploy an application isn't where it's supposed to be when it's required. The outcome is additional delays that, in hindsight, were entirely avoidable.
Fortunately, organizations are starting to realize that the tools used to build and deploy software create metadata that can be tracked and harnessed to optimize workflows. TestifySec's Witness, for example, is an open source tool that generates and verifies attestations: verifiable records of the steps taken to build software, including the materials used and the commands run during a DevOps workflow. A companion open source project, Archivista, manages the storage, retrieval and retention of software build pipeline attestations and the trusted telemetry observed by Witness.
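Conceptually, an attestation of this kind records what went into a build step and what came out of it. The Python sketch below illustrates the idea in a deliberately simplified form; it is not Witness's actual format or API, and the step name, file paths and build command are hypothetical. Real tooling uses the in-toto attestation format and cryptographically signs the result.

```python
import hashlib
import json
import subprocess
import time
from pathlib import Path


def sha256_of(path):
    """Return the SHA-256 digest of a file on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def attest_build_step(step_name, command, materials, products, out_file):
    """Run a build command and record a simplified attestation of the step.

    Conceptual sketch only: it captures the command, the digests of the
    materials consumed and the digests of the products produced, without
    the signing and richer metadata a real attestation tool would add.
    """
    material_digests = {m: sha256_of(m) for m in materials}
    started = time.time()
    result = subprocess.run(command, capture_output=True, text=True)
    attestation = {
        "step": step_name,
        "command": command,
        "exit_code": result.returncode,
        "started_at": started,
        "finished_at": time.time(),
        "materials": material_digests,
        "products": {p: sha256_of(p) for p in products},
    }
    Path(out_file).write_text(json.dumps(attestation, indent=2))
    return attestation


if __name__ == "__main__":
    # Hypothetical example: attest a build step that consumes main.go
    # and produces the 'app' binary.
    attest_build_step(
        step_name="build",
        command=["go", "build", "-o", "app", "./..."],
        materials=["main.go"],
        products=["app"],
        out_file="build-attestation.json",
    )
```

Because the record is produced at the moment the step runs, the evidence of what happened is captured as a byproduct of the build itself rather than reconstructed from memory or logs afterward.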
“It’s really about being able to prove the provenance of your artifacts,” said TestifySec CEO Cole Kennedy. “It’s about communication.”
These and other tools are at the forefront of a broader effort to harness the metadata software build tools already generate in a way that not only helps ensure tasks have been completed but, ultimately, serves to make the software supply chain more secure.
“To effectively improve software supply chain security, it’s important to be intentional about observing DevOps workflows rather than hope to reconstruct data from workflows after the fact,” said Mitch Ashley, principal analyst for Techstrong Research.
In fact, as regulations requiring organizations to manage software supply chains continue to become more stringent, it’s now only a matter of time before compliance teams require software engineering teams to be able to document every interaction that occurred across the entire software development life cycle (SDLC).
The DevOps Documentation Challenge
DevOps teams today routinely employ multiple tools to create software artifacts, manage software engineering pipelines and, ultimately, deploy software. The level of DevOps maturity varies widely from one organization to another, but all of them have one thing in common: Somewhere, a project manager is trying to keep track of the interactions between those tools to ensure application delivery schedules are met. The challenge is that, historically, those project managers have been forced to rely on manual data entry to capture those interactions in a way that can be used to verify a task was completed. In an ideal world, the tools themselves would automatically share the required data with an application that makes it easy to determine which tasks have actually been completed.
In the absence of that level of automation, it should come as no surprise that projects are delayed simply because one critical task or another was not completed in time, leaving the next phase of the software development process unable to begin on schedule. Inevitably, for want of a proverbial nail, the battle is lost as delays that could have been avoided have a cascading impact throughout the software development life cycle.
Adding insult to injury, no one learns from the experience because the documentation describing what occurred is either sparse or simply nonexistent. The truth is, software engineering teams are often at a loss when it comes time to explain to business and IT leaders what went wrong, because there is no way to determine beyond a shadow of a doubt what occurred, and precisely when, during the software development process.
The Compliance Challenge
This lack of transparency into software development workflows has been an open secret for longer than most IT leaders care to admit. However, as software supply chain security concerns continue to mount, more organizations are being required to document the software engineering workflows that were relied on to build their applications. In the event of an audit, any inability to provide that documentation will only increase the size of the fines that might be levied.
The more complex the applications being deployed become, the more probable it is that a critical event on which an audit hinges will not have been captured by manual processes that, by definition, are prone to human error.
Unfortunately, as most IT leaders already know well, when it comes to compliance, there’s not a lot of forgiveness for human frailty.
Automation
As is almost always the case with DevOps workflows, the best path forward is increased automation. One of the primary reasons most of these workflows are not well documented is that humans in general, and application developers in particular, are not overly fond of data entry. No one wants to spend time keying in data they know already resides in another application. In addition to wasting time, there is a high probability that mistakes will be made as data is manually copied from one application to another.
Validating the events that occur is how organizations manage risk. Otherwise, business and IT leaders are left hoping that all the tasks required to deliver an application on time were actually done. Hope is not a substitute for a platform that automatically captures, analyzes and validates each step that occurred, rather than simply presenting a series of logs that don't provide nearly enough meaningful context.
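To make the contrast with raw logs concrete, validation means re-checking recorded evidence rather than merely reading it. The sketch below continues the hypothetical attestation format from the earlier example and shows how a deployment gate might reject an artifact whose digest no longer matches what the build step recorded. It is illustrative only, not any specific vendor's verification logic, which would also verify signatures and policy.

```python
import hashlib
import json
from pathlib import Path


def verify_products(attestation_file):
    """Check that every product recorded in an attestation still matches
    the file currently on disk, returning a list of mismatches.

    Conceptual sketch only; production systems additionally verify
    signatures and policy (who ran the step, on which builder, and so on).
    """
    attestation = json.loads(Path(attestation_file).read_text())
    mismatches = []
    for path, expected_digest in attestation["products"].items():
        actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if actual != expected_digest:
            mismatches.append((path, expected_digest, actual))
    return mismatches


if __name__ == "__main__":
    problems = verify_products("build-attestation.json")
    if problems:
        for path, expected, actual in problems:
            print(f"MISMATCH {path}: expected {expected}, got {actual}")
        raise SystemExit(1)
    print("All recorded products match; artifact provenance intact.")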
Summary
Slowly but surely, the processes organizations rely on to manage software are becoming more advanced. Capabilities that only a few short years ago would have seemed far too difficult to provide are now readily accessible. Capturing metadata in a way that enables organizations to automate workflows at unprecedented scale is becoming commonplace.
The challenge, and the opportunity, now lies in finding the simplest possible means of harnessing that metadata at a time when application environments are becoming too complex to manage any other way.