These days, securing the software development lifecycle from end to end requires organizations to deploy, maintain, and master a cacophonous mix of tools that often do more to create discord than to bring harmony to DevSecOps processes. The reason is simple: Each tool employed to secure the software supply chain independently runs scans and generates alerts that lack context, are often redundant, or flat-out contradict one another.
Developers and the software engineers who support these applications are, of course, expected to correlate the steady streams of alerts and address vulnerabilities to meet the security thresholds required to promote a release to the next environment. The reality is that not every vulnerability flagged as high or critical needs fixing, nor is it possible for a team to address them all. On average, a development team has the bandwidth to work through about 10% of its vulnerability backlog in any given month. That makes it imperative to prioritize the backlog based on impact rather than severity if these teams are to have any real chance of improving their security posture.
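To make impact-based prioritization concrete, here is a minimal sketch, assuming hypothetical signals such as exploit availability, internet exposure and asset criticality; the field names and weights are illustrative, not a prescribed model:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    cvss_severity: float      # 0-10 severity reported by the scanner
    exploit_available: bool   # e.g., a public PoC or known exploitation exists
    internet_exposed: bool    # the affected service is reachable from outside
    asset_criticality: int    # 1 (low) to 5 (business critical), set by the organization

def impact_score(f: Finding) -> float:
    """Rank by likely business impact rather than raw severity (illustrative weights)."""
    score = f.cvss_severity
    if f.exploit_available:
        score *= 1.5
    if f.internet_exposed:
        score *= 1.3
    return score * (f.asset_criticality / 5)

def monthly_plan(backlog: list[Finding], capacity_ratio: float = 0.10) -> list[Finding]:
    """Select the slice of the backlog a team can realistically remediate this month."""
    ranked = sorted(backlog, key=impact_score, reverse=True)
    budget = max(1, int(len(backlog) * capacity_ratio))
    return ranked[:budget]
```

Ranked this way, a medium-severity flaw on an internet-facing, business-critical service can land in this month's plan ahead of a critical finding buried in an internal tool.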
The constant alert storms also lead to fatigue, and fatigued teams start overlooking genuine issues: the one that got away.
DevSecOps workflows today are little more than sections of an orchestra playing without a score, with the expectation that a methodology for consistently locking down the software supply chain will somehow magically manifest.
What’s obviously missing is an orchestration framework that triggers the security workflow in response to changes within the SDLC. Ideally, it works seamlessly with CI/CD workflows without being dependent on them, creating a symphony capable of improving the organization’s security posture.
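A minimal sketch of what such event-driven orchestration might look like, assuming a hypothetical mapping of SDLC events to checks and a placeholder dispatch function; the point is that security work is triggered by the change itself, alongside but not inside any one CI/CD job:

```python
import asyncio

# Hypothetical mapping of SDLC events to the security checks they should trigger.
EVENT_CHECKS = {
    "code.pushed":       ["sast", "secrets"],
    "image.built":       ["sca", "container_scan"],
    "infra.changed":     ["iac_scan"],
    "release.requested": ["policy_gate"],
}

async def run_check(check: str, payload: dict) -> dict:
    # Placeholder: in practice this would invoke the relevant scanner's API or CLI.
    await asyncio.sleep(0)
    return {"check": check, "target": payload.get("ref"), "status": "queued"}

async def on_sdlc_event(event_type: str, payload: dict) -> list[dict]:
    """Fan out the checks for one event asynchronously, independent of the pipeline that caused it."""
    checks = EVENT_CHECKS.get(event_type, [])
    return list(await asyncio.gather(*(run_check(c, payload) for c in checks)))

# A webhook receiver would call, for example:
#   asyncio.run(on_sdlc_event("code.pushed", {"repo": "payments", "ref": "abc123"}))
```

Because the orchestrator listens for events rather than living inside a pipeline definition, adding a new scanner or event type does not require touching every team's CI configuration.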
The Trouble With DevSecOps
In cloud-native, microservices-based application development environments, teams may be building with multiple technologies, each optimized for delivering a particular service. Each technology may require its own specialized scanning tool, and that's just code; once you add binaries, infrastructure pipelines, data and identity to the mix, you end up with a vast array of tooling and a huge set of policies that every change needs to be assessed against.
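As a rough illustration of how quickly that policy surface grows, here is a hypothetical sketch of a small policy set being evaluated against the findings for a single change; the rule names and thresholds are invented:

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    name: str
    applies_to: str    # "code", "container", "iac", "identity", ...
    max_critical: int  # how many critical findings this rule tolerates
    max_high: int

# A real organization would carry far more of these, often per team and per environment.
POLICIES = [
    PolicyRule("no-critical-in-code", "code", max_critical=0, max_high=5),
    PolicyRule("hardened-images", "container", max_critical=0, max_high=0),
    PolicyRule("iac-baseline", "iac", max_critical=0, max_high=3),
]

def violations(findings_by_type: dict[str, dict[str, int]]) -> list[str]:
    """Return the names of the policies a change violates, given finding counts per scan type."""
    failed = []
    for rule in POLICIES:
        counts = findings_by_type.get(rule.applies_to, {})
        if counts.get("critical", 0) > rule.max_critical or counts.get("high", 0) > rule.max_high:
            failed.append(rule.name)
    return failed

# e.g., violations({"code": {"critical": 1, "high": 2}}) -> ["no-critical-in-code"]
```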
Yes, you can scan containers periodically before deploying to production, and that might be OK if you are shipping one release per month. Even then, though, you could argue that by the time the vulnerabilities are picked up, it's too late and you have already lost the battle on efficiency.
The very reason you went with a microservices-based architecture is that it's easy to scale and quick to build and deploy. Ideally, you want to release features rapidly for your customers. That means a very high rate of change, which in turn means the point-in-time assessment you did will simply not be fit for purpose when it comes to protecting your assets. Every commit has the potential to expose a new risk, which may be discovered too late if left to point-in-time assessments.
Instead, if security is orchestrated seamlessly and asynchronously through tool-agnostic interfaces across your CI/CD pipelines, delivering the outcomes required to secure your digital assets from the first commit right through to production deployments and beyond, you have truly adopted DevSecOps. In this scenario, every change is assessed in real time, and its impact is projected across development, operations, and security with a clear call to action.
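One way to picture those tool-agnostic interfaces is as a thin adapter layer: every scanner, whatever the vendor, is wrapped to emit findings in one normalized shape, so the orchestration and prioritization logic never depends on a specific tool. The class and field names below are assumptions for illustration, not an existing API:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class NormalizedFinding:
    source_tool: str
    component: str
    severity: str      # "low" | "medium" | "high" | "critical"
    description: str

class Scanner(Protocol):
    """Anything that can assess a change and return findings in the normalized shape."""
    def scan(self, change_ref: str) -> list[NormalizedFinding]: ...

class ExampleSastAdapter:
    def scan(self, change_ref: str) -> list[NormalizedFinding]:
        # Placeholder: call the real SAST tool here and translate its report.
        return [NormalizedFinding("sast", change_ref, "high", "example finding")]

def assess_change(change_ref: str, scanners: list[Scanner]) -> list[NormalizedFinding]:
    """Run every registered scanner against one change and pool the normalized results."""
    findings: list[NormalizedFinding] = []
    for scanner in scanners:
        findings.extend(scanner.scan(change_ref))
    return findings
```

Swapping one scanner for another then becomes a matter of writing a new adapter, not rewiring pipelines or re-teaching teams a new alert format.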
This approach also provides a more extensible and scalable framework for organizations as they advance through their journey of DevSecOps adoption.
Summary
No development team deliberately sets out to build and deploy an insecure application. The reason applications with known vulnerabilities are deployed so often is that the cognitive load associated with discovering and remediating them is simply too high. The average developer can allocate only 10% to 20% of their time to remediating vulnerabilities; the rest is spent either writing new code or maintaining the application development environment used to write that code. If organizations want more secure applications, they need to make it easy for developers to correlate, prioritize and contextualize vulnerabilities as they are identified. Most of the time, by the point developers are informed a vulnerability has been discovered in their code, they have long since lost context.
Vulnerabilities need to be immediately identified at the time code is written, builds are created, and pull requests are made – and identified in a way that is actionable. Otherwise, that vulnerability is likely to be thrown atop the massive pile of technical debt that developers hope they’ll one day have the time to address.
At this juncture, it's only a matter of time before governments around the world enact legislation that holds organizations more accountable for the security of the software they build and deploy. Savvy DevSecOps teams already recognize that existing approaches to managing DevSecOps workflows will need to be revamped to address these requirements. The challenge now, and the opportunity, is to put in place a scalable security orchestration framework that eliminates friction in DevSecOps workflows and drives efficiency through real-time assessments, ensuring that your software is secure by default.