If done right, DevSecOps eliminates the cultural roadblocks that often prevent organizations from getting IT security proactively involved with development and operations. Barriers are the last thing any organization should tolerate when comprehensive security is an absolute priority, at a time when data breaches, intellectual property theft and other cybercrimes can cripple a business almost immediately.
Still, companies are far from perfect. Misunderstandings, ill-defined policies and turf wars can prevent development and operations from collaborating in a way that naturally bakes security into their work. To effectively transform a DevOps pipeline into a DevSecOps one, organizations need to change the way people work and collaborate; make processes inclusive of quality, security and compliance activities; and integrate the right tools.
Those three steps are the focus of this two-part series on becoming a true DevSecOps enterprise. We reviewed processes in part one and now we’ll cover people and tools in this article.
For enterprises with deep-rooted DevOps, a renewed focus on security might entail modernizing the release pipeline and involving IT security people even more. For enterprises that are modernizing heritage monolithic application portfolios and moving to cloud native architectures, pipeline modernization should include a DevSecOps approach that considers containers, microservices and APIs.
DevSecOps done right gives organizations a level of visibility and accountability that helps accelerate coding, testing, production deployments, problem determination and defect resolution–all accomplished because of an organization-wide acceptance and commitment to security that should shatter any and all roadblocks.
Don’t Forget About the Humans in DevSecOps
When was the last time you heard a developer say, “Hey, let’s get the security guys in here to hear what they have to say?” Reality is far harsher. Collaboration with IT security often triggers visceral reactions that are not exactly positive. IT security is often stereotyped as the organization of “no” and “you can’t do that.”
DevSecOps aims to tear down those cultural barriers. DevSecOps is all about embracing your IT sec organization and even inviting auditors to collaborate early on in the release process.
Ultimately, security is everyone’s responsibility. As a developer, nothing made me feel more uncomfortable than the mention of security. I don’t believe that developers set out to purposely create insecure code. However, when IT sec is not involved in a collaborative, cooperative relationship with dev and ops, uncertainty creeps in, and trust and confidence are shattered.
We know developers develop and release code, while operators control and manage the runtime environment. What is the expectation of IT security in this process? What are they looking for? What risks do they need to mitigate when a digital product is delivered? Understanding the objectives of IT security in the delivery process will clarify the steps that must take place in the process and reveal which tools should be used to create a product that’s secure and delivered with velocity and quality.
Understanding what stakeholders do along the pipeline reconciles personal and organizational accountability with tangible value-add in the release process. DevSecOps can be a catalyst to re-examine and provide clarity on who is doing what, when it is being done and why the effort is being exerted.
Clarity might reveal that personas need to be updated to more modern roles. For example, an operations system administrator might become more of a site reliability engineer (SRE). The SRE role involves more proactive activities, earlier in the release pipeline, to infuse greater fault tolerance in code and greater resilience in the environment configurations–and to solve problems by engineering solutions with developers and IT security.
Determine who does what and where. Develop personas that outline responsibilities along the pipeline, from the sourcing of code to production, to create necessary transparency. Value-stream mapping is a good way to capture the current state of a release pipeline and an excellent way to construct a desired-state pipeline with the necessary people performing value-add activities to release a product.
Use Tools that Provide Necessary Visibility
Tools are the enablers. These are the cogs in the machine that will be running test cases, static analysis, vulnerability scans and integration tests. Tools will automate DevSecOps processes and provide the most critical function: visibility. The ability to observe all DevOps processes and determine their origins in the pipeline is critical. Visibility lessens risk and supports compliance and audits. The process that defines how code is governed through the promotion path should drive the tools that will be integrated into the release pipeline. Most tools serve to automate build, testing, scanning, movement through the promotion path and integration with other tools. It only makes sense that end-to-end visibility in the release pipeline should be engineered in.
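The gating function these tools perform can be sketched as a simple promotion check: a scan runs, findings come back with severities, and the pipeline either promotes the build or blocks it. The sketch below is illustrative only; the `Finding` shape, severity names and threshold are assumptions for the example, not the API of any real scanner.

```python
# Illustrative promotion gate: fail the pipeline when a vulnerability scan
# reports findings at or above a severity threshold. The Finding shape and
# severity names are assumptions for this sketch, not a real tool's output.

from dataclasses import dataclass

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

@dataclass
class Finding:
    rule_id: str    # e.g., a CVE identifier or static-analysis rule name
    severity: str   # one of SEVERITY_ORDER's keys
    location: str   # file, image layer or endpoint where it was found

def gate(findings, threshold="high"):
    """Return (passed, blockers); passed is False if any finding meets the threshold."""
    limit = SEVERITY_ORDER[threshold]
    blockers = [f for f in findings if SEVERITY_ORDER[f.severity] >= limit]
    return (len(blockers) == 0, blockers)

if __name__ == "__main__":
    results = [
        Finding("CVE-2021-0001", "medium", "base-image:layer-3"),
        Finding("SAST-042", "critical", "src/auth/login.py"),
    ]
    passed, blockers = gate(results)
    print("promotion allowed" if passed else f"blocked by {len(blockers)} finding(s)")
```

The value of wiring a check like this into the pipeline is exactly the visibility described above: every blocked promotion leaves a record of what failed, where and why.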
Tools, though, must integrate and communicate to achieve end-to-end visibility. In the heritage era of DevOps (basically, the time before cloud native), it was tough to gain visibility because tools came from different vendors and often couldn’t talk with one another. Fortunately, cloud native platforms offer more integrated capabilities. Build and CI tools naturally share information because these capabilities are being refactored into the cloud platforms themselves.
An example of this is the cloud native CI/CD capabilities of Tekton, which leverage and extend the operator framework in Kubernetes. Translation: CI/CD tools are starting to look like extensions of the cloud native platform, in stark contrast to heritage CI/CD tools which were standalone third-party software products. The result of this tool trend is tighter integration and better visibility built into the platform–you don’t have to engineer it.
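To make that concrete, a Tekton Task is declared as a Kubernetes custom resource, so the CI step lives in the platform alongside everything else it deploys. The task name, step name and scanner image below are placeholders for illustration, not a reference configuration.

```yaml
# Minimal, illustrative Tekton Task declared as a Kubernetes custom resource.
# The names and the scanner image are placeholders for this example.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: vulnerability-scan
spec:
  steps:
    - name: scan-source
      image: example.com/scanner:latest   # placeholder scanner image
      script: |
        #!/bin/sh
        # Run the scan against the checked-out workspace; a non-zero exit
        # code here fails the Task and halts the pipeline run.
        scan --fail-on high /workspace
```

Because the Task is just another Kubernetes object, its runs, logs and status are visible through the same platform tooling as the workloads it ships, which is the built-in visibility the paragraph above describes.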
Companies need to review how effectively their tools communicate and close any visibility gaps. Where possible, consider next-gen automation tools and technologies such as Tekton, Kabanero.io and Jenkins X, which bring greater visibility, integration and support for cloud native apps, an area where heritage tools have struggled.
DevSecOps Is a Catalyst for Pipeline Modernization
A modern security framework will let companies take full advantage of the cloud native technologies that are changing DevOps. Granted, most companies won’t completely shift their entire infrastructure to the cloud, nor should they. Most organizations need to identify a partner to help them assess which processes and tools are worth moving to the cloud, and which ones–like a mainframe that runs core business functions–should be left alone.
For those enterprises that have a large heritage application portfolio, DevSecOps could mean retrofitting automation into systems and applications that were often designed and developed without the scaffolding of a secure DevOps pipeline. DevSecOps applies to heritage applications, too. After all, what pipeline wouldn’t benefit from the visibility and traceability of tracking a user story to its production release? By automating the delivery of these heritage apps, you’re optimizing the effort of manual activities such as build, test and deploy, and retrofitting the capability of pipeline visibility.
Cloud has changed the DevOps game, newly defining the fundamental technical underpinnings of what an application really is. And it is cloud that’s driving companies to look at the security of DevOps in an entirely new way. With much on the line–the purity of products, compliance, brand reputation–companies need to rebuild their security frameworks so they can take full advantage of cloud and their on-premises programs. They don’t have to do DevSecOps alone.