
What Biden’s Cybersecurity EO Means for DevOps Teams

On May 12, 2021, President Biden issued Executive Order 14028, also known as the Executive Order on Improving the Nation's Cybersecurity. This EO covers a lot of ground, and like all executive orders, it instructs agencies of the U.S. federal government to perform specific actions. What it doesn't do is appropriate funding or create industry mandates. That will come later, once some of the preliminary stages of the EO are complete.

Despite this, I'm confident that EO 14028 could be as impactful to cybersecurity as GDPR was to data privacy. I'm also confident that the EO has the attention of business leaders who are trying to figure out what it means for them and their teams of DevOps professionals. While the EO imposes no mandate on private businesses or those located outside the U.S., it does outline a reasonable set of cybersecurity best practices that can be read as a call to action for greater transparency.

In this article, I'm going to use the terms supplier and buyer, but I don't mean to imply that money needs to change hands; only that there are producers of software (suppliers) and consumers of software (buyers). If there is one supplier and one buyer, the software supply chain has one link. If that supplier is also a buyer, the chain has two links. For most software, there are many suppliers; in these cases, the supply chain isn't so much a chain of links as a mesh of interconnected ones.

The 2021 Synopsys OSSRA report showed that the average audited application had over 500 dependencies, so that mesh is quite complex, to the point where the security practices used anywhere in it are obscure at best. It's this obscurity that attackers can exploit, which is why one of the key tenets of the EO is transparency.

As anyone charged with operating a piece of software knows, the security testing performed by a supplier isn't readily visible; as a buyer, you simply assume the supplier has done the right things when it comes to security. But how do you know? Do you know where the code originated? Do you know what security targets the supplier worked toward? If an attacker is actively compromising the application, how would you know?

These are all good questions, and each is reflected in the EO to varying degrees. Suppliers do, of course, test their software, but which testing techniques did they use, and how effective were their tools? More importantly, how can you confirm that the mitigations and fixes applied to address identified defects were effective for your deployment scenario? After all, it's not as if every supplier is going to provide you with full access to their source code.

This is where things become quite actionable for DevOps teams that have embraced operational feedback. For those needing a refresher, that's the part of the DevOps Möbius loop where lessons learned operating software are fed back into the next iteration of the development flow. Let's look at two pieces of operational feedback that are mentioned in the EO: log management and least privilege.

Log management features prominently in the EO, in part because logs enable an understanding of how an application is operating. If a supplier provides detailed information about its log structure and flows, it becomes possible to build alerts in log aggregation services and SIEMs that flag unexpected operation. That's the proactive model the EO embraces, as opposed to forensic postmortem log analysis following a breach, but it requires the supplier to provide insight into which log data indicates the system has been compromised. Of course, to provide that type of data, the supplier must have tested their software in a manner that allows such a log template to be created.
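To make that concrete, here is a minimal sketch of the idea in Python. The JSON log format, event names and their classifications are hypothetical examples; in practice you'd encode a supplier's documented log template as rules in your log aggregator or SIEM rather than as a standalone script.

```python
# Minimal sketch: turning a supplier-documented log template into proactive alerts.
# All event names and the JSON log format here are hypothetical examples.

import json
import sys

# Events the (hypothetical) supplier documents as normal operation.
EXPECTED_EVENTS = {"auth.success", "request.served", "job.completed"}

# Events the supplier documents as possible indicators of compromise.
SUSPICIOUS_EVENTS = {"auth.failure.repeated", "config.changed", "privilege.escalated"}

def triage(line: str) -> None:
    """Classify one JSON log line against the supplier's documented template."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError:
        # A line that doesn't match the documented structure is itself a signal.
        print(f"ALERT: log line deviates from documented structure: {line.rstrip()}")
        return
    event = record.get("event", "unknown") if isinstance(record, dict) else "unknown"
    if event in SUSPICIOUS_EVENTS:
        print(f"ALERT: supplier-documented indicator of compromise: {event}")
    elif event not in EXPECTED_EVENTS:
        print(f"WARN: undocumented event; review against the supplier's template: {event}")

if __name__ == "__main__":
    for raw in sys.stdin:
        triage(raw)
```

The point isn't the script itself; it's that none of this is possible unless the supplier publishes what normal and abnormal operation look like in its logs.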

We've heard for years that software should be developed using the principle of least privilege, but if you've ever been asked to run an application as root, or had a supplier fail to document the precise network ports the application requires, you know there is still work to be done. The EO specifically calls out techniques like multifactor authentication and zero-trust network access as ways to achieve granular control over how applications operate.
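As an illustration of what least privilege looks like in code, here's a minimal sketch, assuming a Unix-like system: the application binds the one privileged port it genuinely needs while still root, then drops to an unprivileged account before handling any traffic. The port and the "appuser" account name are hypothetical.

```python
# Minimal least-privilege sketch for a Unix-like system: acquire the one
# privileged resource needed (a port below 1024), then permanently drop root.
# The "appuser" account is a hypothetical unprivileged service account.

import os
import pwd
import socket

def bind_then_drop(port: int = 443, user: str = "appuser") -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", port))  # binding ports below 1024 requires root
    sock.listen(5)

    # Drop group first, then user; after this, a compromised process
    # can no longer act as root or regain those privileges.
    unprivileged = pwd.getpwnam(user)
    os.setgid(unprivileged.pw_gid)
    os.setuid(unprivileged.pw_uid)
    return sock
```

The same idea applies at the platform level, where container runtimes and Kubernetes security contexts let you declare non-root execution rather than coding it by hand.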

It is, however, the coverage of the Software Bill of Materials (SBOM) that could have the greatest impact for DevOps teams. In effect, the EO states that suppliers need to tell buyers the origin of all code within an application. While the EO doesn't specify the elements of an appropriate SBOM, it does instruct NIST to publish the minimum elements of an acceptable SBOM within 45 days of the EO. When combined with the sections on how build environments are created and managed, and the security transparency elements elsewhere in the EO, it's clear that the EO intends to raise awareness of how software is created, patched and maintained.
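To give a feel for what an SBOM communicates, here's an illustrative record sketched in Python. The field names and values are assumptions, loosely modeled on elements common to established formats such as SPDX and CycloneDX; whatever NIST publishes as the minimum elements is authoritative.

```python
# Illustrative SBOM-style record. Field names and values are hypothetical,
# loosely modeled on elements common to formats like SPDX and CycloneDX.

import json
from datetime import datetime, timezone

sbom = {
    "author": "example-supplier-security-team",  # who produced this SBOM data
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "components": [
        {
            "supplier": "Example Upstream Project",        # where the code originated
            "name": "libexample",
            "version": "2.4.1",
            "identifier": "pkg:generic/libexample@2.4.1",  # a unique identifier
            "depends_on": [],                              # dependency relationships
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

A record like this, produced for every release, is what lets a buyer answer "am I running the affected component?" when the next vulnerability disclosure lands.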

In the end, Executive Order 14028 sets a high bar for cybersecurity in general—one where everyone involved in the creation and operation of software will need to up their game. I’ve said for years that it’s impossible to patch something you don’t know you’re running, and that remains true. What must be added is that you also can’t reliably secure something when you don’t know how well each component was tested. The EO is prompting us all to take a step toward solving that problem.

Tim Mackey

Tim Mackey is a principal security strategist within the Synopsys CyRC (Cybersecurity Research Center). He joined Synopsys as part of the Black Duck Software acquisition, where he worked to bring integrated security scanning technology to Red Hat OpenShift and the Kubernetes container orchestration platform. As a security strategist, Tim applies his skills in distributed systems engineering, mission-critical engineering, performance monitoring, large-scale data center operations, and global data privacy regulations to customer problems. He takes the lessons learned from those activities and delivers talks globally at well-known events such as RSA, Black Hat, Open Source Summit, KubeCon, OSCON, DevSecCon, DevOpsCon, Red Hat Summit, and Interop. Tim is also a published O'Reilly Media author and has been covered in publications around the globe, including USA Today, Fortune, NBC News, CNN, Forbes, Dark Reading, TEISS, InfoSecurity Magazine, and The Straits Times.
