
Welcome to the New Field of Software Supply Chain Management

Supply chain management is the newest ‘shiny object’ in both the DevOps and DevSecOps communities. But what does it mean in relation to software development?

Historically, supply chain management is a commerce term that refers to tracking the logistics of goods and services as they move between producer and consumer. It covers the storage and flow of raw materials, with a focus on the channels that support the process.

When we look at this concept from the perspective of software development, we soon see that it is a new form of software configuration management. Yes, the new SCM is … SCM. The core of supply chain management is the process of controlling changes in software. The new SCM expands those old practices to include all of the ‘raw materials’ we manage as part of our software build and package steps. For instance, companies purchase and download open source libraries and build them into their in-house software. Supply chain management includes the interrogation of these ‘raw materials’ as well as all of the internal code and libraries. We will soon be hearing more about software bill of materials (SBOM) reports and difference reports, as they are critical for understanding the objects that flow through the software we create.
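To make the idea of a difference report concrete, here is a minimal sketch that compares the component lists of two builds and flags anything added, removed or re-versioned. The flat name-to-version mapping and the sample component names are illustrative assumptions; real SBOMs use richer formats such as SPDX or CycloneDX.

```python
# A minimal SBOM difference report: compare two builds' component lists
# and flag anything that changed between them. The flat {name: version}
# mapping is a simplification; real SBOMs use SPDX or CycloneDX.

def diff_sbom(previous: dict[str, str], current: dict[str, str]) -> dict:
    """Return components added, removed or re-versioned between builds."""
    prev_names, curr_names = set(previous), set(current)
    return {
        "added":   {n: current[n] for n in curr_names - prev_names},
        "removed": {n: previous[n] for n in prev_names - curr_names},
        "changed": {n: f"{previous[n]} -> {current[n]}"
                    for n in prev_names & curr_names
                    if previous[n] != current[n]},
    }

if __name__ == "__main__":
    build_100 = {"log4j-core": "2.17.1", "jackson-databind": "2.13.0"}
    build_101 = {"log4j-core": "2.17.2", "jackson-databind": "2.13.0",
                 "mystery-lib": "0.0.1"}  # an unapproved addition should stand out
    for kind, items in diff_sbom(build_100, build_101).items():
        for name, detail in items.items():
            print(f"{kind}: {name} {detail}")
```

In practice, the ‘added’ and ‘changed’ sections are where an unapproved dependency, such as the hypothetical mystery-lib above, would stand out for review.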

Until recently, the software supply chain, or software configuration management practice, wasn’t talked about much. This area of expertise has never been ‘sexy.’ That is, until the SolarWinds supply chain hack. The SolarWinds hack was a hard lesson, but it taught us to take a closer look at how we consume our raw materials and to be more serious about how we build and package our software.

According to FireEye, the firm that discovered the backdoor supply chain hack:

“SolarWinds.Orion.Core.BusinessLayer.dll is a SolarWinds digitally-signed component of the Orion software framework that contains a backdoor that communicates via HTTP to third-party servers. We are tracking the Trojanized version of this SolarWinds Orion plug-in as SUNBURST.”

It was obvious that something went very wrong with the creation of that .dll. I don’t pretend to fully understand how this breach occurred, but I do understand the build process, and I have been talking about the potential for these types of breaches for over 25 years.

There have always been big security gaps and vulnerabilities in our most basic software development practice: the compile/link process. Most build processes are imperatively defined in an Ant, Make or Maven script, and each time the build runs, it rebuilds every binary, even when the corresponding source code has not changed.

This is a problem. What we need instead is an incremental build. Incremental builds are difficult to script because they require more advanced logic, and most developers are not given adequate time to create a better build process, one that rebuilds only what changed. As a result, it is next to impossible to audit why a binary was recompiled, because every binary is recompiled every time. If we recompiled only the binaries with corresponding code updates, we could audit the code more carefully and confirm that only approved changes were included.
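As a rough sketch of that incremental logic, the snippet below recompiles a source file only when it is newer than its object file and records each recompile. The directory layout, the use of C sources and the gcc invocation are all illustrative assumptions; a correct Makefile dependency rule accomplishes the same thing, but spelling it out makes the check, and the resulting audit trail, explicit.

```python
# Illustrative incremental build: recompile only what changed and keep an
# audit trail. The src/ and build/ layout, the C sources and the gcc
# invocation are assumptions for the sake of the example.
import subprocess
from pathlib import Path

def incremental_build(src_dir: str = "src", obj_dir: str = "build") -> list[str]:
    """Recompile only sources newer than their object files; log what changed."""
    Path(obj_dir).mkdir(exist_ok=True)
    recompiled = []
    for source in sorted(Path(src_dir).glob("*.c")):
        obj = Path(obj_dir) / (source.stem + ".o")
        # The core incremental check: skip translation units that are
        # already up to date, so unchanged code is never recompiled.
        if obj.exists() and obj.stat().st_mtime >= source.stat().st_mtime:
            continue
        subprocess.run(["gcc", "-c", str(source), "-o", str(obj)], check=True)
        recompiled.append(source.name)  # audit trail: why did this .o change?
    return recompiled

if __name__ == "__main__":
    for name in incremental_build():
        print(f"recompiled: {name}")
```

The returned list answers the audit question directly: every entry corresponds to an actual source change that a reviewer can trace back to an approved update.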

This may not have been how the SolarWinds attack happened, but it could have been. Once hackers are past the firewall, they can find the build directory, read through the build scripts and identify the perfect piece of code to update with their malicious function. The next time the build runs, a ‘clean all’ recompiles everything and, like dark magic, a SUNBURST is created.

In our not-too-distant past, the developer community had tools that controlled software builds beyond what an imperative script could do. OpenMake Software, for example, provided a build automation solution called Meister that made incremental builds easy. As part of the build, it generated SBOM and difference reports, allowing teams to audit and compare the actual source changes and confirm that only approved code updates were included. Another example was Rational Software’s ClearCase, which used a program called ClearMake. ClearMake used a process called ‘winkin’ to reuse ‘derived’ objects that had not changed, creating an incremental build that could be easily audited.

I realize that I am simplifying the overall problem, but it is a good illustration of basic supply chain best practices and of how vulnerable our processes potentially are.

The time to carefully think about how we manage our raw materials and overall supply chain is now. I’m guessing that most monolithic practices will not change much; it is too costly and disruptive to rewrite hundreds of CI pipelines. But that does not mean we can’t solve the supply chain puzzle as we shift into cloud-native development in a microservices architecture.

In a microservices architecture, we need to begin thinking about what the supply chain looks like. How do we track raw materials that take the form of hundreds of small ‘bounded context’ functions? The good news is that an emerging microservice catalog market can become part of the supply chain solution. A microservice catalog can map versions of services to versions of their consuming applications. Your new SBOM report will have two levels: an SBOM for each microservice, and aggregated data at the application level. Remember, in this new architecture a full application build is never done.
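Here is a minimal sketch of that two-level report, assuming hypothetical data structures: each microservice version carries its own SBOM, and a catalog maps an application version to the service versions it consumes. The service, application and component names are invented for illustration, and none of this reflects a particular catalog’s actual API; it only shows the aggregation.

```python
# Level 1: each microservice version carries its own SBOM
# (flat {component: version} maps, a simplification as above).
service_sboms = {
    ("checkout", "1.4.0"): {"log4j-core": "2.17.2"},
    ("cart", "2.1.3"):     {"jackson-databind": "2.13.0"},
}

# The catalog maps an application version to the service versions it consumes.
app_catalog = {
    ("storefront", "10.2"): [("checkout", "1.4.0"), ("cart", "2.1.3")],
}

def application_sbom(app: str, version: str) -> dict[str, str]:
    """Level 2: aggregate service-level SBOMs into an application-level view."""
    aggregated: dict[str, str] = {}
    for service_key in app_catalog[(app, version)]:
        aggregated.update(service_sboms[service_key])
    return aggregated

if __name__ == "__main__":
    for component, ver in application_sbom("storefront", "10.2").items():
        print(f"storefront 10.2 depends on {component} {ver}")
```

Because each service is built and versioned independently, updating one service touches only its own level-one SBOM; the application-level view is recomputed from the catalog rather than from a full rebuild.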

It’s important to understand that microservices are naturally incremental and are independently built, packaged and deployed. Managing these small services opens the door to more closely auditing their source code before they are released.

As we shift from monolithic development to microservices, we have an opportunity to incorporate supply chain and configuration management best practices. This includes being able to carefully audit the code before a release goes out. And remember, we must do this as fast as possible. It sounds like a big task, but it’s well worth the effort. Collectively, we will find the answers as we explore new ways to manage our DevOps pipeline and incorporate configuration management and DevSecOps principles.


Tracy Ragan is CEO and co-founder of DeployHub. She is an expert in software configuration management and pipeline practices with a hyper focus on microservices. She currently serves as a board member of the Continuous Delivery Foundation (CDF) and is the Executive Director of the Ortelius Open Source Project incubating at the CDF. Tracy often speaks at industry conferences such as DevOps World and CDCon. She was recognized by TechBeacon as one of the top 100 DevOps visionaries in 2020.
