Earlier this year, it was announced that the attack on IT management software provider SolarWinds had been used to compromise other organizations, including parts of the United States government. The news was alarming for several reasons, but one of the biggest was the revelation that attackers had breached SolarWinds’ software development process and pipeline. The level of concern was such that it led to a first-of-its-kind executive order on cybersecurity.
The security of the software development process has been a concern for development teams for decades, beginning with Ken Thompson’s thought experiment about hacking a compiler to inject vulnerable code. As the software development process becomes increasingly automated, there is more to secure. Indeed, the biggest difference between Thompson’s “Reflections on Trusting Trust” and today is that much of our concern now stems from just how much we, as development teams, must trust code we did not write.
The issue of code ‘not written here’ extends beyond open source libraries – though those are a big part of it, since a typical Java application is 97% third-party code by weight, according to the Veracode State of Software Security, 2020. Modern cloud-native applications also include other kinds of code written by others, including container images, serverless code and other cloud-native artifacts.
As a result, an attacker could compromise a development pipeline by one of a variety of means:
- Poisoning an upstream open source repository by taking over an open source project, or by typosquatting
- Compromising a container image repository
- Gaining access to the continuous integration server and making changes in the pipeline to incorporate malicious code
- Gaining access to the source repository, identifying an existing vulnerability (zero-day) in an application, and exploiting it in production
So, what must be done to mitigate these threats? A lot of the answer comes down to testing, managing the chain of custody and monitoring access to internal resources.
- Managing access—Systems that touch your software development process provide a door that, if not properly secured, may allow an attacker to walk in and compromise the software you ship to your customers. Things to be aware of here include the source control system, the continuous integration server and other tools (including quality and security tools) that have access to the source repository. Similarly, any activity that touches the source code, especially commits to the source code repository, must be properly authenticated with a developer GPG key. This prevents spoofing a user’s identity by setting git configuration parameters (this blog post from Alessandro Segala describes the problem nicely) or by stealing credentials.
- Managing the chain of custody for software dependencies—Most modern dependency managers allow connections to multiple software registries, including public registries that can be attacked or compromised. Configuring dependency managers to allow connections only to an authorized list of registries helps keep compromised packages out of the build pipeline. It can also help ensure that only dependencies free of critical and high severity vulnerabilities make it into the application. This relates closely to the next point on testing: part of managing code from elsewhere in the software supply chain is looking for security vulnerabilities in it. Consider a security scanning tool that can detect vulnerable open source libraries, as well as container images with vulnerabilities and security misconfigurations.
- Testing for vulnerabilities—Code introduced by an attacker may introduce new vulnerabilities. Static application security testing (SAST) can help identify serious security issues, including poor cryptographic practices, hard-coded credentials and injection vulnerabilities. Performing SAST in the pipeline identifies critical and high severity defects early, enabling you to fail the pipeline upon their discovery and prevent insecure code from being deployed to production. Likewise, as noted above, testing for vulnerable containers and open source libraries can prevent vulnerabilities from being introduced via the software supply chain. Finally, using dynamic application security testing (DAST) to perform an end-to-end runtime test of a web application in a pre-production environment can help identify other exploitable issues, along with application configuration problems that are only observable in the deployed environment. Both SAST and software composition analysis (SCA) may flag weaknesses that, while indicative of poor coding practices, are not exploitable vulnerabilities. So, selecting tools that allow you to set a baseline of acceptable findings can help ensure this control is adopted successfully.
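To make the commit-authentication point concrete, here is a minimal sketch of binding git commits to a GPG signing key so that an attacker cannot impersonate a developer merely by editing git configuration parameters. The key ID shown is a placeholder; it assumes you have already generated a GPG key.

```shell
# Associate a GPG key with git and sign every commit by default.
# 3AA5C34371567BD2 is a placeholder -- substitute your own key ID.
git config --global user.signingkey 3AA5C34371567BD2
git config --global commit.gpgsign true

# Reviewers can then verify signatures before trusting a commit:
#   git log --show-signature -1
```

On the server side, platforms such as GitHub and GitLab can additionally be configured to reject unsigned commits on protected branches.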
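Restricting a dependency manager to an authorized registry can be as simple as a project-level configuration file. This sketch assumes npm; the internal registry URL is a placeholder for your organization's vetted proxy or mirror.

```shell
# Pin npm to a single authorized registry via a project-level .npmrc,
# overriding the public default so installs cannot silently pull
# packages from an untrusted source. The URL is a placeholder.
cat > .npmrc <<'EOF'
registry=https://registry.internal.example.com/
EOF
```

Other dependency managers offer equivalent controls (for example, pip's `index-url` setting or Maven repository mirrors), and an internal proxy registry gives you a single point at which to vet and cache upstream packages.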
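Failing the pipeline on critical and high severity findings can be sketched as a small gate script after the scan step. Real scanner output formats vary, so the one-finding-per-line format and the findings file are simulated here rather than produced by any particular tool.

```shell
# Simulated scanner output: one finding per line, severity first.
# In a real pipeline this file would be written by your SAST/SCA tool.
cat > findings.txt <<'EOF'
high src/auth.c:42 hard-coded-credential
medium src/util.c:10 weak-hash
EOF

# Count findings at critical or high severity.
HIGH_COUNT=$(grep -c -E '^(critical|high) ' findings.txt)

if [ "$HIGH_COUNT" -gt 0 ]; then
  echo "FAIL: $HIGH_COUNT critical/high findings"
  # In a real pipeline, exit non-zero here to block the deploy:
  # exit 1
fi
```

The same gate pattern works for SCA and container scan results, and the baseline idea from the bullet above amounts to diffing `findings.txt` against an accepted-findings list before counting.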
Solving all these problems requires thinking about different parts of the application development process as a complete system. To that end, Veracode collaborated with Venafi, Sophos and CloudBees earlier this year to put together a proposed blueprint for secure software development pipelines. The proposal is maintained on GitHub and is available for users to raise issues or propose pull requests on the blueprint—all input is welcome.
The importance of getting this right cannot be overstated. Hacks and breaches continue to hit the headlines, and putting in place the right tools, technologies and processes to minimize security risk in the software development pipeline is more critical than ever.