DevSecOps

Safeguarding Secrets Within the Pipeline

How can an organization write source code to use secrets without directly referencing them and risking them being leaked to the public?

2020 closes out a decade full of countless code leaks, a topic we talk about quite often. In fact, just recently, Intel became the latest victim after 20GB worth of internal documents were leaked and uploaded to a public sharing site.

Intel finds itself in good company as it joins a long list of 2020 victims whose confidential source code has involuntarily gone public.

Many, if not most, of these victims share one common fatal flaw: They hard-coded secrets (logins, tokens, etc.), then published their source code to a trusted cloud computing service platform such as an online Git repository.

It would be naive to lay the blame solely on the platform, however. It has long been a reliable rule of thumb that anything you put online, gated or not, is always at some level of risk. With this in mind, you should never, under any circumstances, include secrets in your source code. Having your source code stolen is damaging enough; it should not also hand over the keys to your kingdom.

That’s easier said than done, of course. How do you write source code to use secrets without directly referencing them? In this blog, we’ll give an overview of how you can use your continuous integration and delivery tools to inject secrets into your code at build or run time without ever including them in the code itself, placing a veil between your code and your secrets.

Injecting Secrets

One method for publishing source code reliant on secrets without actually including the secrets in the code is to use placeholders within the source code which, at build time, are replaced with the actual values needed.

Microsoft Partner Technology Strategist Primoz Kocevar details such a method in his Medium post. The general idea is that secret variables are provided to a Docker image so that no secrets are found within the source code itself.

An environment variable, set up beforehand, stores the secret. The variable is declared in a Dockerfile, which is used to create an image. During the build phase, the secret is passed to the Docker image through the variable the source code references, so no value is ever written into the code itself. Once complete, the application and image can be deployed.
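To make this concrete, here is a minimal sketch of the idea, assuming a Python application and a hypothetical variable name (APP_DB_PASSWORD); the file, variable and image names are placeholders rather than anything from Kocevar's post. Both the Dockerfile and the source code reference only the variable, never the value:

```dockerfile
# Dockerfile -- no secret value appears here or anywhere else in the repository
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

```python
# app.py -- the source code references only the placeholder variable
import os

db_password = os.environ["APP_DB_PASSWORD"]  # real value injected by the pipeline
```

The real value is supplied by the build or deployment environment when the container is created, which the following sections cover, so it lives outside the repository and the image layers.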

If you happen to run your images on Kubernetes, you can also leverage its built-in Secret objects for securely storing secrets. This method gives you some additional options, such as exposing a Secret to a pod as an environment variable or a mounted volume at deployment time. (ConfigMaps work the same way but are intended for non-sensitive configuration.)
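As a rough sketch (the Secret, key, variable and image names here are all hypothetical), the Secret is created by the pipeline rather than committed to the repository, and the pod spec only points at it:

```sh
# Created by the pipeline; the value comes from the CI/CD tool's secret store
kubectl create secret generic db-credentials \
  --from-literal=password="$APP_DB_PASSWORD"
```

```yaml
# Pod spec fragment: the Secret is surfaced to the container as an env var
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myorg/myapp:latest
      env:
        - name: APP_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```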

By the way, this approach to secret injection is by no means exclusive to Docker or Kubernetes. Though we highlight them here as specific examples, the same method can be employed via virtually any similar solution you rely on for deploying pods or containers.

Injecting secrets via clever use of environment variables and configuration files in your microservices platform of choice is certainly a viable option, but we don’t think it’s the best option. Consider that this approach requires one image per secret. This can get cumbersome fast if you have a development environment with one secret and a production environment with another, each requiring a different image.

Leveraging Your CI/CD Tools

First, a quick primer for the uninitiated: Continuous integration (CI) and continuous delivery (CD) tools and practices are an essential part of the larger software supply chain. Continuous processes are those that enable developers to publish code changes easily and frequently without degrading the product’s reliability.

Continuous integration tools and practices establish uniform processes for creating, testing and building source code, integrating with any required external systems along the way. Continuous delivery covers the activities that follow, streamlining and automating the delivery of source code to infrastructure, especially when multiple environments are involved.

Many CI/CD tools provide their own built-in solutions for securing sensitive data. You can leverage these features when storing secrets until it’s time to inject them during the build.

For example, Jenkins provides its own credentials store and its own automatically encrypted Secret field type. (A word of caution: This alone doesn’t prevent a user from accidentally printing a secret in the console log.)
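As a sketch of how that looks in practice (the credential ID db-password and the deploy script are hypothetical), a declarative pipeline can bind a stored secret-text credential to an environment variable for just the steps that need it:

```groovy
// Jenkinsfile fragment -- 'db-password' is a hypothetical ID in the Jenkins
// credentials store; the binding masks the value in the console log, but
// deliberately echoing the variable would still expose it.
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                withCredentials([string(credentialsId: 'db-password',
                                        variable: 'APP_DB_PASSWORD')]) {
                    sh './deploy.sh'  // reads APP_DB_PASSWORD from its environment
                }
            }
        }
    }
}
```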

Another CI tool, CircleCI, offers a number of options for protecting secrets, such as secret masking, which prevents secrets from appearing in terminal output, and contexts, which restrict access to certain variables to specific user groups.
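A minimal sketch, assuming a context named prod-secrets that holds APP_DB_PASSWORD (both names are hypothetical): the variable is defined in the CircleCI UI, attached to the job through the context, and never appears in the repository:

```yaml
# .circleci/config.yml fragment
version: 2.1
jobs:
  deploy:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      # APP_DB_PASSWORD is provided by the "prod-secrets" context at run time
      - run: ./deploy.sh
workflows:
  release:
    jobs:
      - deploy:
          context: prod-secrets
```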

There are, of course, as many additional options as there are tools available.

We can leverage such features to enhance our earlier example: secrets stored in your CI/CD tool can be injected into the docker run command (or your platform’s equivalent) during deployment. In other words, your CI/CD tool of choice injects the secrets into the image, which in turn exposes them to the corresponding variables in the source code. This method ensures our secrets never appear in the actual source code.
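In shell terms, the deployment step might look roughly like this, assuming the CI/CD tool has already exposed the stored secret as APP_DB_PASSWORD in the job's environment (the variable, container and image names are hypothetical):

```sh
# Run inside a CI/CD job step; the value comes from the tool's secret store,
# not from the repository or the image
docker run -d --name myapp \
  -e APP_DB_PASSWORD="$APP_DB_PASSWORD" \
  myorg/myapp:latest
```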

A word of caution: Even utilizing this approach, your secrets are still only as safe as the tool you use to secure them. The Bleeping Computer article we linked earlier in this post points out how weakly configured tools themselves pose a threat: “The leaks have been collected by Tillie Kottmann, a developer and reverse engineer, from various sources and from their own hunting for misconfigured DevOps tools that offer access to source code” (emphasis ours).

It’s important to stress the universal rule that there is no such thing as perfect security. Consider that encrypting your secrets within one of these tools offers little protection if access to those tools is not sufficiently restrictive. We recommend cautiously limiting who is granted what level of user access to each tool, applying the least privilege principle.

Protection Post-Deployment

Now that we have built and deployed our secret-free code, what’s next?

If we were to stop here, we would be overlooking the fact that any developer who has access to our containers can still see our secrets. The downside of a microservice infrastructure is that security gaps can open up between all the moving parts, from the moment the first line of code is written to the moment it all goes live, and those gaps can easily go unnoticed.

Containers and pods need to communicate with each other and with other endpoints. An attacker can exploit this by compromising one container or pod and then moving laterally to others, leaving any secrets stored within them ripe for the taking. The bigger your distributed infrastructure, the harder this is to prevent, because of the inevitable complexity of the security policies required to do so.

Thankfully, many tools offer options such as built-in network/firewall policies between containers/pods and endpoints, an option to encrypt data at rest or a mechanism for enforcing the least privilege principle, such as via restricted security groups.
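On Kubernetes, for instance, one simple building block is a default-deny NetworkPolicy; the sketch below (the namespace name is hypothetical) blocks all ingress to pods in a namespace unless another policy explicitly allows it, limiting an attacker's room for lateral movement:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: myapp          # hypothetical namespace
spec:
  podSelector: {}           # selects every pod in the namespace
  policyTypes:
    - Ingress               # no ingress rules listed, so all ingress is denied
```

Note that policies like this are only enforced if the cluster's network plugin supports them.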

Some tools, Kubernetes among them, may also allow you to configure a Key Management Service (KMS) provider to encrypt your secrets. The KMS provider applies an envelope encryption scheme to the data stored within etcd. The added benefit of this approach is that the key-encryption key is stored remotely in the KMS rather than on the same host.
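In Kubernetes, for example, this is configured through an EncryptionConfiguration file passed to the API server via the --encryption-provider-config flag; the sketch below assumes a KMS plugin already listening on a local socket (the provider name and socket path are hypothetical):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Envelope encryption: data-encryption keys are wrapped by a key held in
      # the external KMS, never stored in plaintext on the API server host
      - kms:
          name: my-kms-provider
          endpoint: unix:///var/run/kms-plugin/socket.sock
          cachesize: 1000
          timeout: 3s
      # Fallback for reading data written before encryption was enabled
      - identity: {}
```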

Conclusion

Regardless of what tools you have in play along your development pipeline, secrets must always exist outside of the source code and should be (securely) injected into the code as it passes through the pipeline.

Wherever secrets ultimately live, be it a remote storage location, a container or pod, or a CI/CD tool, they must always be encrypted and never stored in plaintext.

Finally, all the obfuscation and encryption in the world will not protect secrets from a lackadaisical user access policy. The least privilege principle should always be in play and user security groups should be strictly managed to control who has access to the systems responsible for securing your secrets.

All that said, mistakes still happen, and none of the tools discussed here can guarantee that a developer will never accidentally hard-code a secret into your source code. It is with that in mind that we advise adding another layer of security by automatically monitoring your source code repositories for secrets.

Dor Atias

Dor Atias is the VP of Engineering at Cycode. Dor is a former officer in the Intelligence Corps Technological Unit with more than 10 years of experience in software engineering, DevOps, cloud and SaaS. He was one of the key R&D members in BlazeMeter's 2016 acquisition by CA Technologies and led the platform group after the acquisition.
