In the duality of continuous integration/continuous delivery (CI/CD), CI focuses on build automation. Leveraging CI practices for software delivery is the norm for most organizations and can be seen as a solved problem. However, the demand for increasingly distributed applications has risen with the expansion of microservices, and development teams now expect every commit to trigger a build.
With these expectations, continuous integration has become an unexpected bottleneck. To begin modernizing CI, it’s important to first understand some of the challenges. Let’s dig in.
Challenges of Continuous Integration
Because builds and release candidates closely track advances in development technology, such as new languages, new packaging formats and new paradigms for testing the artifact, expanding the capabilities of a continuous integration implementation can be challenging. With the introduction of container technology, the compute power and velocity required to build increased greatly.
Scaling Continuous Integration Platforms
The infrastructure required to run a distributed CI platform can be as complex as the applications it builds because of the heavy compute requirements. Consider how much of a local machine’s resources are tied up during a local build and test cycle. Now multiply that by the number of people on a team or in an organization, and you start to understand just how much compute power is being used. Distributed build runners are just one area of complexity, and whether the platform or the end user decides when new build nodes are spun up and spun down varies.
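To make that multiplication concrete, here is a rough, back-of-envelope sketch in Python. The developer counts, build times and core counts are purely illustrative assumptions, not benchmarks from any real platform.

```python
# Back-of-envelope estimate of CI compute demand (illustrative numbers only).
developers = 40            # engineers pushing commits
builds_per_dev_per_day = 8
minutes_per_build = 12     # one build-and-test cycle on a single runner
cores_per_build = 4        # cores a single build/test cycle keeps busy

core_minutes_per_day = (developers * builds_per_dev_per_day
                        * minutes_per_build * cores_per_build)
print(f"~{core_minutes_per_day:,} core-minutes/day "
      f"(~{core_minutes_per_day / 60:.0f} core-hours/day) of build capacity")
```

Even with modest assumptions, the daily demand lands in the hundreds of core-hours, which is why runner scaling decisions matter so much.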
Keeping Up With Technology Velocity
The adage that “the only constant in technology is change” holds true. New languages, platforms and paradigms are to be expected as technology pushes forward. Incorporating new technologies into a heterogeneous build or adopting new testing paradigms, however, can be difficult on more rigid CI platforms. These legacy platforms were designed for only a small subset of technologies, and with legacy or rigid approaches, the dependency management required to maintain technical velocity becomes a significant burden.
What It Takes to Modernize Continuous Integration
The simplest way to change continuous integration practices and platforms is to take what I call the four-pillar approach: platform infrastructure modernization, engineering efficiency strategy, test optimization and rapid pipeline development. Making strides in any of these pillars will put you on the path toward modernizing your continuous integration platforms and practices. Making strides in all of the pillars will set up your DevOps teams for significant success.
Platform Infrastructure Modernization
Build nodes/runners, where the actual builds and packaging take place, do most of the heavy lifting and have a fairly elastic workload. Builds (e.g. a Java JAR build) and packaging (e.g. building a Docker image) are compute-heavy tasks. Once the build and packaging are complete, the runners can sit idle. This shows the importance of having ephemeral build nodes: a build node spins up for the task, then spins down or is destroyed once the build is done, so as not to drain resources.
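As a rough illustration of that lifecycle, the following Python sketch spins a runner up only when a job arrives and always tears it down afterward. The provisioning functions are hypothetical stand-ins, not a real platform SDK.

```python
# Minimal sketch of the ephemeral build node pattern: spin up, build, tear down.
# provision_runner/run_build/destroy_runner are hypothetical placeholders for
# whatever API your CI platform or cloud provider actually exposes.

def provision_runner(cpu: int, memory_gb: int) -> str:
    print(f"provisioning runner ({cpu} vCPU / {memory_gb} GB)")
    return "runner-001"  # stand-in for a real node handle

def run_build(runner: str, job: str) -> bool:
    print(f"{runner}: building and packaging {job}")  # the compute-heavy part
    return True

def destroy_runner(runner: str) -> None:
    print(f"destroying {runner}")  # nothing sits idle draining resources

def run_ephemeral_build(job: str) -> bool:
    runner = provision_runner(cpu=4, memory_gb=8)  # created only when a job arrives
    try:
        return run_build(runner, job)
    finally:
        destroy_runner(runner)  # always torn down, even if the build fails

run_ephemeral_build("payments-service")
```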
Engineering Efficiency Strategy
A core tenet of engineering efficiency is meeting your internal customers where they are. For software engineers, this means being as close to their tools and projects as possible. As with many modern pieces of application infrastructure, shifting left to the developer means being included in the project structure in source code management (SCM).
A common disconnect in CI platforms is dependency management. Over the past decade, this problem has largely been solved for software engineers by package and build tools such as Maven, Gradle and NPM: engineers define, implicitly or explicitly, what they need, and the dependencies are resolved. Continuous integration tools can suffer from a disconnect, since we are leveraging several tools that potentially don’t have a common syntax. So, again, it’s important that developers have close oversight of all the tools they need to complete their projects to reduce this friction.
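To illustrate the “declare what you need and let the tool resolve it” model, here is a toy Python sketch of transitive dependency resolution. The package graph is invented for illustration and is not a real registry or a real resolver.

```python
# Toy illustration of declarative dependency resolution: the engineer states
# direct dependencies; the tool walks the transitive graph for them.
from collections import deque

KNOWN_DEPENDENCIES = {            # hypothetical package -> its own dependencies
    "web-framework": ["http-client", "templating"],
    "http-client": ["tls"],
    "templating": [],
    "tls": [],
}

def resolve(direct_deps):
    """Return direct plus transitive dependencies, as Maven/Gradle/NPM would."""
    resolved, queue = set(), deque(direct_deps)
    while queue:
        pkg = queue.popleft()
        if pkg not in resolved:
            resolved.add(pkg)
            queue.extend(KNOWN_DEPENDENCIES.get(pkg, []))
    return sorted(resolved)

print(resolve(["web-framework"]))  # ['http-client', 'templating', 'tls', 'web-framework']
```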
Test Optimization
As confidence in getting tests into the CI pipeline grows, the ease of integration with the pipeline means more tests, and sometimes inappropriate test coverage, are introduced. An even harder problem to identify and rectify is flaky tests. A flaky test is one that both passes and fails periodically without any code changes. This becomes a twofold problem: longer execution times and a loss of confidence caused by flakiness. The most logical way to avoid these issues is test optimization. A modern continuous integration solution should be able to visualize pipeline order, timings and overall execution to help identify (and eventually rectify) excessive coverage and flakiness.
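One simple way to surface flakiness is to rerun a test repeatedly with no code changes and flag it when the results disagree. The Python sketch below shows the idea with an artificial, randomly failing test standing in for real timing or environment sensitivity.

```python
# Minimal sketch of flaky-test detection: rerun a test with no code changes
# and flag it if it both passes and fails across those runs.
import random

def suspected_flaky_test() -> bool:
    # Artificial stand-in: passes ~70% of the time despite no code change.
    return random.random() > 0.3

def classify(test, runs: int = 20) -> str:
    results = {test() for _ in range(runs)}
    if results == {True}:
        return "stable pass"
    if results == {False}:
        return "stable fail"
    return "flaky: passes and fails without any code change"

print(classify(suspected_flaky_test))  # almost always reports the test as flaky
```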
Rapid Pipeline Development
Software is an exercise in iteration. The lower the barriers to iteration, the greater the gains in engineering efficiency and agility. Local builds happen dozens of times before code reaches a committable stage and moves forward to a dev-integration environment. This is where having a local environment is key. Oddly, most continuous integration pipelines are designed to run externally, away from the local machine, so the ability to run the CI pipeline locally helps build confidence that the externally run pipeline will succeed.
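As a sketch of what running the pipeline locally can look like, the short Python script below replays a few assumed pipeline stages on the developer’s machine and stops at the first failure. The commands shown are examples for a hypothetical Java service with a Dockerfile, not your actual pipeline definition; swap in whatever your pipeline really runs.

```python
# Sketch: replay assumed CI stages locally before pushing.
# Requires Maven and Docker on the local machine; the stage commands are
# illustrative and should mirror your real pipeline definition.
import subprocess
import sys

STAGES = [
    ("build", ["mvn", "-q", "package", "-DskipTests"]),
    ("test",  ["mvn", "-q", "test"]),
    ("image", ["docker", "build", "-t", "my-service:local", "."]),
]

for name, cmd in STAGES:
    print(f"--- stage: {name} ---")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"stage '{name}' failed; fix locally before pushing")

print("all stages passed locally")
```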
Continue on Your Continuous Integration Journey
Continuous integration might seem like a solved problem for many organizations, but as with any technology, there is always room for improvement and modernization. Like any engineering efficiency paradigm, CI is a journey, and the destination is ever-evolving.