The rapid adoption of AI coding assistants like GitHub Copilot has transformed how software teams approach development tasks. But as these tools become more integrated into daily workflows, new research raises important questions about potential trade-offs between speed and quality in AI-assisted development.
The Hidden Cost of AI-Generated Code
A comprehensive study from developer analytics firm GitClear has uncovered concerning trends in code quality since the widespread adoption of AI coding tools. The Seattle-based company analyzed 153 million changed lines of code, comparing patterns from 2023 with those from the years before AI tools became prominent in development environments.
The findings should give DevOps teams pause as they consider how to integrate AI into their development pipelines.
“What we’re seeing is that AI code assistants excel at adding code quickly, but they can cause ‘AI-induced tech debt,’” explained GitClear founder Bill Harding. “This presents a significant challenge for DevOps teams that prioritize maintainability and long-term code health.”
Key Findings That DevOps Teams Should Consider
The GitClear study identified several metrics that indicate potential quality issues with AI-generated code:
Rising Code Churn Rates
“Code churn,” defined as the percentage of code that gets discarded less than two weeks after being written, is increasing dramatically. The study projects this metric will double in 2024, creating substantial risks for DevOps teams deploying to production environments.
This rapid turnover suggests that while AI tools make writing code faster, the resulting output may require significantly more revisions before reaching production quality. For DevOps teams already balancing speed with stability, this adds another layer of complexity to the deployment pipeline.
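GitClear’s full methodology isn’t reproduced here, but the underlying idea is easy to approximate from git history. The Python sketch below is an illustration only, not GitClear’s definition: it treats lines added to a file during a window that are deleted again within 14 days as “churned,” using standard `git log --numstat` output. The dates and the 14-day cutoff are assumptions for the example.

```python
# Illustrative churn proxy (an assumption, not GitClear's metric).
# Requires a local git repository; uses only the standard library + the git CLI.
import subprocess
from collections import defaultdict
from datetime import datetime, timedelta

def numstat(since: str, until: str) -> dict:
    """Sum lines added/deleted per file in a date range via `git log --numstat`."""
    out = subprocess.run(
        ["git", "log", "--numstat", "--pretty=format:",
         f"--since={since}", f"--until={until}"],
        capture_output=True, text=True, check=True,
    ).stdout
    totals = defaultdict(lambda: [0, 0])  # file -> [added, deleted]
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            totals[parts[2]][0] += int(parts[0])
            totals[parts[2]][1] += int(parts[1])
    return totals

def churn_proxy(window_start: str, window_end: str) -> float:
    """Crude file-level proxy: share of lines added in the window that are
    deleted again within the following 14 days."""
    added = numstat(window_start, window_end)
    follow_end = (datetime.fromisoformat(window_end) + timedelta(days=14)).date().isoformat()
    later = numstat(window_end, follow_end)
    total_added = sum(a for a, _ in added.values())
    # Count deletions as churn only up to the amount added in the same file.
    churned = sum(min(added[f][0], later.get(f, [0, 0])[1]) for f in added)
    return churned / total_added if total_added else 0.0

if __name__ == "__main__":
    print(f"approximate churn rate: {churn_proxy('2024-01-01', '2024-02-01'):.1%}")
```

A production-grade measurement would track individual lines across commits; this file-level version only approximates churn, but it is enough to watch the trend on a team’s own repositories.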
Problematic Code Composition
Perhaps more concerning for long-term maintainability is the change in the composition of code additions. The study found that “copy/pasted code” is increasing at a faster rate than “updated,” “deleted,” or “moved” code.
“In this regard, the composition of AI-generated code is similar to a short-term developer that doesn’t thoughtfully integrate their work into the broader project,” noted Harding.
This trend aligns with what many experienced developers have observed: AI tools excel at generating new code snippets but often lack the contextual understanding needed to properly integrate with existing codebases.
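Teams can spot the copy/paste pattern in their own repositories without specialized tooling. The sketch below is a hypothetical illustration, not GitClear’s classifier: it hashes normalized sliding windows of lines and reports any window that appears in more than one location. The window size and minimum-length filter are arbitrary assumptions.

```python
# Hypothetical copy/paste detector (an assumption, not GitClear's method):
# hash normalized 6-line windows and report windows seen in multiple places.
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 6  # lines per window; small enough to catch pasted snippets

def normalize(line: str) -> str:
    # Collapse whitespace so formatting-only differences don't hide duplicates.
    return " ".join(line.split())

def duplicate_windows(root: str, pattern: str = "**/*.py") -> dict:
    seen = defaultdict(list)  # window hash -> [(file, start line), ...]
    for path in Path(root).glob(pattern):
        lines = [normalize(l) for l in path.read_text(errors="ignore").splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i : i + WINDOW])
            if len(chunk.strip()) < 40:  # skip near-empty windows
                continue
            digest = hashlib.sha1(chunk.encode()).hexdigest()
            seen[digest].append((str(path), i + 1))
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    for locations in duplicate_windows(".").values():
        print("possible copy/paste:", locations)
```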
The Technical Debt Accelerator
The concept of technical debt — the implied cost of future rework caused by choosing quick solutions now instead of better approaches that take longer — isn’t new to DevOps professionals. However, AI appears to be amplifying this challenge.
As Armando Solar-Lezama, a professor at MIT, colorfully described to The Wall Street Journal, AI is like a “brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before.”
For DevOps teams, which often bear the burden of maintaining and operating these systems long-term, this has significant implications. The speed benefits during initial development could be overshadowed by increased complexity during deployment, operations and future iterations.
Impacts on DevOps Culture and Practices
The findings also raise important questions about how AI tools might affect DevOps culture and practices:
Compensation and Incentives
“If engineering leaders are making salary decisions based on lines of code changed, the combination of that plus AI creates incentives ripe for regrettable code being submitted,” warned Harding.
This highlights a potential misalignment between traditional productivity metrics and quality outcomes that could undermine DevOps principles of shared responsibility and continuous improvement.
Code Review Processes
With more code being generated and submitted faster, DevOps teams may need to rethink their code review processes. The traditional approach of reviewing line-by-line changes may become impractical, requiring more automated quality gates and context-aware review tools.
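What a first automated gate might look like in practice: the hypothetical triage script below routes oversized diffs to deeper human review instead of reviewing everything line by line. The 400-line threshold and the `origin/main` base branch are assumptions for the example, not recommendations from the study.

```python
# Hypothetical review-triage helper (an assumption, not a named tool):
# flag large diffs for extra human scrutiny. Uses the git CLI.
import subprocess

MAX_CHANGED_LINES = 400  # illustrative threshold

def changed_lines(base: str = "origin/main") -> int:
    """Total lines added + deleted on this branch relative to `base`."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            total += int(parts[0]) + int(parts[1])
    return total

if __name__ == "__main__":
    n = changed_lines()
    print("needs deep review" if n > MAX_CHANGED_LINES else "standard review", f"({n} lines)")
```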
Monitoring and Observability
The potential for increased defect rates and unintended side effects from AI-generated code suggests DevOps teams may need to invest more heavily in robust monitoring and observability solutions to catch issues in production environments.
Finding Balance in the AI Era
Despite these concerns, the study doesn’t suggest abandoning AI coding tools altogether. Instead, it points to the need for more thoughtful integration of these technologies into DevOps workflows.
A McKinsey study referenced in the research found that productivity gains from AI coding tools are possible but depend heavily on task complexity and developer experience. “Ultimately, to maintain code quality, developers need to understand the attributes that make up quality code and prompt the tool for the right outputs,” the McKinsey study concluded.
This suggests that DevOps teams can benefit from AI tools while mitigating risks by:
- Establishing clear quality guidelines for AI-generated code
- Implementing stronger automated testing requirements for AI-assisted contributions (a minimal gate is sketched after this list)
- Creating feedback loops that help developers improve their prompting techniques
- Adopting metrics beyond lines of code changed to evaluate developer productivity
- Educating teams about the specific strengths and limitations of AI coding assistants
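For the automated-testing item above, a gate can be as small as a CI script that refuses to pass unless the test suite succeeds and coverage clears a floor. This is a minimal sketch assuming pytest and coverage.py are installed; the 80% threshold is illustrative, not a figure from the study.

```python
# Minimal CI quality gate (illustrative; assumes pytest and coverage.py).
import subprocess
import sys

MIN_COVERAGE = 80  # percent; teams might tighten this for AI-assisted changes

def main() -> int:
    # Run the suite under coverage; any test failure blocks the merge.
    if subprocess.run(["coverage", "run", "-m", "pytest"]).returncode != 0:
        print("gate: tests failed")
        return 1
    # `coverage report --fail-under=N` exits nonzero below the threshold.
    if subprocess.run(["coverage", "report", f"--fail-under={MIN_COVERAGE}"]).returncode != 0:
        print(f"gate: coverage below {MIN_COVERAGE}%")
        return 1
    print("gate: passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```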
According to Mitch Ashley, VP and Practice Lead, DevOps and Application Development, The Futurum Group, “‘AI-induced technical debt’ is a good way to describe the side effects of overinflated expectations of AI-generated code. Engineering managers and software developers inherently know that creating software is a highly iterative process, continually improving, optimizing and securing code before it moves to production. Most developers prefer working with their code over reading, understanding and fixing the code of others, including AI-generated code.
Software Engineering Instrumentation is a rapidly growing category that measures the metrics for understanding flow, output and productivity. Companies including GitClear, Code Climate, LinearB, Jellyfish, PluralSight Flow, Sleuth and Haystack are perfectly positioned to measure the output from using AI technologies in software development.”
The Path Forward
The GitClear research provides valuable empirical evidence that should inform how DevOps teams approach AI integration. While these tools offer unprecedented speed in generating new code, the potential impact on long-term maintainability deserves serious consideration.
“Fast code-adding is desirable if you’re working in isolation or on a Greenfield problem,” noted Harding. “But hastily added code is caustic to the teams expected to maintain it afterward.”
This perspective resonates deeply with DevOps professionals who understand that delivery doesn’t end at deployment. As AI coding tools become more powerful and prevalent, finding the right balance between velocity and quality will be essential to realizing their full potential while avoiding their pitfalls.
The challenge lies not in deciding whether to use AI in development, but rather in determining how to use it responsibly within a DevOps framework that values speed and sustainability.