A survey of 500 developers conducted by the market research firm OnePoll on behalf of Sauce Labs found that more than two-thirds (67%) admitted to pushing code into a production environment without testing it, with more than a quarter (28%) acknowledging they do so on a regular basis.
More troubling still, 60% also admitted to using untested code generated by ChatGPT, with more than a quarter (26%) doing so regularly.
More than two-thirds of developers have also merged their own pull requests without a review, with 28% confessing they do so often or very often.
Finally, the survey found three-quarters of developers admitted to circumventing security protocols, with 39% doing so routinely. A full 70% also acknowledged using a coworker’s credentials to bypass access restrictions on data and/or internal systems, with 41% doing so regularly.
Jason Baum, director of community at Sauce Labs, said that while the survey makes it clear best practices are being ignored in a lot of instances, much of this behavior can also be attributed to the amount of extra work developers now routinely take on. The survey found, for example, that more than three-quarters of developers (77%) have assumed more responsibility for testing in the last year.
The issue organizations need to evaluate is how much of this is simple laziness versus developers looking for shortcuts in an era when too many tasks have been shifted left onto them faster than they can absorb, said Baum. Many organizations might have better outcomes if more of those tasks were automated within the context of a DevOps workflow to reduce the cognitive load being placed on developers, he added.
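To make that point concrete, even a guardrail as simple as a Git pre-push hook can automate a check that would otherwise depend entirely on developer discipline. The sketch below is illustrative only, not something Baum or Sauce Labs prescribes; it assumes a Python project whose test suite runs under pytest.

```python
#!/usr/bin/env python3
"""Illustrative Git pre-push hook: block pushes when the test suite fails.

A minimal sketch of the kind of automated guardrail described above.
The test runner (pytest) and project layout are assumptions, not
details from the survey. Install by copying this file to
.git/hooks/pre-push and marking it executable.
"""
import subprocess
import sys


def main() -> int:
    # Run the project's test suite; a non-zero exit code means failures.
    result = subprocess.run(["pytest", "--quiet"])
    if result.returncode != 0:
        print("pre-push: tests failed; push blocked.", file=sys.stderr)
        print("pre-push: fix the failures before pushing.", file=sys.stderr)
        return 1  # Git aborts the push on any non-zero exit code.
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice, teams typically layer the same check into CI and branch protection rules so it cannot be bypassed locally, which is precisely the kind of shift-left automation that removes the temptation to skip testing altogether.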
The simple fact of the matter, said Baum, is that there is a fundamental skills gap. The number of so-called full-stack developers capable of managing the entire software development life cycle is relatively small, he noted.
In the meantime, the amount of code being generated is steadily increasing as developers take advantage of generative AI platforms to improve productivity. The challenge is that the large language models (LLMs) used to create that code were trained on examples of code pulled from across the web, much of which contains known vulnerabilities and is of varying quality. As a result, those LLM platforms will frequently generate snippets of code that vary in quality from one request to the next.
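The security half of that concern is easy to illustrate. The snippet below shows a classic flaw of the sort that circulates widely in code scraped from the web, a SQL query built by string formatting, alongside its parameterized equivalent; the example is hypothetical and not drawn from the survey.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: the attacker-controlled `name` value is spliced
    # directly into the SQL text, enabling SQL injection.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()


def find_user_safe(conn: sqlite3.Connection, name: str):
    # Safe: the driver binds `name` as data via the ? placeholder,
    # so it can never be interpreted as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

An LLM trained on code containing both patterns may emit either one, which is why generated code still needs the review and testing the survey suggests developers are skipping.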
DevOps teams responsible for managing the overall codebase are now finding their pipelines overwhelmed by that volume. That’s becoming especially problematic at a time when governments around the world are crafting legislation that will hold organizations more accountable for the security of the applications they build and deploy.
Eventually, AI will also be used to automate more of the DevOps workflow itself, which should enable software engineering teams to keep pace with the accelerated rate at which code is being written. In the longer term, LLMs trained on vetted code should also generate higher-quality code than a general-purpose LLM such as ChatGPT. In the meantime, however, massive amounts of untested code will likely set off cascading downstream problems that come back to haunt DevOps teams in the months and years ahead.