PlayerZero has extended its namesake platform, which leverages artificial intelligence (AI) to analyze codebases, with the ability to simulate code created using generative AI coding tools.
Fresh off the company raising an additional $15 million in funding, PlayerZero CEO Animesh Koratana said CodeSim makes it possible to predict how that code will impact a codebase without relying on legacy testing tools that were created for a previous era of application development.
PlayerZero has developed a platform that uses AI agents to analyze a codebase, its history, the tickets created, runtime telemetry, documentation and user analytics. CodeSim builds on that foundation using Sim-1, an AI model purpose-built to refine code understanding into scenarios and concepts that can be used to simulate code behavior.
DevOps teams can then launch natural language queries to better understand how code works, why it may be broken and how it should be fixed, noted Koratana.
That will be crucial in the age of AI, he added, as the volume of code being created overwhelms legacy testing tools that cannot keep pace with citizen and professional application developers who, for example, are embracing vibe coding to rapidly prototype and build applications. PlayerZero itself is specifically designed to onboard new developers in a matter of hours, said Koratana.
Most enterprise software engineering teams already spend up to 70% of their time on activities such as investigating and fixing software bugs rather than writing code. In the AI era, a different approach to testing software, one that does not require DevOps teams to set up and maintain a separate unit testing platform, will be an absolute requirement, noted Koratana.
It’s not clear how quickly software engineering teams are embracing AI to accelerate software development, but legacy platforms will not be able to provide the same level of insight simply because they were not designed to analyze all the codebase data that needs to be pulled from multiple sources, said Koratana.
The one thing that is all but certain is that the volume of code being generated will soon put existing DevOps workflows to the ultimate test. Historically, whenever there is a major innovation, most organizations first use it to do the same things faster with the same processes. Eventually, however, the processes themselves are reinvented to realize the full benefit of the innovation. Arguably, in the case of AI it won’t be long before software engineering teams start to revamp the entire process of building and deploying software on an end-to-end basis.
In the meantime, DevOps teams might want to start identifying which elements of the application development process can be reengineered today, with an eye toward how other processes might one day be similarly transformed for the AI era. After all, DevOps has always been about a commitment to continuous improvement, which ultimately means there should never be any sacred cows when it comes to reliance on a particular tool or platform.