Perforce Software today added an artificial intelligence (AI) agent that autonomously adapts the tests it creates as changes are made to mobile computing applications.
Don Jackson, technical evangelist for Perforce, said it is now possible for DevOps teams to use a natural language interface to leverage an AI model, developed by Perforce, that generates tests on the Perfecto AI test automation platform, which already eliminates the need to create test scripts.
Based on a proprietary AI model developed by Perforce Software, that approach enables the AI agent to autonomously make adjustments in real time as changes are made to the user interface (UI) or user flows of an application, he added. The Perforce AI agent makes decisions and takes actions based on the state of the application under test (AUT) to achieve the objective described in a tester's prompt, said Jackson. That autonomy applies not only when tests are created, but also at execution time, he noted.
Whenever the AUT changes, as long as the objective is still achievable through user actions, the Perforce test will not break; instead, it will look for a new way to navigate through the AUT to achieve the test objective, noted Jackson. As a result, automating a test case requires no knowledge of a programming language and no installation of, selection of, or reliance on any specific framework, and because the test doesn't break when the AUT changes, it requires no maintenance, he said.
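To make the idea concrete, the objective-driven loop described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not Perfecto AI's actual API: the names (`AppState`, `plan_next_action`, `run_test`, `FakeApp`) are invented, and the "planner" here is a trivial keyword matcher standing in for an AI model. The point is the control flow: the test holds an objective rather than a script, and when the expected element is missing it re-plans a path instead of failing.

```python
# Hypothetical sketch of an objective-driven, self-healing test loop.
# All names are illustrative; a real agent would use an AI planner, not keyword matching.
from dataclasses import dataclass


@dataclass
class AppState:
    screen: str        # current screen name
    elements: list     # visible, tappable element labels


def plan_next_action(state, objective):
    """Stand-in for the AI planner: pick an action that advances the objective."""
    for el in state.elements:
        if objective.lower() in el.lower():
            return el  # the target element is visible; tap it
    # Target not on this screen (the UI may have changed): explore navigation instead.
    nav = [el for el in state.elements if el.startswith("nav:")]
    return nav[0] if nav else None


def run_test(app, objective, max_steps=10):
    """Drive the app toward the objective; pass if any path reaches it."""
    for _ in range(max_steps):
        if app.objective_met():
            return True
        action = plan_next_action(app.observe(), objective)
        if action is None:
            return False  # no way forward; the objective is unreachable
        app.tap(action)
    return app.objective_met()


class FakeApp:
    """Two UI variants of one app: the Checkout button moved between releases."""
    def __init__(self, variant):
        self.screens = {
            "old": {"home": ["Checkout", "nav:settings"]},
            "new": {"home": ["nav:menu"], "menu": ["Checkout"]},
        }[variant]
        self.screen, self.done = "home", False

    def observe(self):
        return AppState(self.screen, self.screens[self.screen])

    def objective_met(self):
        return self.done

    def tap(self, element):
        if element.startswith("nav:") and element[4:] in self.screens:
            self.screen = element[4:]       # navigate to another screen
        elif "checkout" in element.lower():
            self.done = True                # objective reached
```

The same test objective ("Checkout") passes against both UI variants: against the old layout it taps the button directly, and against the new layout it first navigates through the menu, which is the behavior the article attributes to the Perforce agent.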
The overall goal is to reduce the level of maintenance currently required to build and run tests, which typically break each time an application is updated, he noted. In fact, it's now possible to use an AI agent to create a test for an application before any of its code is ever written, said Jackson.
Finally, that approach also makes it possible to automate tests that previously could only be performed manually, he added.
Mitch Ashley, vice president and practice lead for software lifecycle engineering at Futurum Group, said Perforce is charting a new course in AI for software testing and quality. Rather than acting as a peer programmer or tester, Perfecto AI works from the underlying requirements, such as user stories, to analyze and determine testing strategies. That approach allows it to choose the best testing tool or technique and then create the tests for that platform, he added.
In theory, AI agents should make it simpler for DevOps teams to run more tests, resulting in higher-quality applications being built and deployed. However, it's not clear how quickly DevOps teams are embracing AI. A recent Futurum Group survey, for example, finds 41% of respondents expect that generative AI tools and platforms will be used to generate, review and test code. The challenge right now is determining how much to rely on AI agents to autonomously create and test code, versus requiring a DevOps engineer to review output created by AI models that, depending on how well they are trained, may hallucinate.