Zencoder today made available a public beta of an artificial intelligence (AI) agent that extends its platform for writing code into the realm of application testing.
Company CEO Andrew Filev said that in addition to making it simple for application developers to test their own code, ZenTester will augment DevOps teams by making it possible to validate faster that software works as intended.
ZenTester sees and interacts with an application the same way an end user does, clicking buttons and filling out forms, to validate the state of the user interface and the backend responses without relying on any scripting framework.
In the same way that Zencoder integrates its portfolio of AI agents with software engineering workflows, ZenTester can be embedded into an integrated development environment (IDE) or into a DevOps workflow using a provided command line interface (CLI) tool.
The overall goal is to speed the pace of testing in an era when most organizations don’t have the resources to test every update and component, said Filev.
In addition, test results are provided nearly instantaneously. Today, developers often wait days for feedback from application testers, by which time they have usually lost some of the context they had when they originally wrote the code being tested. ZenTester is designed to surface issues in minutes.
Finally, integration with AI coding tools will also make it easier for any AI agent that writes code to improve as it learns from ZenTester.
Zencoder already makes available a platform for building customizable agents using a mix of open-source and proprietary large language models (LLMs) that can be shared via an open-source marketplace. At the core of those AI agents is a platform capable of analyzing interdependencies, generating documentation and suggesting improvements across service boundaries, which Zencoder is now using to build agents trained to automate software engineering tasks.
The company also plans to expand the capabilities of these agents to interoperate with one another using the Model Context Protocol (MCP) developed by Anthropic, in addition to providing analytics and additional administration tools.
Longer term, Zencoder is working toward adding more advanced reasoning capabilities for its AI agents, including providing different classes of AI agents that can be invoked based on the level of complexity of the task, said Filev.
It’s not clear how many organizations are relying on AI to help build applications faster, and most enterprise IT organizations have yet to integrate AI tooling into their software engineering workflows. However, a recent Futurum Research survey finds 41% of respondents expect generative AI tools and platforms will be used to generate, review and test code, while 39% plan to make use of AI models based on machine learning algorithms.
Eventually, the pace at which code is created using AI tools will overwhelm existing DevOps pipelines and workflows, so the need for AI agents to automate more tasks is becoming self-evident. In fact, there is already no shortage of AI tools and platforms for writing and testing code. Unfortunately, many organizations still lack a cohesive strategy, so individual developers are naturally experimenting with different tools and approaches on their own. The challenge now is deciding which of these tools and platforms best suits the needs of a DevOps team trying to operationalize them at scale.