ContextQA is integrating the IBM watsonx.ai platform for building, accessing and deploying artificial intelligence (AI) models into its low-code/no-code automation platform for testing the front end of applications.
ContextQA had initially relied on Amazon Web Services (AWS) but decided to switch to IBM to reduce costs.
ContextQA CEO Deep Barot said in addition to providing better support, IBM is working more closely with ContextQA to expand the number of use cases where AI can be applied.
Ultimately, the goal is not so much to increase the number of tests being run as it is to make it simpler for DevOps teams to run the right test at the right time, he added. Eventually, AI models should be able to run 80% of the most common tests, leaving DevOps teams more time to run additional use case tests that previously might never have been run, noted Barot.
In theory, AI models should improve the overall quality of the applications being deployed. The issue, however, is not only making sure that more applications are thoroughly tested but also making certain tests are conducted in the right order. In effect, DevOps teams need to be able to leverage different classes of AI models to orchestrate smarter testing processes within the context of a larger DevOps workflow.
The more complex the application being built and deployed, the more critical that orchestration capability becomes, said Barot. That’s especially important because as more tests are conducted using AI models, the overall cost of testing will tend to increase, he added. In addition, advances in AI should make it simpler to build and deploy more applications than ever, all of which should be thoroughly tested before being deployed in a production environment.
It’s not clear how long DevOps teams will remain interested in which specific AI models are being employed to perform a given task, but for now, most IT teams are keenly interested in knowing which LLMs they might be exposing sensitive data to as security and compliance issues arise, noted Barot.
Identify Opportunities for Automation Enabled by AI Now
It may be a while before AI is pervasively applied across application testing, but it’s now more a question of when rather than if. Historically, any time a DevOps team fell behind schedule, there was a natural tendency to reduce the amount of time allocated to testing. The trouble is that, over time, that reduction in testing results in an overall reduction in software quality, in an era when end users are less tolerant of bugs and flaws.
Hopefully, one day soon, the number of issues that DevOps teams need to regularly troubleshoot will decline as the quality of the software being deployed improves. In the meantime, DevOps teams should evaluate existing DevOps workflows to identify opportunities for automation enabled by AI. It’s not so much that AI is going to replace the need for DevOps professionals any time soon as it is a matter of determining which tasks to offload to machines in a way that enables DevOps teams to build and deploy applications at unprecedented scale.