IBM has extended the generative artificial intelligence (AI) code-writing tools it provides to the Ansible IT automation framework.
Ruchir Puri, chief scientist for IBM Research, said watsonx Code Assistant for Red Hat Ansible Lightspeed provides a natural language interface that makes the domain-specific language used to create the Ansible playbooks that automate IT workflows much more accessible. That approach will, in effect, help democratize DevOps best practices by lowering the skills bar required to embrace Ansible, he added. In addition to making it simpler to employ Ansible more widely within enterprise IT organizations, it will also make it possible for smaller organizations with limited programming expertise to embrace DevOps, Puri noted.
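To make the idea concrete, a natural-language prompt such as "Install nginx and make sure it is running" might be turned into a playbook task along the following lines. This is an illustrative sketch, not actual Lightspeed output; the host group name and specifics are assumptions:

```yaml
---
# Illustrative example of the kind of playbook a natural-language
# prompt could yield ("Install nginx and make sure it is running").
- name: Install and start nginx
  hosts: webservers   # assumed inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Writing YAML like this by hand requires knowing Ansible's module names and parameters; generating it from a plain-English description is precisely the skills bar the tool aims to lower.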
IBM has been working with its Red Hat arm on Project Ansible Lightspeed, an effort to apply generative AI to IT automation, since 2022. With the addition of watsonx Code Assistant for Red Hat Ansible Lightspeed, there is now a generative AI tool for writing code that IBM trained using its Granite large language model (LLM), which is based on a decoder architecture capable of predicting what code comes next in a sequence.
The difference between the IBM approach and other copilot tools, Puri noted, is that IBM trains its LLMs on curated code to minimize the hallucinations that arise when general-purpose LLMs trained on conflicting data are used to generate code.
IBM is now similarly committed to applying watsonx Code Assistant to other domain-specific languages as part of a larger effort to reduce the cognitive load required to build and continuously modernize software regardless of what programming language was used to construct it. IBM last summer also previewed a tool for converting COBOL code into Java code that can run on a mainframe.
There is no doubt that generative AI will have a profound impact on how software is developed. The next major challenge will be to converge DevOps workflows with the machine learning operations (MLOps) workflows that data scientists and engineers employ to build AI models, said Puri. The goal is to streamline the deployment of AI models that will soon be embedded in almost every application, he added.
Organizations will also need to learn how to deploy and manage some type of vector database to customize an existing LLM by presenting their own unstructured data in a format that an LLM can recognize. The LLM then uses that external data alongside the data it was originally trained on to generate better-informed responses and suggestions. Organizations can then go a step further by using a framework to build and deploy an AI application. Some organizations may even go so far as to build their own LLM to ensure the highest level of accuracy.
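The retrieval step at the heart of that pattern can be sketched with a toy in-memory vector store. The hand-made vectors, store class, and document texts below are all illustrative assumptions; a real system would use an embedding model to produce the vectors and an actual vector database to index them, then pass the retrieved text to the LLM:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class ToyVectorStore:
    """Minimal in-memory stand-in for a vector database."""

    def __init__(self):
        self._entries = []  # list of (vector, text) pairs

    def add(self, vector, text):
        self._entries.append((vector, text))

    def query(self, vector, top_k=1):
        """Return the top_k stored texts most similar to the query vector."""
        ranked = sorted(
            self._entries,
            key=lambda entry: cosine_similarity(entry[0], vector),
            reverse=True,
        )
        return [text for _, text in ranked[:top_k]]

# In practice these vectors come from an embedding model; they are
# hand-made here purely so the example is self-contained.
store = ToyVectorStore()
store.add([0.9, 0.1, 0.0], "Our VPN requires multi-factor authentication.")
store.add([0.1, 0.9, 0.0], "Deploys run every Tuesday at 02:00 UTC.")

# Retrieve the document closest to an authentication-related query, then
# prepend it as context to the prompt (the LLM call itself is omitted).
context = store.query([0.8, 0.2, 0.0], top_k=1)
prompt = f"Context: {context[0]}\nQuestion: How do users connect to the VPN?"
print(prompt)
```

The point of the sketch is the division of labor: the vector store supplies the organization's own unstructured data as context, and the LLM combines that context with its training data when generating a response.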
The number of organizations that have the data scientists, data engineers, application developers and cybersecurity experts required to build and deploy generative AI applications is still fairly limited. But as it becomes possible to employ natural language to write code, it is only a matter of time before it becomes much simpler to converge tasks that previously each required mastery of a domain-specific programming language.