Tabnine has extended its alliance with Google Cloud to advance the adoption of generative artificial intelligence (AI) to automate the writing and testing of code.
The generative AI platform provider has already developed its own large language model that is hosted on Google Cloud. Tabnine is now also committing to leveraging the large language model that Google is developing to automate software development life cycle processes.
Brandon Jung, vice president of ecosystems for Tabnine, said it’s apparent that application development and deployment teams will be making use of multiple large language models across SDLC workflows. Not all of those models will come from a single platform provider, he noted.
Rather than being dogmatic about large language models, Tabnine plans to make it possible to invoke multiple models via application programming interfaces (APIs) that are integrated with development environments, he added.
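That multi-model approach can be sketched as a thin routing layer that exposes one interface over several backends. The sketch below is hypothetical (the class names, model identifiers and stub backends are all illustrative assumptions, not Tabnine's or Google's actual APIs); it only shows the shape of the idea.

```python
# A minimal, hypothetical sketch of dispatching a completion request to
# any of several LLM backends behind one uniform interface. Real
# adapters would wrap vendor SDKs or HTTP APIs; here they are stubs.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Completion:
    model: str
    text: str


# Each backend is just a callable mapping a prompt to completed text.
Backend = Callable[[str], str]


class ModelRouter:
    def __init__(self) -> None:
        self._backends: Dict[str, Backend] = {}

    def register(self, name: str, backend: Backend) -> None:
        self._backends[name] = backend

    def complete(self, prompt: str, model: str) -> Completion:
        if model not in self._backends:
            raise KeyError(f"unknown model: {model}")
        return Completion(model=model, text=self._backends[model](prompt))


# Stub backends standing in for differently hosted models.
router = ModelRouter()
router.register("vendor-hosted", lambda p: f"[vendor] {p} ...")
router.register("cloud-hosted", lambda p: f"[cloud] {p} ...")

result = router.complete("def add(a, b):", model="cloud-hosted")
```

An IDE plugin built this way can switch models per request without the editor integration changing at all, which is the practical benefit of not binding the workflow to one provider.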
The Tabnine platform supports multiple programming languages, including Python, Java and JavaScript, and plugs into integrated development environments (IDEs) such as Visual Studio Code and the JetBrains family. The company also previously integrated its code completion tool with the GitLab continuous integration/continuous delivery (CI/CD) platform. The overall goal is to make it easier for developers to automatically generate code using custom models trained on approved source code hosted in a secure private repository.
Generative AI relies on a large language model that assesses the probability of the next line of code based on what has preceded it. It’s not likely DevOps teams will be replaced as generative AI is extended across workflows, but the overall size of those teams might shrink as it becomes possible to do more with fewer people. At the same time, the barrier to DevOps adoption will also fall as AI platforms make it simpler for more organizations to embrace DevOps best practices.
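The core idea of predicting what comes next from what preceded it can be illustrated with a deliberately tiny stand-in for a language model: bigram counts over a toy code snippet. This is an assumption-free teaching sketch, not how production LLMs (which use neural networks over huge corpora) are built.

```python
# Toy illustration of next-token probability: estimate how likely each
# token is to follow a given token, using bigram counts over a tiny
# "corpus" instead of a real neural language model.
from collections import Counter, defaultdict

corpus = "for i in range ( n ) : print ( i )".split()

# Count how often each token follows each preceding token.
bigrams: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1


def next_token_probs(prev: str) -> dict:
    """Return P(next token | previous token) from the counts."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}


# In this corpus, "(" is followed once by "n" and once by "i",
# so each gets probability 0.5.
probs = next_token_probs("(")
```

A real model does the same thing in spirit, conditioning on far longer context and emitting a probability distribution over an entire vocabulary at each step.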
The challenge will be making sure the data collected to train AI models is of a high enough quality to ensure the desired outcome. Organizations will need to find a way to apply generative AI to data models that have been validated by DevOps teams.
One way or another, it’s only a matter of time before generative AI capabilities are applied more broadly. Platforms such as OpenAI’s ChatGPT are only the tip of an iceberg that impacts almost every manual process, including software development and deployment. The issue will be determining how quickly those innovations will become practical enough to employ.
In the meantime, it is already apparent generative AI platforms are having a significant impact on the rate at which code can be developed. Inevitably, that means the amount of code moving through DevOps pipelines at any one time should increase significantly. DevOps teams should expect generative AI technologies to be applied both before and after application code is built and deployed, noted Jung.
For now, the biggest issue is, arguably, keeping up with a rate of generative AI innovation that is outpacing many organizations’ ability to absorb it.