Tabnine today announced it has extended its generative artificial intelligence (AI) platform, which enables developers to write code faster, to include the ability to automatically generate tests for that code.
Available in beta, the capability promises to make it easier for developers to test code as it is being written, which should allow them to address issues without losing context.
Brandon Jung, vice president of ecosystems and business development at Tabnine, said this latest addition is part of a larger effort to extend the reach of the company's generative AI platform across the entire software development life cycle. The testing capability is expected to become generally available in the third quarter, and Tabnine also expects to employ generative AI to automate a much wider range of tasks across the software development life cycle within the next 12 months.
The Tabnine platform supports multiple programming languages, including Python, Java and JavaScript, and is designed to integrate with integrated development environments (IDEs) such as Visual Studio Code and JetBrains. The company has also previously integrated its code completion tool with the GitLab continuous integration/continuous delivery (CI/CD) platform. The overall goal is to make it easier for developers to automatically write code based on custom models built using approved source code hosted in a secure private repository, said Jung.
Generative AI relies on a large language model that assesses the probability of the next line of code based on what has preceded it. Tabnine is now extending the model it created to also generate tests for the code its platform helped a developer write. It's not clear what impact this ability to automate test creation will have on the need for dedicated testing teams, but it is about to become considerably easier to shift more responsibility for testing left toward developers.
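To illustrate the idea, the sketch below shows the kind of unit test an AI assistant might suggest alongside a developer's code. The function and the generated test are purely hypothetical examples for illustration, not actual Tabnine output.

```python
# Hypothetical example: a developer-written function and the sort of
# unit test an AI code assistant might generate for it. Illustrative
# only; not actual Tabnine output.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscount(unittest.TestCase):
    # A generated test suite would typically cover the happy path,
    # boundary values and the error case.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_and_full_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)


if __name__ == "__main__":
    unittest.main()
```

Generating this kind of scaffolding at the moment the code is written is what lets developers catch issues while the context is still fresh.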
It’s not likely DevOps teams will be replaced as generative AI is extended across workflows, but the overall size of those teams might shrink, noted Jung. DevOps teams will soon be able to do far more with fewer people, he said. At the same time, the barrier to DevOps adoption will also fall as AI platforms make it simpler for more organizations to embrace DevOps best practices, he added.
The challenge will be making sure the data collected to train AI models is of a high enough quality to ensure the desired outcome, noted Jung.
One way or another, it’s now only a matter of time before generative AI capabilities are applied more broadly. Platforms such as OpenAI’s ChatGPT are only the tip of an iceberg that will affect almost every manual process, including software development and deployment. The issue will be determining how quickly those innovations become practical to employ. In the meantime, it is already apparent that generative AI platforms are having a significant impact on the rate at which code can be developed. Inevitably, that means the amount of code moving through DevOps pipelines at any one time is about to increase significantly.