At the SpringOne event at VMware Explore this week, VMware demonstrated a tool that uses abstractions to make it simpler to build Java applications that invoke multiple artificial intelligence (AI) components with minimal changes to code.
The open source Spring AI project aims, for example, to make it simpler to swap out AI models via an AiClient interface that can invoke either OpenAI or Azure OpenAI services. It also adds capabilities that promise to make it simpler for developers to query documentation using generative AI enabled by large language models (LLMs). A further goal is to extend those capabilities to other Spring projects, including Spring Integration, Spring Batch and Spring Data.
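To make the abstraction idea concrete, here is a minimal, self-contained sketch of the pattern described above. The interface and the two provider classes are simplified stand-ins written for illustration, not the actual Spring AI API; in Spring AI itself, the concrete client would be supplied by auto-configuration rather than instantiated by hand.

```java
// Hypothetical, simplified stand-in for an AiClient-style abstraction:
// application code depends only on the interface, so the underlying
// model provider (OpenAI, Azure OpenAI, ...) can be swapped without
// touching the calling code.
interface AiClient {
    String generate(String prompt);
}

// Two interchangeable provider stubs (illustration only).
class OpenAiClient implements AiClient {
    public String generate(String prompt) {
        return "[openai] response to: " + prompt;
    }
}

class AzureOpenAiClient implements AiClient {
    public String generate(String prompt) {
        return "[azure] response to: " + prompt;
    }
}

public class SwapDemo {
    // Application logic codes against the interface, not a vendor SDK.
    static String summarize(AiClient client, String text) {
        return client.generate("Summarize: " + text);
    }

    public static void main(String[] args) {
        // Swapping the model is a one-line change here; in Spring it
        // would typically be a configuration change instead.
        AiClient client = new OpenAiClient();
        System.out.println(summarize(client, "Spring AI announcement"));

        client = new AzureOpenAiClient();
        System.out.println(summarize(client, "Spring AI announcement"));
    }
}
```

The design point is the same one Spring has long applied to databases and message brokers: program to an interface, and let configuration pick the implementation.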
In addition, the Spring command line interface (CLI) has been extended to support Spring AI, and the Spring Boot framework now helps set up the required dependencies and classes more easily.
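In a Spring Boot application, that setup typically reduces to a starter dependency plus a few externalized properties. The property name below is an assumption based on Spring AI's conventions and may differ across versions; consult the project's documentation for the current coordinates.

```properties
# Hypothetical application.properties sketch -- the property name is an
# assumption and may vary by Spring AI version. The API key is read from
# the environment rather than hard-coded.
spring.ai.openai.api-key=${OPENAI_API_KEY}
```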
The AI capabilities are among several additions to the widely used framework for building Java applications, addressing everything from automatically shutting down idle instances of Spring running in a container, also known as scaling to zero, to using virtual threads to call external services in a way that reduces memory consumption.
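The virtual-thread point can be sketched with plain Java 21 and no Spring at all (this is an illustrative example, not VMware's code): each blocking call to an external service occupies a cheap virtual thread instead of a scarce, memory-hungry platform thread, so many concurrent calls can be in flight at once.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class VirtualThreadDemo {
    // Stand-in for a blocking call to an external service.
    static String callService(int id) {
        try {
            Thread.sleep(50); // simulate network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "response-" + id;
    }

    public static void main(String[] args) throws Exception {
        // One virtual thread per task (Java 21): thousands of these
        // are cheap, because a blocked virtual thread does not pin a
        // full OS thread and its stack memory.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = IntStream.range(0, 100)
                    .mapToObj(i -> executor.submit(() -> callService(i)))
                    .toList();
            for (Future<String> f : futures) {
                System.out.println(f.get());
            }
        } // try-with-resources waits for the tasks to finish
    }
}
```

With platform threads, 100 concurrent blocking calls would tie up 100 OS threads; with virtual threads, the same code blocks cheaply, which is the memory saving the article refers to.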
Those additional capabilities promise to make it more efficient to run Java applications in, for example, a cloud computing environment.
Betty Junod, vice president of product marketing for modern applications and the Management Business Group at VMware, said the generative AI capabilities should make Spring more accessible by reducing how much developers have to click around to find answers to their questions.
In general, VMware is betting that a larger percentage of the Java applications built using Spring will be deployed on the Tanzu platform the company has built on top of Kubernetes. That platform streamlines the application development process in a way that reduces the cognitive load on developers by eliminating the need to know how to work with YAML files or use Docker Compose to create containers.
Despite the proliferation of alternatives, Java remains one of the most widely used programming languages for building applications. The cost of learning additional programming languages is high, so most developers prefer to continue using Java to build both monolithic and emerging cloud-native applications.
In the meantime, AI is already widely employed to help developers write code faster. AI isn't going to replace the need for developers, but it will increase the pace at which applications are developed. As such, DevOps teams will soon see a significant increase in the volume of code moving through their pipelines. The hope is that AI will also be applied in ways that enable those teams to manage increasingly large codebases.
In fact, the way applications are built and deployed will change drastically now that the AI genie is out of the proverbial bottle. But it remains to be seen how quickly those advances will change the way DevOps is managed at unprecedented scale.