At its VMware Explore 2023 conference today, VMware previewed what will become a suite of intelligent assistants that use generative artificial intelligence (AI) to automate the management of a wide range of IT tasks.
Krish Prasad, senior vice president and general manager for cloud infrastructure at VMware, said VMware will eventually build out assistants for individual platforms such as VMware vSphere as well as for VMware Cloud Foundation (VCF), the integrated platform through which the company aggregates those offerings.
VMware is creating those assistants, also known as copilots, using multiple large language models (LLMs) trained on code written by its site reliability engineers (SREs), said Prasad. That approach ensures that both the recommendations made and the code generated are of the highest quality, he added.
That approach will enable IT teams to identify configuration drift that might be adversely impacting performance or creating a cybersecurity risk, noted Prasad. The first of those assistants is being made available in beta for VMware Tanzu, a platform based on Kubernetes. VMware is also building an assistant for its NSX virtual networking software that analyzes security events to generate policies and rules that can be applied to firewalls in real time as threats are identified.
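At its simplest, configuration drift detection means diffing a system's actual settings against a desired-state baseline. The sketch below is purely illustrative — the setting names and baseline are hypothetical and do not reflect any real VMware API — but it shows the core idea behind what such an assistant would flag.

```python
# Hypothetical sketch of configuration drift detection: compare a host's
# actual settings against a desired-state baseline and report mismatches.
# Setting names and values are illustrative only.

DESIRED_BASELINE = {
    "ssh_enabled": False,        # SSH should stay off on production hosts
    "ntp_server": "pool.ntp.org",
    "log_level": "info",
}

def find_drift(actual: dict) -> dict:
    """Return settings whose actual value differs from the desired baseline."""
    return {
        key: {"desired": want, "actual": actual.get(key)}
        for key, want in DESIRED_BASELINE.items()
        if actual.get(key) != want
    }

drift = find_drift(
    {"ssh_enabled": True, "ntp_server": "pool.ntp.org", "log_level": "info"}
)
print(drift)  # {'ssh_enabled': {'desired': False, 'actual': True}}
```

An AI assistant layered on top of this kind of check would go a step further, explaining the risk of each drifted setting and proposing remediation code for the IT team to review.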
It will then be up to each IT team to decide whether to implement those recommendations. Over time, it’s clear that the management of IT infrastructure is going to become increasingly automated as confidence in those recommendations increases, he added. In fact, the ultimate goal is to fulfill the promise of ‘lights-out’ IT management, said Prasad.
In effect, advances such as these will lead to the democratization of DevOps workflows as it becomes simpler to manage IT infrastructure at scale. IT organizations will be able to invest more resources in building and deploying applications rather than hiring specialists to manage IT infrastructure, noted Prasad.
In fact, the pace at which applications are built and deployed should dramatically increase in the months ahead, Prasad predicted.
The rise of generative AI creates a unique opportunity for IT teams to become more relevant than ever to their organizations. It will soon be possible to build and deploy a wide range of distributed applications capable of processing and analyzing data at the point where it is created and consumed.
While generative AI has captured the popular imagination, it's becoming apparent that multiple forms of AI will soon be used together to automate DevOps workflows at unprecedented scale.
Machine learning algorithms, of course, are already used by many organizations to automate everything from analyzing log data to identifying security issues to surfacing patterns in results from regression, functional and unit testing tools. With the rise of generative AI, it is now possible to automatically generate code and create summaries of reports that multiple stakeholders can more easily understand. Generative AI doesn't replace those existing machine learning use cases; rather, it applies a form of deep learning to create different types of AI models.
As a result, DevOps workflows will soon invoke multiple classes of AI models, enabling IT teams to automate processes at unprecedented scale.