JFrog CEO Shlomi Ben Haim told attendees of the company’s swampUP 2024 conference that unless application developers adapt, their jobs are indeed at risk because of the rise of generative artificial intelligence (AI).
Multiple surveys of CIOs and business leaders clearly show that generative AI is the top investment priority for most organizations, said Ben Haim. The amount of code generated with the aid of generative AI will only increase as business and IT leaders look to spur innovation, he added.
Additionally, DevSecOps teams are now being asked to support data scientists who are building and deploying AI models in production environments alongside other classes of artifacts. Most of those models are deployed using the same DevOps and DevSecOps workflows already in use, noted Ben Haim.
As such, the integration of DevOps and machine learning operations (MLOps) practices is inevitable, he added.
JFrog has been investing in MLOps for more than a year now and most recently acquired Qwak AI in addition to integrating its DevSecOps platform with the NIM microservices provided by NVIDIA. Data scientists are simply another persona that needs to be incorporated into the DevSecOps team, noted Ben Haim.
At the same time, DevSecOps teams should be moving to consolidate their tools and platforms to both reduce costs and optimize workflows, he added. There is simply no way a DevOps team can master, for example, 20 different tools and platforms, noted Ben Haim. An integrated DevSecOps platform that includes, for example, extensions to source code managed in GitHub repositories makes DevSecOps teams much more productive by providing them with a single source of truth, said Ben Haim.
It’s not clear at what pace organizations are melding DevOps and MLOps workflows, but as AI models are increasingly infused into applications, they will soon become just another type of software artifact that needs to be deployed. While many of the first wave of AI models might have initially been deployed by data scientists working with data engineers, there simply isn’t enough of that expertise available to deploy AI models in production environments at scale.
Regardless of how AI models are deployed, the one certainty is that there will soon be a lot more of them. Organizations have spent the better part of 2024 determining how best to operationalize generative AI, with an eye toward deploying these models in production environments in the months ahead. In addition to using retrieval-augmented generation (RAG) techniques to further customize AI models, many of those models will over time need to be replaced by models that have been trained using more current data.
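The core idea behind the RAG techniques mentioned above can be illustrated with a minimal sketch: retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers from current data rather than from stale training data. All function names here are hypothetical, and simple keyword overlap stands in for the embedding search a production vector store would perform.

```python
# Minimal RAG sketch (illustrative only): rank documents by keyword
# overlap with the query, then assemble a context-augmented prompt.
# A real pipeline would use embeddings and a vector database instead.

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "jfrog acquired qwak to extend its mlops capabilities",
    "rag grounds model answers in retrieved documents",
    "devsecops teams manage many classes of software artifacts",
]
print(build_prompt("what did jfrog acquire", docs))
```

The point of the sketch is the division of labor: the retrieval step can be refreshed with current data at any time, which is exactly why RAG postpones, but does not eliminate, the need to retrain models.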
The challenge, as always, will be defining the workflows needed to automate as much of the process as possible, in an era when the amount of software deployed in the next few years is expected to far exceed all the software deployed in the past decade.