JFrog and NVIDIA today announced they have expanded the integrations between their software development platforms to include the NVIDIA Enterprise AI Factory, a set of frameworks and blueprints for building artificial intelligence (AI) applications.
As a result, software artifacts created using the NVIDIA Enterprise AI Factory can now be housed in the JFrog Software Supply Chain Platform. The JFrog ML platform, meanwhile, provides a registry for managing the lifecycle of building and deploying AI models, enabling versioning, provenance tracking, model promotion and enforcement of security policies.
Additionally, JFrog metadata and promotion workflows ensure that immutable AI artifacts can only move between stages when all quality, security, and legal checks have passed. AI models can also be cached, continuously monitored and updated to align with regulations, and critical patches and model updates can be managed centrally in the same way.
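To illustrate the gating model described above, here is a minimal Python sketch of a stage-promotion check; the artifact fields, gate functions and stage names are hypothetical stand-ins, not JFrog's actual promotion API.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: the artifact record itself is immutable
class ModelArtifact:
    name: str
    version: str
    stage: str  # e.g. "dev" -> "staging" -> "production"

# Hypothetical gate checks; in practice these would call scanners,
# license auditors and evaluation suites.
def quality_check(a: ModelArtifact) -> bool:
    return True  # placeholder: e.g. evaluation metrics above threshold

def security_check(a: ModelArtifact) -> bool:
    return True  # placeholder: e.g. no CVEs or malicious payloads found

def legal_check(a: ModelArtifact) -> bool:
    return True  # placeholder: e.g. licenses cleared for redistribution

def promote(a: ModelArtifact, next_stage: str) -> ModelArtifact:
    """Return a new record in next_stage only if every gate passes."""
    if not (quality_check(a) and security_check(a) and legal_check(a)):
        raise PermissionError(f"{a.name}:{a.version} failed a promotion gate")
    return replace(a, stage=next_stage)  # original record is never mutated

model = ModelArtifact("fraud-detector", "1.4.2", "staging")
model = promote(model, "production")
```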
Finally, the JFrog ML platform enables application development teams to enforce role-based access control (RBAC), ensuring that access to AI artifacts is traceable for compliance purposes.
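A minimal sketch of what RBAC with a compliance-oriented audit trail can look like, assuming invented roles and permissions rather than JFrog's actual access model:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Hypothetical role-to-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "ml-engineer": {"read", "upload"},
    "release-manager": {"read", "upload", "promote"},
    "auditor": {"read"},
}

def authorize(user: str, role: str, action: str, artifact: str) -> bool:
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    # Every decision is logged, so access to AI artifacts stays traceable.
    logging.info("user=%s role=%s action=%s artifact=%s allowed=%s",
                 user, role, action, artifact, allowed)
    return allowed

authorize("dana", "auditor", "promote", "fraud-detector:1.4.2")  # denied, and logged
```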
Collectively, those capabilities ensure all AI software is signed, validated and approved before deployment.
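As a rough illustration of that sign-and-verify flow, the following sketch uses Ed25519 keys from the Python cryptography package; a production pipeline would rely on the platform's own signing service and managed keys rather than keys generated in-process.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

model_bytes = b"...serialized model weights..."  # stand-in for the artifact
signature = private_key.sign(model_bytes)        # done at publish time

# At deploy time, verification must succeed before the model is approved.
try:
    public_key.verify(signature, model_bytes)
    print("signature valid: model approved for deployment")
except InvalidSignature:
    print("signature invalid: deployment blocked")
```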
Kristian Taernhed, senior technical alliance manager at JFrog, said the overall goal is to provide a repository, deployable in an on-premises IT environment, that gives DevSecOps teams a single platform for managing all their software artifacts, including AI agents and models. That capability, for example, enables organizations to leverage scanning tools to prevent malicious AI models from being inadvertently incorporated into an application, he added.
Locking down the software supply chain used for building AI applications is crucial not just for organizations in highly regulated industries but also for those operating in countries that have developed policies governing how AI applications are developed, an approach also known as Sovereign AI, he added.
Specifically, Sovereign AI mandates typically require that AI data remain within specified geographic or organizational boundaries and that strict controls be enforced across the AI software supply chain.
It’s not clear to what degree machine learning operations (MLOps) workflows used to build AI models are being integrated into DevOps workflows, but for software engineering purposes, those models are simply another type of artifact that needs to be integrated into a build. Going forward, just about every application being built is going to have some AI capability, so the number of AI models that will need to be incorporated into DevOps workflows will only continue to increase.
The bigger challenge may be melding the cultures of the data science teams that typically build AI models with the DevOps and platform engineering teams responsible for building and deploying software.
In the meantime, there will be no shortage of platforms for building AI models that need to be integrated into DevSecOps workflows. Some organizations may be able to standardize on a few, but given the pace of innovation, the number of platforms used to build AI models continues to increase. Most data science teams, much like their DevOps counterparts, prefer to retain control over the tools and platforms they use, so enforcing a platform standard across every data science team is likely to prove challenging in the short term.
The good news, of course, is that with each integration, the expertise gained should make the next project that much easier.