JFrog at its annual swampUP conference unfurled a DevOps platform, dubbed JFrog Fly, designed to enable application developers to more easily integrate artificial intelligence (AI) agents into workflows at scale.
Via the Model Context Protocol (MCP) developed by Anthropic, JFrog Fly is designed to integrate with AI coding tools and platforms such as Cursor, GitHub Copilot and Claude Code. The goal is to enable DevOps teams to centrally manage how software components are stored, shared and served, using semantic metadata to optimize release deployments in a way that is also integrated with package managers and GitHub repositories.
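For a rough sense of what MCP-based integration looks like from a developer's side, the sketch below uses Anthropic's open source MCP Python SDK to connect to an MCP server and list the tools it exposes. The `jfrog-mcp-server` command, its arguments and the instance URL are hypothetical placeholders for illustration, not JFrog's documented interface.

```python
# Illustrative sketch using Anthropic's MCP Python SDK (pip install mcp).
# The server command, arguments and URL below are hypothetical placeholders,
# not JFrog's documented interface.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def list_registry_tools() -> None:
    # Hypothetical local MCP server fronting an artifact registry.
    server = StdioServerParameters(
        command="jfrog-mcp-server",
        args=["--url", "https://example.jfrog.io"],
    )
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Ask the server which tools it exposes to AI coding assistants.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")


if __name__ == "__main__":
    asyncio.run(list_registry_tools())
```

The same discovery step is what editors such as Cursor or Claude Code perform when an MCP server is registered in their configuration, which is why a single server can surface registry operations to multiple AI assistants at once.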
Additionally, JFrog revealed it has developed a set of AI agents that automate software vulnerability remediation by leveraging analytics to apply policies as application developers write code, and launched a JFrog AppTrust platform that provides a single source of truth for governance, risk management and compliance (GRC) teams. JFrog also launched an Evidence Ecosystem made up of JFrog AppTrust partners, including GitHub, ServiceNow, SonarQube, Akuity, Akto, Coguard, Dagger, Nightvision, Shipyard and Troj.ai.
Finally, JFrog announced it has revamped its catalog for AI models around a Secure Model Registry that applies governance policies and tracks costs across multiple models, now including NVIDIA Nemotron models, whether they run in the cloud or in any on-premises IT environment, with single-click deployment. In addition to enabling teams to search and explore models by tags, projects and use cases via detailed model cards and metadata, the registry makes use of JFrog Xray code analysis tools to discover vulnerabilities.
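JFrog has not detailed the new registry's query interface, but Artifactory's existing property-based search via Artifactory Query Language (AQL) gives a sense of how tag-driven model discovery could look. In the sketch below, the repository name, property key and instance URL are illustrative assumptions rather than part of the Secure Model Registry's actual API.

```python
# Rough sketch: tag-based artifact discovery via Artifactory Query Language (AQL).
# The repository name ("ml-models") and property key ("model.tag") are illustrative
# assumptions and are not tied to JFrog's new Secure Model Registry specifically.
import requests

ARTIFACTORY_URL = "https://example.jfrog.io/artifactory"  # placeholder instance
API_TOKEN = "..."  # supply a real access token

# Find artifacts in a model repository carrying a given tag property.
query = (
    'items.find({"repo": "ml-models", "@model.tag": "nemotron"})'
    '.include("name", "path", "created")'
)

response = requests.post(
    f"{ARTIFACTORY_URL}/api/search/aql",
    data=query,
    headers={
        "Content-Type": "text/plain",
        "Authorization": f"Bearer {API_TOKEN}",
    },
    timeout=30,
)
response.raise_for_status()

for item in response.json().get("results", []):
    print(f'{item["path"]}/{item["name"]} (created {item["created"]})')
```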
Eyal Dyment, vice president of products for JFrog, said JFrog Fly will provide DevOps teams with a platform that makes it simpler to reliably govern code created by either humans or AI agents across the entire software development lifecycle (SDLC).
It's not clear to what degree AI agents will be embedded within DevOps workflows, but it's probable that each software engineer will train multiple agents to autonomously perform a range of tasks that they will still need to verify and validate once completed. In time, it's also likely that AI agents trained to automate specific tasks, such as running application tests, will become a shared resource rather than requiring each engineer to create their own AI agent to test applications.
As these workflows evolve, DevOps teams will also need a range of security and GRC capabilities to manage application development projects, noted Dyment. The overall goal is to improve both the quality and security of applications even as the rate at which they are developed and deployed continues to scale exponentially, he added.
Eventually, the entire workflow for building, securing and governing applications in the AI era will become much more unified than it is today, said Dyment.
In the meantime, however, it's already apparent that most DevOps workflows will not scale to let teams dramatically increase the number of applications being simultaneously deployed and updated without first revisiting the tools and platforms that were used to construct their pipelines in the first place.