CloudBees today began providing early access to a Model Context Protocol (MCP) server for its Unify platform for centrally managing DevOps workflows across multiple types of continuous integration/continuous delivery (CI/CD) platforms.
Originally developed by Anthropic, MCP has emerged as a de facto standard for integrating artificial intelligence agents with each other and various data sources.
Shawn Ahmed, chief product officer for CloudBees, said MCP will enable CloudBees to extend the reach of its Unify platform to AI agents that are becoming incorporated into DevOps workflows. The CloudBees MCP Server provides a lightweight, standardized interface between large language models (LLMs), such as OpenAI’s GPT and Anthropic’s Claude, and enterprise DevOps tools and processes, he added.
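MCP is built on JSON-RPC 2.0, so an AI agent invoking a capability on an MCP server exchanges small, standardized messages rather than calling a vendor-specific API. The sketch below shows the general shape of a `tools/call` round trip; the tool name and arguments (`get_pipeline_status`, `pipeline_id`) are hypothetical placeholders, not tools the CloudBees MCP Server is confirmed to expose.

```python
import json

# An MCP client (the AI agent side) sends a JSON-RPC 2.0 "tools/call"
# request. The tool name and arguments below are illustrative assumptions,
# not documented CloudBees MCP Server tools.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_pipeline_status",
        "arguments": {"pipeline_id": "checkout-service-main"},
    },
}
wire_msg = json.dumps(request)  # serialized message sent to the server

# A conforming MCP server replies with a result whose content is a list
# of typed blocks (text, images, etc.); the agent parses it back out.
response = json.loads(
    '{"jsonrpc": "2.0", "id": 1, "result": {"content": '
    '[{"type": "text", "text": "status: SUCCESS"}]}}'
)
status_text = response["result"]["content"][0]["text"]
print(status_text)  # status: SUCCESS
```

Because every MCP server speaks this same message shape, an LLM that can emit one `tools/call` request can, in principle, drive any tool the server advertises.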
The overall goal is to make it simpler to orchestrate and automate multi-step tasks spanning multiple AI agents by making pipeline, test and security data, along with associated metadata such as metrics, available to those agents via the CloudBees Unify platform, noted Ahmed.
Launched last month, CloudBees Unify is a software-as-a-service (SaaS) platform for centralizing the management of multiple DevOps environments, including GitHub, GitLab and the open source Jenkins CI/CD platform. It provides a control plane through which DevOps and platform engineering teams can more easily govern and manage multiple CI/CD platforms and the application development environments built on them.
In effect, CloudBees Unify provides an operating layer on top of any existing toolchain that makes it possible to surface analytics in real time, run tests and apply policies that require, for example, running code scans, and to enforce compliance requirements. That capability can now also be extended to AI agents to ensure governance policies are followed as tasks are automated, noted Ahmed.
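One way to picture that governance layer: a control plane sitting between AI agents and CI/CD tools can vet each requested action against policy before it reaches the underlying system. The sketch below is a minimal illustration under assumed rules; the policy structure, tool names and `authorize` function are invented for this example and are not CloudBees APIs.

```python
# Hypothetical governance policy: which tools an agent may invoke, and
# which of those require a passing code scan before they run. These
# names are illustrative assumptions, not CloudBees functionality.
POLICY = {
    "allowed_tools": {"get_pipeline_status", "run_tests", "deploy"},
    "requires_scan": {"deploy"},
}

def authorize(tool: str, scan_passed: bool) -> bool:
    """Return True if an agent's requested tool call satisfies policy."""
    if tool not in POLICY["allowed_tools"]:
        return False  # tool not on the allow list at all
    if tool in POLICY["requires_scan"] and not scan_passed:
        return False  # gated action attempted without a passing scan
    return True

print(authorize("run_tests", scan_passed=False))  # True
print(authorize("deploy", scan_passed=False))     # False
print(authorize("deploy", scan_passed=True))      # True
```

The point of centralizing such checks in a control plane is that the same policy applies whether a task was initiated by an engineer or by one of potentially hundreds of AI agents.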
It’s not clear how widely AI agents are being deployed across DevOps workflows, but a Futurum Group survey found 41% of respondents expect generative AI tools and platforms to be used to generate, review and test code. The question now is not so much whether AI agents will be used as to what degree, and how reliably the tasks assigned to them will be completed. DevOps teams will then need to determine how potentially hundreds of AI agents might be employed alongside engineers who, in addition to continuing to perform some tasks themselves, may need to manage a small army of AI agents.
Ultimately, each DevOps team will need to determine its level of comfort with AI, but one thing is certain: there is no shortage of manual DevOps tasks to be automated. AI agents are unlikely to replace the need for software engineers, but the nature of the role will undoubtedly change, especially as AI agents increasingly eliminate the need to write custom scripts.
In the meantime, however, DevOps teams would be well-advised to revisit how their existing pipelines are constructed. After all, the more brittle those pipelines are, the more likely it becomes that AI agents will stress them to the point of breaking.