New Relic this week added support for the Model Context Protocol (MCP) to its observability platform to surface insights into artificial intelligence (AI) agents and applications.
Originally developed by Anthropic, MCP is rapidly becoming a de facto application programming interface (API) for enabling interoperability between AI agents and other sources of data.
New Relic has now incorporated MCP support into its application performance monitoring platform, enabling it to observe AI agents and applications alongside the legacy applications it already monitors.
Previously, New Relic had added the ability to observe large language models (LLMs); via MCP, it can now extend that capability to the AI agents and applications that access those LLMs. That makes it possible to visualize usage patterns across the entire lifecycle of an MCP request, including invoked tools, call sequences and execution durations.
Additionally, DevOps teams can identify which prompts are being invoked and which choices an AI agent made, along with surfacing latency, errors and other MCP performance issues.
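The kind of lifecycle data described above can be illustrated with a minimal sketch. This is not New Relic's implementation or API; it is a hypothetical tracer showing the sort of per-tool spans (tool name, call order, duration) that MCP request observability captures:

```python
import time
from dataclasses import dataclass, field

@dataclass
class ToolSpan:
    """One instrumented MCP tool invocation: which tool ran and for how long."""
    tool: str
    duration_ms: float

@dataclass
class MCPTrace:
    """Collects spans in call order across the lifecycle of one MCP request."""
    spans: list = field(default_factory=list)

    def record(self, tool: str, fn, *args, **kwargs):
        # Time the tool call and append a span even if the tool raises.
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            self.spans.append(ToolSpan(tool, elapsed_ms))

# Hypothetical usage: two tool calls within a single MCP request.
trace = MCPTrace()
result = trace.record("search_docs", lambda q: q.upper(), "mcp lifecycle")
trace.record("summarize", lambda text: text[:3], result)
print([s.tool for s in trace.spans])  # call sequence, in order
```

A real agent would export these spans to an observability backend rather than keep them in memory; the point is only that tool names, sequence and durations are what get surfaced.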
New Relic AI Monitoring MCP support is now available in version 10.13.0 of its Python agent, with support for additional languages coming soon.
In general, New Relic reports it has seen steady 30% quarter-over-quarter growth in usage of its AI monitoring tools over the past year. The company has also seen a 92% increase in the number of unique AI models being used. However, the ChatGPT models provided by OpenAI accounted for 86% of all the LLM tokens being generated.
Nic Benders, chief technical strategist for New Relic, said that level of activity suggests that OpenAI is well on its way to becoming a de facto standard, even as the number of AI models being made available continues to rapidly increase.
It’s not immediately clear why one LLM might be preferred over another, but OpenAI, in addition to making a steady stream of investments, has successfully generated a significant amount of brand recognition. That makes OpenAI one of the first platforms most organizations will look to when building an AI application, said Benders.
However, there are use cases, such as coding, that for the moment at least lend themselves better to other models, he added.
Less clear is the degree to which the various AI models that are available will gain enough traction to warrant the ongoing level of investment required to sustain them. IT teams should carefully consider what foundations they want to build and deploy AI applications on for the long term, noted Benders.
Additionally, not every AI application requires the latest, and most expensive, LLM, said Benders. The cost of previous generations of AI models has declined considerably as more advanced models have become available. Many applications, however, may not require access to a more advanced model, especially when a previous generation will enable a developer to automate a task at a much lower cost.
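The cost argument above can be made concrete with a back-of-the-envelope comparison. The per-token prices below are hypothetical placeholders, not real vendor rates; the sketch only shows why routing a routine automation task to an older, cheaper model can cut cost by an order of magnitude:

```python
# Illustrative only: model names and prices are hypothetical, not actual vendor rates.
PRICE_PER_1K_TOKENS = {
    "frontier-model": 0.030,  # assumed latest-generation price per 1K tokens
    "previous-gen":   0.002,  # assumed older-generation price per 1K tokens
}

def job_cost(model: str, tokens: int) -> float:
    """Dollar cost of processing `tokens` tokens on `model`."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS[model]

tokens = 2_000_000  # a routine automation workload
print(job_cost("frontier-model", tokens))  # 60.0
print(job_cost("previous-gen", tokens))    # 4.0
```

For a task a previous-generation model handles adequately, the same workload runs at a fraction of the cost, which is the trade-off Benders describes.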
Ultimately, observability platforms will play a critical role in enabling organizations to track which AI models are being used for which use cases. The challenge now is understanding not only which ones are gaining traction among application developers, but just as importantly, the symbiotic relationship that exists between AI models and the IT infrastructure upon which they depend.