Centreon this week made generally available an open source agent for monitoring IT and operational technology (OT) environments. The agent incorporates the open source OpenTelemetry agent for instrumenting applications, a project being advanced under the auspices of the Cloud Native Computing Foundation (CNCF).
Raphaël Chauvel, chief product officer at Centreon, said the Centreon Monitoring Agent now makes it possible to also collect log data using OpenTelemetry, with the next step being the ability to collect traces. Overall, the goal is to provide a unified platform for monitoring and observability that IT teams can flexibly apply as needed, he added.
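Centreon has not published the agent's configuration format here, but the kind of OpenTelemetry-based log collection described above can be sketched as a generic OpenTelemetry Collector-style pipeline; the endpoints and exporter target below are illustrative assumptions, not Centreon's actual configuration:

```yaml
# Illustrative OpenTelemetry Collector-style log pipeline.
# Endpoints and the exporter target are assumptions for illustration only.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # applications send OTLP log records here
processors:
  batch: {}                      # batch records to reduce export overhead
exporters:
  otlphttp:
    endpoint: https://monitoring.example.com:4318  # hypothetical backend
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

Adding trace collection later would, in this model, amount to declaring a parallel `traces` pipeline over the same receiver.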
Most IT teams will need to continuously monitor a set of pre-defined metrics, but they are also increasingly embracing observability platforms to gain access to analytics tools that make it simpler to troubleshoot applications. Rather than having to acquire, deploy and manage two separate platforms, Centreon is making a case for an integrated platform that, in addition to lowering total costs, also provides plug-ins for managing digital experiences, said Chauvel.
That’s especially critical as IT environments become increasingly complex to manage and troubleshoot, he added.
At the core of the Centreon platform is a set of open source components for collecting data, transferring and normalizing it, and then accessing it via a user interface. That platform can be accessed as a software-as-a-service (SaaS) application or deployed in an on-premises IT environment to meet any sovereign cloud requirements. The company claims its open source community now includes more than 250,000 IT professionals.
It’s not clear to what degree IT teams have embraced observability, but as they increasingly unify the management of applications and IT infrastructure, the need becomes more apparent. The primary adoption barriers have been the cost of observability platforms and the development of the skills required to analyze the data being collected. Fortunately, with the rise of artificial intelligence (AI), the level of expertise required to derive value from observability platforms continues to decline.
Regardless of approach, the overall stress level that IT teams experience every time there is an issue or outright outage should lessen as the amount of time required to investigate a problem is reduced. Historically, IT teams have spent weeks investigating issues that, once discovered, take only a few minutes to fix.
In the meantime, IT administrators, in collaboration with DevOps engineers, should be working toward defining a unified approach to observability and monitoring. Observability, of course, has always been a core tenet of best DevOps practices, but the ability to achieve it has been uneven at best, largely because the cost of instrumenting applications and collecting the required telemetry data has been higher than many IT organizations could afford. However, with the rise of open source tools such as OpenTelemetry, it is becoming more affordable to instrument a larger swath of the application portfolio. The challenge, as always, is determining which applications should be instrumented first based on total cost and their criticality to the organization.