Cribl this week added support for multiple additional platforms to its cloud service for collecting and routing telemetry data from DevOps tools and platforms.
The latest edition of Cribl Stream adds support for Microsoft Azure and improves its ability to search data stored in a Snowflake data lake.
At the same time, Cribl can now also collect data via the Distribution application programming interface (API) made available by Datadog, as well as from the observability platform that ServiceNow gained when it acquired Lightstep.
Finally, Cribl Edge has been updated with the ability to monitor the health status of nodes and fleets of data sources, while Cribl Lake has added a Hybrid Worker Group capability that makes it easier for teams to write, replay, and mix and match data sources.
Vlad Melnik, vice president of business development and alliances for Cribl, said the latest update to Cribl Stream makes it simpler to configure the platform to route data to the Microsoft Azure cloud. Previously, the platform only provided integrations with the Amazon Web Services (AWS) cloud.
Of course, some DevOps teams have already integrated Cribl Stream with the Microsoft Azure cloud, but now that capability is built into the platform in a way that is managed by Cribl, said Melnik. Because Azure Event Hubs is already widely used to collect log data from across Azure services, Cribl is now making it easier to collect and normalize telemetry data from anywhere, noted Melnik.
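As an illustration only, and not part of Cribl's announcement, the sketch below shows how an application might publish log events to an Azure Event Hub using Microsoft's Python SDK. The connection string and hub name are placeholders; a routing tool such as Cribl Stream would then pick up those events downstream.

```python
# Illustrative sketch: sending application log events to an Azure Event Hub,
# where a telemetry pipeline could later read and route them.
from azure.eventhub import EventHubProducerClient, EventData

CONNECTION_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;..."  # placeholder
EVENT_HUB_NAME = "app-logs"  # placeholder

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)

with producer:
    # Batch a couple of JSON-formatted log lines and send them to the hub.
    batch = producer.create_batch()
    batch.add(EventData('{"level": "info", "msg": "checkout completed"}'))
    batch.add(EventData('{"level": "error", "msg": "payment gateway timeout"}'))
    producer.send_batch(batch)
```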
DevOps teams are collecting more telemetry data than ever, in part thanks to the rise of OpenTelemetry, open-source agent software that reduces the total cost of collecting telemetry data generated by multiple platforms and applications. The challenge is streamlining the collection of that telemetry data so it can be normalized regardless of how it was generated. That’s especially critical when, for example, DevSecOps teams are trying to identify the root cause of a potential cybersecurity incident, noted Melnik.
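For context, the minimal OpenTelemetry sketch below, again illustrative rather than drawn from Cribl's announcement, shows how an application emits a trace span over the vendor-neutral OTLP protocol that downstream collectors and routing tools can ingest. The endpoint and service name are placeholders.

```python
# Illustrative sketch: emitting a trace span with the OpenTelemetry Python SDK
# and exporting it over OTLP to whatever collector or pipeline sits downstream.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # placeholder service name
with tracer.start_as_current_span("process-order"):
    pass  # application work happens here; the span is exported when the block exits
```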
Aggregating all that data is also taking on added urgency in the age of generative artificial intelligence (AI). As DevOps teams look to operationalize these platforms, they need to be able to expose AI models to telemetry data collected from across highly distributed computing environments.
It’s not clear how DevOps teams are being expanded to add data management expertise. In some cases, they may add data engineering expertise, but in most instances, existing DevOps engineers are likely to extend their skills into the realm of data management.
One way or another, DevOps teams will need to find ways to streamline the management of telemetry data that continues to grow in volume. Each DevOps team will ultimately need to determine how long it needs to store all that data, but as always, the more regulated an industry is, the more onerous the telemetry data storage requirements become.
Ultimately, any effort to truly optimize application performance starts with telemetry data. The issue is not so much finding that data but rather routing it all to a place where there are tools capable of making sense of it all.