Cribl today announced it has updated the Cribl LogStream observability tool in addition to generally making available a software-as-a-service (SaaS) offering dubbed LogStream Cloud.
Nick Heudecker, senior director for market strategy and competitive intelligence at Cribl, said Cribl LogStream provides the means to aggregate log data collected from multiple platforms and applications in a format any observability or monitoring tool can access.
The challenge with observability platforms today is that they leave it up to IT organizations to manually copy data into the platform. LogStream automates that process via a log router that collects and optimizes data streams from existing installed agent software and then shapes and routes that data on to analytics platforms.
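The collect-shape-route pattern described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Cribl's actual API; the field names, schema, and route table are all invented for the example.

```python
import json

# Illustrative only: a minimal log router that normalizes raw agent events
# into a common schema, then routes each event to a destination by source.

def shape(raw_event):
    """Normalize an agent's event into a common schema, dropping noisy fields."""
    event = json.loads(raw_event)
    return {
        "timestamp": event.get("ts") or event.get("time"),
        "source": event.get("source", "unknown"),
        "message": event.get("msg") or event.get("message", ""),
    }

# Hypothetical route table: which analytics destination gets which source.
ROUTES = {
    "web": "analytics-a",
    "app": "analytics-b",
}

def route(event):
    """Pick a destination based on the event's source; default to cheap storage."""
    return ROUTES.get(event["source"], "cheap-storage")

raw = '{"ts": "2021-06-01T12:00:00Z", "source": "web", "msg": "GET /index"}'
shaped = shape(raw)
print(route(shaped))  # -> analytics-a
```

The point of the pattern is that agents keep emitting whatever they already emit; only the router's shaping and routing rules need to change when a new destination is added.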
Version 3.0 adds LogStream Packs, a framework for accelerating deployments of the log router. Alternatively, DevOps teams can employ LogStream Cloud to build and share LogStream configuration models.
Heudecker said the goal is to reduce the cost of achieving observability by reducing the time and effort required to set up and manage data pipelines. No customization effort on the part of a DevOps team is required to collect and send data from hundreds of sources to any number of destinations, noted Heudecker.
While observability is a core DevOps tenet, achieving it has been a major challenge. In addition to deploying and managing agent software to instrument IT environments, there’s a significant amount of time and data engineering effort needed to set up the pipelines employed to transfer data to a platform where it can be analyzed. Cribl is making a case for automating those data engineering tasks using a tier of software that abstracts data into a format that can be consumed by multiple observability and monitoring platforms.
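The abstraction tier described above can be thought of as a fan-out layer: one normalized event is translated into whatever format each downstream platform expects. Again, a hypothetical sketch, with invented adapter names and target schemas rather than any real platform's ingest format.

```python
# Illustrative only: translate one normalized event into per-platform payloads,
# so each observability or monitoring tool can consume the same stream.

def to_platform_a(event):
    # invented target schema for a hypothetical destination
    return {"time": event["timestamp"], "event": event["message"]}

def to_platform_b(event):
    # a second hypothetical destination with different field names
    return {"@timestamp": event["timestamp"], "message": event["message"]}

ADAPTERS = {"platform_a": to_platform_a, "platform_b": to_platform_b}

def fan_out(event):
    """Produce a payload for every registered destination from one event."""
    return {dest: adapt(event) for dest, adapt in ADAPTERS.items()}

normalized = {"timestamp": "2021-06-01T12:00:00Z", "message": "service restarted"}
payloads = fan_out(normalized)
```

Under this arrangement, the data engineering work of reformatting per destination lives in the adapters, not in per-team, hand-built pipelines.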
It’s not clear yet to what degree IT organizations perceive observability to be a distinct discipline or simply the next logical evolution of monitoring. One way or another, IT teams need far more context than they have historically been able to attain using legacy monitoring tools. Each platform employed within an enterprise typically comes with its own monitoring tools, used by the IT teams tasked with managing that platform. Whenever there is an issue, however, IT teams can spend hours correlating data from multiple tools to ascertain the root cause of a problem.
Observability platforms promise to reduce that time and effort by applying analytics to data collected from multiple platforms. The challenge now is finding a way to efficiently collect all that data.
It may be a while before IT organizations start to abandon all the monitoring tools they have in place in favor of observability platforms. However, as IT environments continue to become more complex (thanks, in part, to the rise of cloud-native computing platforms based on microservices), the need for those platforms becomes more acute. The issue is enabling DevOps teams to achieve that goal without having to wait for an overworked data engineer to find the time to build a pipeline.