Chronosphere today added additional capabilities to its log management platform that enable DevOps teams to both reduce costs and surface more actionable insights.
Alok Bhide, head of product innovation for Chronosphere, said Chronosphere Logs 2.0 provides DevOps teams with more granular control over how log data is collected and stored. The overall goal is to make it simpler for DevOps teams to not only leverage analytics to identify which log data is actually being used but also more easily correlate it with metrics, events and traces, he added.
Additionally, Chronosphere has added a Logs Quotas tool that enables DevOps teams to enforce budget restrictions on individual teams all the way down to specific datasets.
Most DevOps teams are already being overwhelmed by the amount of telemetry data being collected, an issue that will only become more challenging to manage as more cloud-native and artificial intelligence (AI) applications are deployed in production environments.
There is, of course, no shortage of observability platforms, but as IT environments become more complex, the need for tools that go well beyond simple monitoring of pre-defined metrics to troubleshoot applications has become far more pressing. The challenge is that observability platforms collect data at a scale where, from a cost perspective, the volume quickly adds up. As a result, managing telemetry data has become a significant challenge. Unfortunately, most DevOps teams will not be able to hire someone to optimally manage the flow of telemetry data. Instead, that needs to be a core capability of the observability platform itself.
Less clear in the longer term is to what degree observability will become a task assigned to an AI agent rather than a DevOps engineer. There is no doubt DevOps engineers will need to review the findings of an AI agent that has investigated an incident, but much of the tedious effort required to sort through metrics, events, logs and traces should become increasingly automated. Each DevOps engineering team will then need to decide how comfortable it is allowing another set of AI agents to automatically remediate issues as they are uncovered, noted Bhide.
In the meantime, organizations have never been more dependent on software than they are today, but even as the number of applications being deployed increases, the size of the DevOps teams tasked with maintaining them remains relatively stagnant. In fact, in an AI era where more applications are expected to be deployed in the next few years than in the entire last decade, supporting all those applications will require an ability to investigate and, if necessary, remediate anomalies within minutes of their discovery. Legacy monitoring tools were never intended to provide forensic insights in near real time.
Of course, convincing business leaders to invest in observability platforms can be a challenge. DevOps teams need to be certain that the cost of storing massive amounts of telemetry data does not outweigh the benefits. After all, no matter how many issues an observability platform prevents, someone on the finance team will still be closely monitoring the cost of a platform that only continues to grow over time.