New Relic this week made available a live archives capability that provides instant access to historical logs for up to seven years and eliminates the need to rehydrate, reload, re-index or move data.
At the same time, the company also announced its observability platform is now integrated with infrastructure and observability tools provided by Sentry Software to help IT teams reduce costs and collect additional sustainability metrics alongside logs, metrics and traces.
Jemiah Sius, senior director of developer relations for New Relic, said the company is broadly moving to make it simple to analyze and store telemetry data regardless of when it was collected. For example, rather than requiring historical data to be accessed via a cold storage service in the cloud, the New Relic platform now makes that data available directly within the database at the core of its observability platform.
That approach also provides the added benefit of not incurring ingress and egress fees for moving data in and out of cloud storage services, noted Sius.
IT teams can simply create a New Relic Query Language (NRQL) query to identify which logs to route to a live archive in as little as 30 seconds, without having to add an additional log collector.
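As a rough illustration of what such a query might look like, the sketch below uses NRQL's standard `SELECT ... FROM Log` form to identify a subset of logs. The attribute name (`logtype`) and value (`'audit'`) are hypothetical examples, not taken from New Relic's live archives documentation, and the actual routing of matched logs to an archive is configured in the platform rather than in the query itself:

```
-- Hypothetical NRQL query selecting audit logs from the last 90 days;
-- attribute names and values are illustrative only
SELECT * FROM Log
WHERE logtype = 'audit'
SINCE 90 days ago
```

In practice, a filter like this lets teams scope which log partitions are worth retaining for long-term compliance rather than archiving everything indiscriminately.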
The overall goal is to make it simpler to investigate incidents faster, given the urgency created whenever there is an outage or audit request that needs to be immediately addressed, noted Sius. That’s critical because DevOps teams are always pressed for time, he added.
In general, New Relic is reducing the overall toil that many DevOps teams encounter today when managing telemetry data. Rather than being confronted with a complex set of data engineering tasks, telemetry data from multiple applications and infrastructure platforms can automatically be collected, analyzed and stored.
It’s still early days as far as the adoption of observability platforms is concerned, but there is already no shortage of options. The challenge is that in addition to logs, metrics and traces, IT teams are now starting to also track cost and energy consumption metrics. Observability platforms provide the means to centralize telemetry data that is only going to increase in volume as more applications are deployed. The challenge has been framing the queries needed to surface meaningful insights. Fortunately, with the rise of generative artificial intelligence (AI) tools that make it simpler to create queries, the overall level of skill required to invoke an observability platform is dropping. In effect, generative AI is making it possible to democratize DevOps best practices.
Ultimately, existing DevOps teams should be able to manage highly distributed application environments without having to hire and retain a small army of additional software engineers. The issue is that, in many cases, the size of the overall code base is increasing rapidly as developers take advantage of various generative AI tools to help them write code faster. Hopefully, advances in AI will arrive soon enough to enable DevOps teams to cope with all that code before they are inevitably overwhelmed.