At the FutureStack 2017 conference this week, New Relic previewed an implementation of distributed tracing software that has the potential to close the war rooms where IT operations teams assign blame for application failures.
New Relic CEO Lew Cirne says the Distributed Tracing capability will enable developers and IT operations teams to pinpoint the precise source of any given performance issue. Rather than wasting hours on root cause analysis, Cirne says, DevOps teams will now be armed with a dashboard they can navigate to visually inspect which elements of a distributed computing environment are affecting their applications.
The capability is based on OpenTracing, an open-source specification being advanced by the Cloud Native Computing Foundation (CNCF). Cirne says New Relic opted to embrace an open initiative that should lead to more actionable intelligence being derived from the data New Relic collects.
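The core idea behind OpenTracing-style distributed tracing is that each unit of work is recorded as a span carrying a shared trace ID and a pointer to its parent span, so a single request can be stitched back together across services. A minimal, self-contained sketch of that propagation model follows; the class and function names here are illustrative, not New Relic's product API or the actual OpenTracing interface:

```python
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    """One timed unit of work; shares trace_id with its ancestors."""
    operation: str
    trace_id: str
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    parent_id: Optional[str] = None

def start_trace(operation: str) -> Span:
    """Root span: mints a fresh trace_id for the whole request."""
    return Span(operation, trace_id=uuid.uuid4().hex)

def start_child(parent: Span, operation: str) -> Span:
    """Child span: inherits the trace_id and records its parent."""
    return Span(operation, trace_id=parent.trace_id, parent_id=parent.span_id)

# A web request fanning out to two downstream services:
root = start_trace("GET /checkout")
db = start_child(root, "postgres.query")
cache = start_child(root, "redis.get")

# Every span in the request shares one trace_id, which is what lets
# a tracing backend reconstruct the full request path and point at
# the exact component where latency was introduced.
assert db.trace_id == cache.trace_id == root.trace_id
assert db.parent_id == root.span_id
```

In a real deployment the trace and span IDs are carried between services in request headers, which is the mechanism that lets a dashboard show precisely which hop in a distributed call chain is slow.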
At this juncture, Cirne contends that deploying an application in production without instrumentation borders on the criminal. He concedes, however, that the preponderance of applications deployed in production environments today are not instrumented, largely because instrumenting legacy applications has been cost-prohibitive. But as IT monitoring software becomes less expensive to invoke using agents that communicate back to a software-as-a-service (SaaS) application, most new applications being deployed in the cloud are now being instrumented.
To help further that goal, New Relic this week also announced support for the .NET Core 2.0 framework recently made available on Microsoft Azure, and added support for six additional Amazon Web Services (AWS) services, bringing the total number of AWS services New Relic supports to 25.
At the conference Cirne also showed how IT organizations can now invoke New Relic application programming interfaces (APIs) to spin up a dashboard to track specific events associated with, for example, the rollout of a new application.
Access to the data New Relic surfaces, notes Cirne, in turn helps fuel adoption of DevOps because organizations gain confidence in their ability to more rapidly build and deploy applications. That confidence will only increase as IT organizations avail themselves of artificial intelligence (AI) software in the form of machine learning and, soon, deep learning algorithms to make more reliable decisions based on patterns in their IT environments. New Relic, he adds, is able to fuel those algorithms because the service collects 1.5 billion metrics and events every minute. That said, humans will still be required to manage the overall IT environment.
Cirne maintains that the best way to collect all that data is to employ agents, noting that New Relic is adapting those agents to make them lightweight enough to deploy even on container-based microservices.
New Relic naturally is not the only provider of IT monitoring tools investing heavily in advanced algorithms. In the not-too-distant future, it’s probable that algorithms deployed by different vendors will share insights with one another. In the meantime, IT administrators can take some comfort in the fact that it should soon become easier to manage IT at scale.