New Relic today published a report, based on anonymized data collected by its observability platform, that showed a 35% increase in the volume of logging data.
The report also identified logs generated by NGINX proxy software (38%) as being the most common type of log, followed by Syslog (25%) and Amazon Load Balancer (20%).
In addition, Fluent Bit is the most commonly used open source processor and forwarder tool (38%) among IT teams that have adopted the New Relic platform. Another 16% use the New Relic infrastructure agent, followed by 14% that send log data directly to New Relic via an HTTP endpoint.
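For teams in that last group, sending a log directly over HTTP is a simple POST of JSON records. The sketch below shows what that can look like in Python, assuming New Relic's publicly documented Logs API endpoint (log-api.newrelic.com/log/v1) and its Api-Key header; the service name, hostname and attribute values are hypothetical, and the network call only runs if a license key is actually configured.

```python
import json
import os
import urllib.request

# New Relic's documented US-region Logs API endpoint
# (EU-region accounts use a different hostname).
LOG_API_URL = "https://log-api.newrelic.com/log/v1"

def build_log_payload(message, attributes=None):
    """Build a single log record in the JSON shape the Logs API accepts."""
    record = {"message": message}
    if attributes:
        record["attributes"] = attributes
    return record

def send_log(record, api_key):
    """POST one log record; the Api-Key header carries the license key."""
    data = json.dumps(record).encode("utf-8")
    req = urllib.request.Request(
        LOG_API_URL,
        data=data,
        headers={"Content-Type": "application/json", "Api-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

record = build_log_payload(
    "cache miss rate above threshold",
    {"service": "checkout", "hostname": "web-01"},  # hypothetical attributes
)
print(json.dumps(record))

# Only attempt the network call when a key is actually configured.
api_key = os.environ.get("NEW_RELIC_LICENSE_KEY")
if api_key:
    send_log(record, api_key)
```

In practice, a forwarder such as Fluent Bit batches and retries these requests, which is why agents and forwarders remain the more common choice than raw HTTP calls.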
The report also found that 50% of all logs ingested by language agents come from Java applications, followed by .NET (26%), Ruby (22%) and Node.js (2%).
Jemiah Sius, director of developer relations at New Relic, said the volume of log data collected is only going to increase as more cloud-native applications are deployed and open source OpenTelemetry tools are more widely adopted. In addition, IT teams will also be collecting metrics, traces and events to better pinpoint the root cause of issues using observability platforms such as New Relic One, he added.
Amazon Web Services (AWS) is now, of course, one of the biggest sources of log data. The New Relic report noted that the serverless AWS Lambda service is the most widely used source of log data among AWS customers on the New Relic One platform. However, use of the Amazon Kinesis Data Firehose service for extracting, transforming and loading (ETL) data has grown sharply in the last year, noted Sius.
The New Relic report found that while only 32% of New Relic accounts that use AWS have adopted Firehose, adoption has grown 62% year-over-year.
Log data is usually the first thing any IT team consults when there is an issue, but as the volume of log data grows, it is becoming more difficult to associate that data with specific IT events. The question each IT team will have to come to terms with is how much log data to store alongside metrics, traces and other data formats to help proactively identify the root cause of an IT issue.
Observability platforms, in the meantime, promise to make it possible to query that log data to identify relationships between events. The goal is to enable IT teams to investigate issues before they can disrupt IT services rather than merely reacting to events after they have unfolded, said Sius.
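One simple form of such a query is pulling every log entry that landed within a short window around an incident. The minimal, platform-agnostic sketch below illustrates the idea in Python; it is not New Relic's query engine, and the timestamps and messages are hypothetical. In a real observability platform this filter would run as a query over indexed log storage rather than an in-memory list.

```python
from datetime import datetime, timedelta

def logs_near_event(logs, event_time, window_seconds=60):
    """Return log entries whose timestamp falls within +/- window of the event.

    `logs` is a list of (datetime, message) tuples, sorted or not;
    entries outside the window are dropped.
    """
    window = timedelta(seconds=window_seconds)
    return [(ts, msg) for ts, msg in logs if abs(ts - event_time) <= window]

# Hypothetical log stream and incident time for illustration.
logs = [
    (datetime(2022, 9, 1, 12, 0, 5), "db connection pool exhausted"),
    (datetime(2022, 9, 1, 12, 0, 30), "request timeout on /checkout"),
    (datetime(2022, 9, 1, 14, 0, 0), "nightly cache warm complete"),
]
incident = datetime(2022, 9, 1, 12, 0, 0)

for ts, msg in logs_near_event(logs, incident):
    print(ts.isoformat(), msg)
# The two 12:00-hour entries fall inside the 60-second window;
# the 14:00 entry does not.
```

Widening or narrowing the window is the basic trade-off: too narrow and the triggering log line is missed, too wide and unrelated noise drowns out the signal.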
The hope, of course, is that machine learning algorithms will soon automatically identify issues before most IT teams even know there is a problem. Given the overall complexity of modern IT environments, it’s unlikely most IT teams even know what queries to launch to identify the root cause of an issue. Instead, as machine learning algorithms become more familiar with what constitutes normal IT events, it will become simpler for those algorithms to surface the issues that are actually worth IT’s attention.