Splunk Adds Data Management and AI Tools to Observability Portfolio

Splunk, a Cisco company, this week at its .conf24 conference announced a suite of data management tools that make it easier to share and process data between Splunk Cloud Platform and Splunk Observability Cloud.

In addition, Splunk unveiled an AI Assistant in Security that makes use of generative artificial intelligence (GenAI) to streamline incident investigations. A natural language interface provides summaries and recommendations and can generate queries in the Splunk Search Processing Language (SPL).
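For illustration only, here is a hypothetical SPL search of the kind such an assistant might generate from a prompt like "which source IPs produced the most blocked firewall events in the last 24 hours?" The index, sourcetype and field names below are assumptions, not taken from Splunk's announcement:

index=security sourcetype=firewall action=blocked earliest=-24h
| stats count AS blocked_events BY src_ip
| sort - blocked_events
| head 10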

The data management tools added to the Splunk portfolio promise to make it possible for IT organizations to preprocess data via a single pipeline, providing a consistent level of visibility across multiple DevOps, security and IT service management (ITSM) workflows. Specifically, Splunk has added Pipeline Builders, based on its SPL2 data search and processing language, to filter, mask, transform and enrich data, along with an Ingest Processor service that converts logs to metrics and routes them to Splunk Observability Cloud, Splunk Cloud Platform or the Amazon S3 service. There is also now an Edge Processor that IT teams can alternatively deploy themselves to gain more control over how data is processed and routed.
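As a minimal sketch, assuming the from-$source/into-$destination pipeline form Splunk documents for SPL2, a pipeline that filters and masks events before routing them might look something like this; the sourcetype, field name and masking pattern are illustrative assumptions:

// Keep only syslog events, mask SSN-like patterns, then route onward
$pipeline = | from $source
| where sourcetype == "syslog"
| eval message = replace(message, "\\d{3}-\\d{2}-\\d{4}", "XXX-XX-XXXX")
| into $destination;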

Finally, Splunk committed to previewing a Federated Analytics service this summer to analyze data residing in Splunk and external data lakes, starting with Amazon Security Lake.

Tom Casey, senior vice president and general manager for product and technology at Splunk, said there is a direct correlation between the improved visibility and observability being delivered in collaboration with Cisco and reduced downtime. A study published today by Splunk, in collaboration with Oxford Economics, finds that 74% of 1,400 technology executives surveyed experienced delayed time-to-market, while 64% experienced stagnant developer productivity, as a result of downtime. A total of 44% of downtime incidents stemmed from application or infrastructure issues such as software failures, compared to 56% resulting from security incidents such as phishing attacks. In addition, 41% admitted customers are often or always the first to detect downtime.

Overall, the report estimates downtime costs Global 2000 companies $400 billion annually, with average per-company costs including lost revenue of $49 million, regulatory fines of $22 million per year, ransomware and extortion payouts of $19 million annually and service level agreement (SLA) penalties of $16 million. It can take 75 days for a Global 2000 company to recover that revenue, the report noted.

The report also finds that the top 10% of organizations, which are more resilient than the others in the study, have on average spent $12 million more on cybersecurity tools and $2.4 million more on observability tools. These resilience leaders' mean time to recover (MTTR) from application or infrastructure-related downtime is 28% faster than that of other organizations, and 23% faster for cybersecurity-related incidents. Resilience leaders also lessen revenue losses by $17 million, reducing the financial impact of regulatory fines by $10 million and ransomware payouts by $7 million.

Probability of Incidents May Increase

It’s not clear what impact artificial intelligence (AI) may have on lowering the costs of IT incidents. However, the one certain thing in the months and years ahead is that as the pace at which applications are deployed increases, so does the probability of incidents. While advanced AI should make it simpler for IT organizations to successfully enable more applications at scale, the overall application environment will become that much more complex as the number of dependencies between applications and services continues to grow.

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
