A decade ago, if you were to ask developers what the biggest challenges in commercial banking were, they would likely cite the difficulty of accessing and collating high-quality data on the day-to-day operations of banking institutions. Today, that’s completely changed.
Or it has to some extent, at least. Developers working in the banking industry certainly have access to more data than ever before. But this has created challenges of its own. With so much data being collected, stored and at least partially processed, some developers have started to talk of a “data deluge”—an unstoppable torrent that overwhelms the developer’s ability to process and use it.
In this article, we’ll try to make sense of this deluge. We’ll also explore what all this data is being used for and what it means for the future of the banking industry.
Promises and Challenges
Before we get too deeply into the specifics, however, let’s take a moment to note that the data deluge is not entirely new and that it will be with us for quite some time to come.
In fact, most guides to technology in the banking sector note that many of our current issues are caused by the sheer amount of data being collected. It has become acutely apparent that our data-harvesting capabilities have grown far more rapidly than our ability to ensure accuracy or to label data in line with legally mandated compliance frameworks.
And, of course, this is only going to get worse. No one knows yet how the global COVID-19 pandemic and ensuing economic downturn will affect the global payments market, but it was previously predicted to reach $2 trillion by the end of 2025, with a compound annual growth rate of 7.83%. Even realizing a portion of that growth will create real challenges for developers in the banking sector.
Using the Data Deluge
With all that said, however, let’s also recognize that this is a ‘good problem.’ It’s not necessarily a bad thing that we have access to so much data—the issue is making sure that it’s accurate, safe and useful.
And when it comes to doing that, three opportunities are already apparent. Here they are.
1. Increased Velocity
Many of the new approaches to leveraging data in the banking sector are focused on increasing speed. Indeed, the ability to access real-time intelligence in banking operations was one of the original promises of the big data revolution, albeit one that has taken some years to come to fruition.
There are several reasons for that, but one of the most important is that banks have lagged in updating their hardware and software to keep pace with the amount of information they are collecting. Most banks now undertake significant quantitative analysis of business and customer data, for instance, but few do so in real time.
Instead, many banking systems still run on monthly data-processing schedules, meaning it can take even the most technologically advanced bank 30 days to spot consumer or business trends. Implementing newer technologies such as Apache Spark is one potential solution to this problem. Like Hadoop, it’s an open-source big data analytics engine, but it’s faster, more scalable and easier to use.
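The difference between the two models can be sketched in a few lines of plain Python. This is an illustration only, not Spark itself, and the transaction records are hypothetical; in a real deployment the streaming side is where an engine like Spark’s Structured Streaming would do the work at scale.

```python
from collections import defaultdict

# Hypothetical transactions; in production these would arrive continuously.
transactions = [
    {"account": "A", "amount": 120.0},
    {"account": "B", "amount": 75.5},
    {"account": "A", "amount": 30.0},
]

def monthly_batch_totals(txns):
    """Batch model: totals are only available after the full period closes."""
    totals = defaultdict(float)
    for t in txns:
        totals[t["account"]] += t["amount"]
    return dict(totals)

class RunningTotals:
    """Streaming model: totals are updated the moment each transaction arrives."""
    def __init__(self):
        self.totals = defaultdict(float)

    def ingest(self, txn):
        self.totals[txn["account"]] += txn["amount"]
        # Intelligence is available immediately, not 30 days later.
        return dict(self.totals)

stream = RunningTotals()
for t in transactions:
    snapshot = stream.ingest(t)

# Both models converge on the same numbers; the streaming model just gets
# there transaction by transaction instead of once a month.
assert snapshot == monthly_batch_totals(transactions)
```

The point is not the aggregation itself but the latency: the batch function cannot answer anything until the whole month’s file is in hand, while the streaming object can surface a trend after every single event.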
2. More Reliable Governance
Another challenge when it comes to managing the deluge is that of governance. With so much data flowing in, it can be a challenge for developers and operations staff alike to keep track of its source and how it should be managed within data compliance frameworks.
This is one reason, as we’ve long argued, that banking should consider DevOps. By uniting developers and operations staff, both can come to a deeper understanding of data compliance and how this must be integrated into every aspect of the contemporary banking industry.
This is one area, however, in which more data is not necessarily a bad idea. Rather than pausing data harvesting because of concerns that it will create compliance issues, developers should aim to build systems that can carefully and reliably label this data throughout the custody chain.
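One way to picture that kind of custody-chain labeling is to attach compliance metadata to each record and have every system that touches it append itself to an audit trail. The sketch below is a minimal illustration; the field names (`source`, `jurisdiction`, `custody_chain`) and system names are assumptions, not any particular bank’s schema.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class LabeledRecord:
    """A record that carries its compliance labels through the custody chain."""
    payload: Any
    source: str                       # where the data was originally harvested
    jurisdiction: str                 # drives which compliance framework applies
    custody_chain: List[str] = field(default_factory=list)

    def handled_by(self, system: str) -> "LabeledRecord":
        # Every system that processes the record appends itself to the chain,
        # so auditors can reconstruct who touched it and in what order.
        self.custody_chain.append(system)
        return self

record = LabeledRecord(payload={"balance": 1042.17},
                       source="mobile-app", jurisdiction="EU")
record.handled_by("ingest-gateway").handled_by("fraud-scoring")
```

Because the labels travel with the data rather than living in a separate log, downstream developers can keep harvesting at full speed while still answering the compliance question of where any given record came from and who has handled it.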
3. Focus on the Customer
Finally, developers shouldn’t lose sight of the ultimate aim of collecting data—improving the customer experience. Data can be used to do this in two primary ways.
The first is to put data to work in your customer-facing systems. This is the approach recently pioneered by Western Union, which offers an omnichannel approach that tailors and personalizes customer experiences by processing more than 29 transactions a second and integrating all that data into a single platform for statistical modeling and predictive analysis.
Secondly, data can be used to inform strategic decisions. Just as consumers have become savvier about how their data is used, leadership needs to recognize that data-driven decision-making is the future of the industry.
In other words, banks must be able to move faster to transform their data into intelligent insights and then act on those insights: improving customer service, connecting customers to information and products when and where they’re needed most, and protecting sensitive data and customer accounts from threats.
The Bottom Line
The data deluge creates challenges for developers and leaders alike, and those challenges will only grow as our ability to harvest consumer and industry data continues to expand. If we are going to realize the promise of technology-driven banking, we need better ways of managing and processing data, and ultimately a better ability to extract meaningful insights from it.
And just as this challenge is not new, neither is the outcome—a better banking experience for the customer.
If you want to learn more about financial services and FinTech development, operations, architecture and leadership through expert-led talks, panel discussions and keynotes, check out FinCon DX—now available on demand!