Mainframes are historically clunky systems. They are highly secure and can process enormous transaction volumes, yet they are complex to manage and challenging to extract data from. This reality is at odds with modern cloud-native infrastructure, which emphasizes distributed computing and portable microservices. So, how can existing mainframes adapt to this connected, data-driven economy?
I recently met with Dr. Alex Heublein, president at Adaptigent, to talk about the state of legacy modernization. According to Heublein, many mainframes are here to stay for the foreseeable future. However, enterprises continue to get value from these systems by adding an integration layer on top of them. By placing a REST API layer on top of a mainframe, enterprises can finally open up these systems — facilitating bidirectional communication with newer cloud-native services, enabling phased migration and edging closer to real-time processing.
State of Mainframe Modernization
The software industry changes every day. “Nothing in the tech field is permanent,” said Heublein, yet some legacy technologies will persist into the foreseeable future. In terms of mainframe adoption, Heublein sees a couple of categories of users. Chief among them are the large financial services organizations, such as major banks and insurance companies, which are heavily invested in their mainframe architectures and will likely continue to rely on mainframe environments for their policy management and core infrastructure. “I think there’s a lot of good reasons for that,” explained Heublein. “If you want to process billions of transactions a day, [mainframe] is still a good option.”
That said, regulations and market pressures are influencing a global movement toward open banking. For example, PSD2 mandates open banking in Europe, and industry consortiums like Financial Data Exchange (FDX) are driving it in the U.S. market. These initiatives are encouraging financial services to transform their legacy environments and spurring cloud migration, especially for applications that aren’t mission-critical.
Heublein pointed out a few different patterns in enterprise mainframe modernization. One approach is to rewrite or re-architect old COBOL applications and keep them on-premises. Another is to port and re-platform the application, such as moving a COBOL app to a Windows or Linux system. Finally, some enterprises are shifting straight to the cloud. For solutions that don’t need to be on the mainframe, moving to cloud-native can free up capacity.
APIs Enable Legacy Modernization
Distributed cloud technology is helping businesses scale to the right level. Yet, it’s difficult to jump off the mainframe entirely, noted Heublein. “Very rarely can you take all your stuff and move it all at once,” he said. “It’s a tough pill to swallow.” It’s especially difficult to move core banking infrastructure to a distributed environment — an instant shift could introduce security risks. For this reason, he recommended a ‘phase migration’ approach in which individual components are slowly migrated. For example, an insurance provider may shift its claims engine, policy engine and ratings engine to the cloud one at a time.
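To make the phase migration idea concrete, here is a minimal sketch in Python of a routing facade that sends each component to whichever backend currently owns it. The component names and the `MIGRATED` set are hypothetical illustrations, not anything Adaptigent ships:

```python
# Hypothetical phase-migration facade: each component is served by the cloud
# once it has been migrated, and by the mainframe until then.
MIGRATED = {"claims"}  # components already moved to cloud-native services


def backend_for(component: str) -> str:
    """Return which backend should serve requests for this component."""
    return "cloud" if component in MIGRATED else "mainframe"


def migrate(component: str) -> None:
    """Flip a single component over to the cloud, one phase at a time."""
    MIGRATED.add(component)
```

Because callers only ever ask `backend_for()`, each engine can be cut over independently without clients noticing the change.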
As a result of phase migration, however, engineers must simultaneously support both on-premises and cloud-based applications. They must also ensure bidirectional communication so that these systems can exchange data. This is where a REST API transactional layer is necessary, Heublein explained. Such a layer could enable bidirectional communication between the mainframe and cloud-native applications — abstracting complexity when the mainframe calls out to the modern world, and translating inbound cloud data into formats mainframe technologies can process.
Abstracting complexity out of the mainframe integration process will be key to building connectivity that developers can actually use. This could involve a codeless development approach with a COBOL sub-routine to call external systems, Heublein said. Automatic restructuring of data will be necessary to enable fluid bidirectional communication, too.
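The automatic data restructuring described above can be sketched as a pair of Python functions that map a fixed-width mainframe record to JSON and back. The copybook-style `LAYOUT` and its field names are hypothetical, chosen purely for illustration:

```python
import json

# Hypothetical copybook-style record layout: (field name, width in characters).
LAYOUT = [("policy_id", 8), ("holder", 12), ("premium_cents", 9)]


def record_to_json(record: str) -> str:
    """Restructure a fixed-width mainframe record into JSON for REST clients."""
    out, pos = {}, 0
    for name, width in LAYOUT:
        out[name] = record[pos:pos + width].strip()
        pos += width
    out["premium_cents"] = int(out["premium_cents"])  # numeric field
    return json.dumps(out)


def json_to_record(payload: str) -> str:
    """Pad each JSON field back out to its fixed width for the mainframe side."""
    data = json.loads(payload)
    return "".join(str(data[name]).ljust(width) for name, width in LAYOUT)
```

A real integration layer would generate these mappings from the COBOL copybook itself rather than hand-coding them, which is where the codeless tooling comes in.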
Four Business Drivers of Mainframe Integration
So what is the end result of putting an integration layer on top of mainframe systems? According to Heublein, there are four key business benefits of using such an abstraction layer:
- Orchestration: Software ecosystems often need to make multiple calls to different applications to collect and transform data. Yet, mainframe environments are very complex and challenging to integrate with. A more usable integration fabric for mainframe systems could enable better orchestration of workflows in a developer-friendly manner. With a REST interface, all apps can call the same interface, greatly simplifying orchestration between disparate services.
- Sustained Adaptability: The pace of digital business keeps accelerating, and enterprises need to maintain and modify their APIs very quickly. Organizations require “sustained adaptability” to survive in the digital world, said Heublein. Mainframe systems, however, were not designed for the 21st century. Thus, low-code integration layers are critical to empowering rapid development.
- Data Transformation: Data is powerful for the business. Data is not only a commodity in and of itself but also is required for AI and machine learning. However, mainframes have a completely different way of encoding data. Thus, there is a need to transform this data into a modern format that businesses can consume. An integration layer between these systems can deliver an improved data consumption model.
- Real-Time Processing: You may think online payments happen in real-time; however, payment apps like Venmo are really just an abstraction layer over bank transactions, which may clear as infrequently as twice a day. In short, mainframes are not great at real-time processing. However, more usable integrations to this infrastructure could enable user-facing applications to operate closer to real time.
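The data transformation point is the most concrete of the four: mainframes typically store text in EBCDIC and numbers in COBOL COMP-3 packed decimal, neither of which modern services can read directly. As a minimal sketch, Python's built-in `cp037` codec handles US EBCDIC text, and packed decimal can be decoded by walking the nibbles (the field contents below are illustrative):

```python
def unpack_comp3(data: bytes) -> int:
    """Decode a COBOL COMP-3 (packed decimal) field into a Python int."""
    nibbles = []
    for b in data:
        nibbles.append(b >> 4)
        nibbles.append(b & 0x0F)
    sign_nibble = nibbles.pop()  # the last nibble carries the sign
    value = int("".join(str(n) for n in nibbles))
    return -value if sign_nibble == 0x0D else value  # 0xD means negative


def decode_text(data: bytes) -> str:
    """Decode an EBCDIC text field using Python's built-in cp037 codec."""
    return data.decode("cp037").rstrip()
```

An integration layer performs exactly this kind of conversion in both directions so that cloud-side consumers only ever see JSON-friendly strings and integers.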
Another concern is security. Opening up mainframe environments that house core financial infrastructure must be done with extreme care — especially as API attacks remain the number-one rising threat in the software landscape. All organizations building out new integrations will require a security layer at the outbound boundary. Heublein said this often involves placing an API management layer or gateway in front of the integration layer. This system handles authorization and functions like a firewall between the internal and external worlds. It can also enable logging and mitigate denial-of-service attacks, he explained.
Improving Agility for Legacy Systems
Legacy systems are entrenched in older technology and carry lots of technical debt, forcing operations to move at a slower pace. Maintainers are also culturally a bit different from the newer breed of cloud-native engineers. “Mainframe people are not … fast,” said Heublein.
DevOps folks want to build, test and promote projects to production very quickly and, arguably, this agility is necessary to satisfy rising digital innovation demands. Improving integration capabilities for on-premises systems could enable businesses to better utilize preexisting investments while also improving agility.
Low-code/no-code capabilities could also increase collaboration potential, Heublein added, which could bridge the gap between on-premises engineers and fast-moving business folks. When adopting low-code/no-code tools for mainframe integration, he cautioned against general-purpose citizen developer frameworks. Whereas universal integration layers can work well for many situations, he stressed that a tightly scoped boundary is best for building APIs for mission-critical applications.
How is your organization approaching modernization for legacy systems? Let us know in the comments below!