The architectures I like best are the ones that are built to scale. I can’t tell you how many times I’ve run across applications that hit the wall when the request count climbs from hundreds to thousands of simultaneous users. The next thing you know, the CPUs start to peg, memory gets maxed out or overall availability just magically goes away. All that’s left is the dreaded status code: 500 Internal Server Error.
Those of us who have made distributed computing a way of life spend a lot of time making sure that our applications scale up. We’ve found the pursuit to be costly. As a result, some of us have turned the entire problem over to managed service providers. Paying a bill at the end of the month to ensure 24/7 availability is a whole lot better than taking a call at 3 a.m. demanding to know why the inventory system in Toledo has gone down. Whether we choose to take care of our own metal or use an MSP, we understand that an architecture that cannot scale has little value.
Surprisingly, one of the best architectures around is one that is used by 90 percent of the world’s largest airlines, 70 percent of big retailers, most of the banks worldwide and all of the largest insurance companies. Where is this architecture to be found? In mainframe computing. Yep, you got it: Big Iron.
Big Iron: The Gift that Keeps on Giving
Mainframe computing in general and IBM Z in particular have a computing architecture that is built to support usage by thousands, if not millions, of clients. Since their inception, these systems have been designed to provide high-availability, large-scale, multi-processor computing in a cost-effective manner. Whereas a typical data center filled with racks of commodity hardware will suck thousands of dollars a month in electricity, a single IBM z System that processes 2.5 billion transactions a day uses about as much electricity as the clothes dryer down in your laundry room. (2.5 billion daily transactions are the equivalent of 100 Cyber Mondays every day of the year!)
In terms of high availability, mainframe uptime is measured in years. There are systems out there that have been running continuously for 10 years and are still up today. This is no small accomplishment. Think about it: Imagine what it takes to have enough redundancy built into an architecture to never have to reboot, no matter how weird any of the applications get. Memory and application isolation have been built into the technology since the start. In a way, we’re talking cloud before the cloud.
It’s an amazing technology, one that more developers, particularly those of us working on the forefront of application development, should leverage. The question becomes how to benefit from its capabilities and drive business value from this asset with its treasure trove of data. The first step is to understand the difference between a system of interaction and a business critical system.
Understanding Systems of Interaction and Business Critical Systems
In the world of modern, distributed computing, application logic can be represented in a variety of ways. For example, it’s quite possible to check on the status of flight information by using a mobile app, going to an airline’s website or calling the carrier’s toll-free number. The logic needed to find and deliver flight information is in a central location; the means to access that information is distributed. There are two systems in play—one is called the system of interaction and the other is the business critical system. The system of interaction is the system that interacts with the end user, collecting input from the user and presenting the results. The business critical system is the system that serves as the repository of common application intelligence. The concept is akin to the thinking behind client-server architecture, but it’s a bit more than that in that it’s system based. Think of a phone system, cable TV system or online games as a system of interaction. Think of a rack of servers, the cloud or a mainframe as a business critical system. The overall digital experience is the aggregation of its components.
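The separation can be sketched in a few lines of code. This is a minimal illustration, not a real airline system: the flight numbers, data, and function names are all hypothetical. The point is that one central body of logic (the business critical system) serves multiple presentation channels (the systems of interaction).

```python
# Hypothetical business critical system: the single, central source of
# flight-status intelligence. In practice this might live on a mainframe.
FLIGHT_STATUS = {"XX100": "On Time", "XX200": "Delayed"}

def lookup_flight(flight_number: str) -> str:
    """Central application logic shared by every channel."""
    return FLIGHT_STATUS.get(flight_number, "Unknown")

# Systems of interaction: each channel presents the same central answer
# in its own way, but none of them owns the logic or the data.
def mobile_app_view(flight_number: str) -> str:
    return f"{flight_number}: {lookup_flight(flight_number)}"

def phone_system_view(flight_number: str) -> str:
    return f"Flight {flight_number} is currently {lookup_flight(flight_number)}."
```

Notice that adding a new channel (say, a chatbot) means writing only a new presentation function; the central logic never changes.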
This may seem like an obvious separation, but there’s a bit more to it, particularly when considering large hardware such as IBM Z and the like. Being a business critical system is more than being a simple data store. A business critical system can contain all the data necessary to perform complex machine learning activity upon a particular dataset submitted to it, weather prediction or medical diagnosis, for example. Of course, the business critical system has data, but the system also has algorithmic and parallel processing capability far beyond a standalone PC or cell phone, the components that are typical in a system of interaction.
Business critical systems, particularly those that have been in service for decades, become very powerful when they support common standards of digital interaction—APIs based on REST, for example. Back when distributed computing was limited to a small part of the population, a proprietary approach to system interaction was acceptable. But the rise of the internet changed it all. Today we have billions of people online, on some part of a global network. Thus, common standards such as HTTP and REST have emerged to make interacting with software systems worldwide easier. Companies such as IBM understand the importance of supporting the ways of the internet. Thus, a product such as z/OS Connect EE allows developers to connect to a Z System installation and design an API around the code in that installation. Using z/OS Connect EE, a developer can wrap a REST API around existing COBOL code. Having this type of capability opens up a world of possibilities.
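From the client’s side, the payoff is that the COBOL behind the API is invisible; the caller sees only standard HTTP and JSON. The sketch below builds (but does not send) a request against a purely hypothetical endpoint URL; the host, path, and resource names are assumptions for illustration, not part of any real z/OS Connect EE deployment.

```python
import urllib.request

# Hypothetical base URL for an API exposed via z/OS Connect EE.
# The COBOL program behind it is invisible to the client.
BASE_URL = "https://zos.example.com:9443/flights/api"

def build_status_request(flight_number: str) -> urllib.request.Request:
    """Build a standard GET request for a flight-status resource.

    The request is constructed but not sent, since the endpoint
    here is illustrative only.
    """
    url = f"{BASE_URL}/status/{flight_number}"
    return urllib.request.Request(url, headers={"Accept": "application/json"})

req = build_status_request("XX100")
```

Because the interface is plain REST, any client that speaks HTTP—a mobile app, a web page, a cloud service—can consume the mainframe’s logic without knowing or caring that COBOL is doing the work.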
Going to the Next Level
As the world of mainframes gets more attention from those making mobile apps and aggregating services into the cloud, a whole new level of software development emerges. Using a product such as z/OS Connect EE to create REST APIs combines the world of Big Metal with the world of big data. Those of us in DevOps understand that breaking down silos and fostering collaboration is what our work is about. Bringing together the skills and experience of all those who are dedicated to making quality software, from COBOL programmer to engineers who embrace infrastructure as code, is essential in the world of modern computing. The more people and systems we have working together effectively, the better will be both our software and the world we make.