Observability has always been a key factor in software development, though far less so on the mainframe. The rise of the internet as the go-to platform for modern commerce, however, has caused an explosion of transactional activity. There’s much more complexity behind the scenes than the simple request/response exchange that used to be the typical interaction between a client and a single server, making ubiquitous observability mission-critical for any modern application. As computational activity increases, so, too, does the risk of failure. One way to avoid disaster is to rely on observation technologies that monitor not only runtime behavior, but design-time behavior as well.
Analytical Insights are Key to Application Modernization
Modern computing activity goes well beyond simple request/response interactions. Today, one request can traverse any number of computing resources before resolving to a completed response. These resources range from application logic that’s encapsulated and exposed to the network at a number of endpoints to a diverse mixture of data storage mechanisms. And, if the application is part of an event-driven architecture in which messages are passed to a large number of recipients, ancillary logic can execute in ways that are invisible to the originating request, yet still cause significant side effects.
Given the complexities inherent in modern applications running at web scale, developers without ubiquitous observability are flying blind. This is true not only for what we have come to know as commodity computing, in which applications are hosted on racks of X86 servers in private and public data centers, but also for mainframe computing.
The Emergence of Observation Technologies
Over the last decade, a growing number of observation and monitoring technologies have emerged for network applications and database environments. Open source projects such as OpenTracing, OpenTelemetry, and Prometheus give developers insights into the runtime behavior of applications. Also, there are a plethora of commercial observability products that companies can buy to meet their specific monitoring and observability needs.
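The core runtime insight these tools provide, timing named units of work inside a request, can be sketched without any library at all. The following is a minimal, illustrative span recorder; real tools such as OpenTelemetry layer context propagation, exporters, and sampling on top of this basic idea, and the function names here are invented for the example.

```python
import time
from contextlib import contextmanager

# A minimal, illustrative trace recorder. Real observability tools
# (OpenTelemetry, Prometheus, etc.) add context propagation, metric
# exporters, and sampling on top of this basic timing idea.
TRACE = []

@contextmanager
def span(name):
    """Record the wall-clock duration of a named unit of work."""
    start = time.perf_counter()
    try:
        yield
    finally:
        TRACE.append((name, time.perf_counter() - start))

def handle_request():
    # Hypothetical request handler composed of two timed sub-steps.
    with span("handle_request"):
        with span("query_database"):
            time.sleep(0.01)   # stand-in for a database call
        with span("render_response"):
            time.sleep(0.005)  # stand-in for response rendering

handle_request()
for name, seconds in TRACE:
    print(f"{name}: {seconds * 1000:.1f} ms")
```

Because inner spans complete first, the recorded trace reads from leaf calls outward, which is exactly the shape a tracing backend assembles into a request waterfall.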
However, while the X86 development community has benefited from observability tools for a while, their mainframe counterparts haven’t had it so easy. Commodity developers require nothing more than a click on a web page link to access any number of powerful observability tools, but mainframe developers have been starved for these types of tools.
Applying the Principles of Observability to Modern Mainframe Development
One of the big challenges of mainframe programming is getting a clear picture of application dependencies. Unlike programming in Java, C#, or Node.js, where external dependencies are defined in a central configuration file and internal dependencies can be discovered with a code analysis tool, mainframe programming makes knowledge of dependencies a difficult undertaking. This type of design-time observation hadn’t really existed until technologies like IBM’s Application Discovery and Delivery Intelligence (ADDI) came along.
An important thing to understand about dependencies in most mainframe computing is that they exist at both the function and executable levels. A developer can create a function, which, in mainframe parlance, is called a routine. That one routine might be used by dozens, if not hundreds, of other routines. Sharing code from a common resource is not unusual in X86 development, where binary libraries get shared all the time. However, in mainframe computing, these shared routines, along with file layouts and resource definitions, are physically copied in with the developer’s code using a technology called a copybook. The code is then compiled and linked to create an executable file. Once turned into an executable, the code becomes opaque. As a result, observing what’s going on within the executable, at both runtime and design time, can be a herculean task. For many years, it was.
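The copy-in step described above can be modeled with a toy preprocessor: before compilation, each COPY directive is replaced by the full text of the named member, so the shared source is physically duplicated into every program that uses it. This is a simplified sketch only; real COBOL copybook processing also handles REPLACING clauses, copy libraries, and nesting, and the member name below is invented.

```python
# Toy model of copybook expansion: shared source is physically copied
# into each program before compilation, which is why the compiled
# executable is opaque about where its pieces came from.
# (Illustrative only; real COBOL COPY processing is far richer.)
COPYBOOKS = {
    "CUSTREC": "01 CUSTOMER-RECORD.\n   05 CUST-ID   PIC 9(6).",
}

def expand(source: str) -> str:
    out = []
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith("COPY "):
            member = stripped.split()[1].rstrip(".")
            out.append(COPYBOOKS[member])  # paste the shared text in place
        else:
            out.append(line)
    return "\n".join(out)

program = "DATA DIVISION.\nCOPY CUSTREC.\nPROCEDURE DIVISION."
print(expand(program))
```

After expansion, nothing in the program text records that CUSTOMER-RECORD was ever shared, which is precisely the dependency knowledge a tool like ADDI has to reconstruct.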
Fortunately, you can use a tool like ADDI to overcome this obstacle. IBM’s ADDI provides important insights and analysis into the code that makes up a mainframe application. Both developers and system analysts can use ADDI to determine not only application dependencies but also to gather information about the execution environment; for example, scheduler settings and database configuration.
As in X86 development, where ancillary application intelligence well beyond information about the source code can be displayed in the developer’s integrated development environment, ADDI makes it possible to report this information in the developer’s IDE. Systems analysts can also use the additional information ADDI provides, such as the structure and segmentation of an application, to decide how to refactor code when it comes time to make improvements.
Providing a broader scope of observability into design-time activity is but one aspect of enhanced observability in the modern mainframe environment. ADDI also reports test coverage metrics. Knowing what and how much code has been tested is an important part of observability, even more so when used with tools that are focused on mainframe testing. One such tool is Virtual Test Platform (VTP).
Leveraging Observability in the Mainframe Testing Process
Until VTP came along, mainframe developers worked in the dark: they had limited visibility into the transactions their code used and the dependencies it relied upon. Figuring out transactional behavior throughout an application took an enormous amount of work, with much time spent on archeological digs through legacy codebases just to get a clear idea of what was happening. And it didn’t get any easier as testing demands on all types of applications, both X86 and mainframe, increased with the rise of global-scale commercial activity on the internet.
According to Chris Trobridge, senior offering manager at IBM, “The testing load for developers has been increasing tremendously, and with the introduction of agile development processes, developers just can’t keep up with the testing load that they have.”
VTP addresses many of these testing issues, both in terms of observability and quality assurance. “VTP really is observing what goes on at runtime, but without knowing anything about the application,” Trobridge said.
VTP can monitor an application and capture and record all of its interactions. This type of observability enables developers to make modifications and then run those changes against the observed behavior. It’s a novel approach to mainframe development in which the benefits of design-time and runtime observability are leveraged in a synergistic manner. To understand how groundbreaking this capability is, remember that only five years ago, while X86 developers were enjoying a maturing set of monitoring and tracing tools, this type of observability wasn’t possible for mainframe developers. Now it is. It’s a dramatic step forward.
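The record-and-replay idea, capturing an application's interactions once and then exercising changed code against the recording, can be sketched generically. The classes and the stand-in backend below are invented for illustration; this shows the technique, not how VTP itself is implemented.

```python
# Generic sketch of record-and-replay testing: capture a dependency's
# responses during a live run, then replay them against changed code.
# This illustrates the technique only; it is NOT VTP's actual mechanism.

class Recorder:
    """Wraps a backend call and records every (request, response) pair."""
    def __init__(self, backend):
        self.backend = backend
        self.tape = {}

    def call(self, request):
        response = self.backend(request)
        self.tape[request] = response  # record the interaction
        return response

class Replayer:
    """Serves recorded responses without touching the real backend."""
    def __init__(self, tape):
        self.tape = tape

    def call(self, request):
        return self.tape[request]

# A "live" run against a stand-in backend records the interactions.
live = Recorder(lambda req: f"balance for {req}: 100")
live.call("ACCT-1")

# Modified code can now be exercised offline and deterministically,
# with no access to the original transaction environment.
offline = Replayer(live.tape)
assert offline.call("ACCT-1") == "balance for ACCT-1: 100"
print("replayed", len(offline.tape), "recorded interaction(s)")
```

The replay side never needs credentials or connectivity to the system of record, which is one reason recordings can be played back rapidly without compromising security.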
“VTP is a tool that enables developers to automate a lot of the testing, and, in turn, enables the recording of a transaction environment without having to know a lot about the environment,” Trobridge explained. “The recordings themselves can be played back rapidly without compromising security. This is an important aspect of VTP. This level of observability and interactivity really is something that mainframe developers have not had access to before. It’s a transformational technology in the mainframe world.”
Technologies such as ADDI and VTP are a transformational approach to enhancing observability in mainframe environments. In many ways, the focus on exposing the granularity of computing at the routine level is a harbinger of things to come for all developers, in both the X86 and mainframe worlds.
Taking a Function-Centric Approach to Observability
Historically, for commodity developers, observing request/response behavior between services at runtime, along with identifying code quality and complexity issues at design time, has been enough to identify and remediate most issues in software development.
For the mainframe developer, request and response behavior is only the tip of the iceberg when analyzing application behavior. In a mainframe environment, one subroutine might be used by thousands of other functions and subroutines at any given time. Commodity developers have been spared the burden of accommodating the function as the fundamental deployment unit because, until recently, functions were grouped into well-encapsulated deployment units, such as Java’s .jar files, .NET DLLs or, more recently, containers. If something went wrong, the error would generally appear in one of the deployment units.
However, with the current rise of serverless computing, where the function becomes the primary deployment unit, X86 developers are in the same boat as their mainframe counterparts. In the past, for the X86 developer, one application might use a limited number of components that could have dozens, maybe hundreds, of functions. Now, in the serverless world, all these functions are exposed as standalone units, each of which may be the responsibility of a distinct team. We’ve gone from an ecosystem that has a limited number of deployment units to an ecosystem that can contain thousands. Fixing bugs can require hours of poring over logs and examining minute details just to determine the root cause of an issue.
As mentioned above, this depth of dependency at the function level is something that mainframe developers have been dealing with for years. Fortunately, tools such as ADDI and VTP provide a high degree of observability for very complex mainframe applications at both runtime and design-time.
It’s strange to say, but the large, function-based ecosystem issues that mainframe developers are solving today will become a growing concern for X86 developers in the very near future. In many ways, tools such as ADDI and VTP are setting an example for X86 tool development moving forward. The mantra, “You can’t fix what you can’t see” is as true today as it has ever been. That the mainframe community is at the vanguard of creating new ways to work in massive computing environments is a testament to the power of rejuvenation in the most unexpected ways.