BMC, as part of a broader commitment to building generative artificial intelligence (AI) assistants to simplify the management of mainframes, has made available in beta a tool that explains code functionality.
John McKenny, senior vice president and general manager for Intelligent Z optimization and transformation at BMC, said BMC Automated Mainframe Intelligence (AMI) DevX Code Insights will, via a chat interface, provide guidance to help debug code written in multiple languages, understand system processes and make better-informed decisions.
Based on large language models (LLMs) that BMC is training, BMC AMI DevX Code Insights is one of several AI agents that BMC plans to make available via a single console. Those LLMs will either be developed by BMC or based on third-party platforms. Alternatively, BMC also plans to enable organizations to invoke any LLMs they decide to build themselves.
BMC is also inviting organizations to participate in a Design Program through which the company will provide access to generative AI features as they are developed. In effect, AI agents will provide the equivalent of a subject matter expert (SME) for each assigned task, McKenny added. That’s critical because, for generative AI to be effective, a platform needs to do more than let users ask questions via prompts, he noted. Instead, an AI agent should surface guidance and insights that help streamline workflows, McKenny said.
While generative AI will play a major role in making it simpler to manage mainframes, it is only one component of an ongoing BMC effort to simplify the management of mainframe platforms for the next 30 years or more, said McKenny. For example, BMC will be enabling developers to self-service their requirements using a service catalog that reduces their dependency on IT operations teams, he noted.
Ultimately, the goal is to make the mainframe just another type of distributed computing platform that requires far fewer specialized skills to manage. BMC is not the only provider of mainframe software moving in that direction, but with the rise of generative AI, the time required to achieve that goal is about to be substantially reduced.
That may prove crucial as the number of AI models deployed on mainframe platforms, where massive amounts of data already reside, starts to steadily increase in the months and years ahead. After all, it’s usually easier to bring AI models to where data already exists than it is to move that data onto another platform.
IT teams might also want to revisit what types of workloads are running on what platform in IT environments. While not every IT organization has a mainframe, those that do can lower the total cost of IT by taking advantage of mainframe licenses that are designed to encourage IT teams to consolidate more workloads on the platform.
Regardless of approach, one thing is clear: In time, IT teams will become more homogenous as, in the age of AI, it becomes simpler to manage workloads regardless of where they happen to be physically located.