I was reading Mike Vizard’s article about IBM previewing an AI tool to convert COBOL to Java and mentally cheered this move. No, seriously—massively cheered this move.
There are a lot of uses for large language models (LLMs) right now, and many of them are not all that exciting, in my opinion. This one, though? This is huge. In the course of my career, I have been involved in three projects to convert COBOL and one to convert Fortran, plus some projects that required interfacing with RPG III systems. In all of these cases, the costs were prohibitive. Two of the three COBOL projects never made it past estimates because resources who knew both COBOL and Java were too expensive to find, and the complexity of most mainframe applications, modified as they have been over the years, only made matters worse. The Fortran project never got past resourcing, and the RPG III interfaces stayed just that: interfaces. We had to pull the data as the RPG III systems presented it and act only on our copy, with a separate process used to update the core systems. Why? Because modifying the RPG was impractical, but we could be agile. It was … painful, to put it mildly.
What all of these projects shared was the lack of a skilled workforce. And what IBM is offering is an LLM that acts as that workforce. If you get the application into Java, there is a large pool of developers who can then troubleshoot it. While the environment issues will still exist and people skilled in tools like JCL are increasingly hard to find, that part can be rebuilt in the new environment. (One question I have about IBM’s project is, “Can it convert to Java for a Linux LPAR/VM?” I suspect that this iteration converts within z/OS, but it would be cool to see increased options in the future.)
Honestly, this trend will only increase. While IBM’s use of an LLM is aimed at moving COBOL to Java, there are an incredible number of lines of Java out there to train on, so soon enough, there will be other conversions in and out of Java. And that’s all good news. We reinvent the wheel a lot. The longer-term implication of AI-assisted conversion is that we won’t have to. Think about it: We could feed in a mathematical routine in Fortran and get the language of our choice out the other side.
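To make that concrete, here is a minimal, hypothetical sketch of the kind of round trip I mean: a small Fortran-style averaging routine (shown in the comment) and an equivalent Java method of the sort a conversion tool might emit. The routine and names are purely illustrative, not IBM’s tool or its output.

```java
// Original Fortran (illustrative only, not from any real codebase):
//
//   REAL FUNCTION FMEAN(X, N)
//     INTEGER N
//     REAL X(N)
//     FMEAN = 0.0
//     DO 10 I = 1, N
//  10   FMEAN = FMEAN + X(I)
//     FMEAN = FMEAN / N
//     RETURN
//   END
//
// A hand-written Java equivalent, the kind of output a conversion might aim for:
public class Fmean {
    // Arithmetic mean of the first n elements of x, mirroring the Fortran loop.
    public static float fmean(float[] x, int n) {
        float sum = 0.0f;
        for (int i = 0; i < n; i++) {
            sum += x[i];
        }
        return sum / n;
    }

    public static void main(String[] args) {
        System.out.println(fmean(new float[]{1.0f, 2.0f, 3.0f, 4.0f}, 4)); // prints 2.5
    }
}
```

The mechanical part of a translation like this is exactly what a model trained on mountains of Java can grind through; the judgment calls (1-based versus 0-based indexing, numeric precision) are where human review still earns its keep.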
Way back in the day, a collection of compilers named TopSpeed let you develop in any of several languages, even mixing and matching to create applications that used each language (and developer) for their strengths. This is the TopSpeed dream writ large. (Disclaimer: I was a huge TopSpeed fan. Bought their intro kit and wrote systems mixing C/Pascal/ASM without having to mess with things like parameter sequences. Yeah, I was cool.)
We have had too few developers for a very long time. Now, we are about to have a world where developers can focus on truly unique functionality, and the rest is generated based on the mass of implementations we already have. Think in terms of sorting algorithms: 99% of the sorting problems out there were mastered decades ago, and yet we frequently see people re-implementing them, even today. Now we can tell a system, “Generate a tree sort,” and just make calls. Every time we adopt a new language, this type of thing has to be re-implemented, and often it is re-implemented because we don’t know about the available solutions. In the future, we will have solutions at our fingertips.
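For the sake of illustration, here is what “generate a tree sort and just make calls” could look like in Java (the target language of the conversions above). This is the textbook algorithm, not anything produced by IBM’s tool: insert values into a binary search tree, then read them back with an in-order traversal.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal tree sort: build a binary search tree, then walk it in order.
public class TreeSort {
    // One node of the binary search tree.
    private static final class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Insert a value: smaller values go left, larger or equal values go right.
    private static Node insert(Node root, int value) {
        if (root == null) return new Node(value);
        if (value < root.value) root.left = insert(root.left, value);
        else root.right = insert(root.right, value);
        return root;
    }

    // An in-order traversal yields the values in ascending order.
    private static void inOrder(Node node, List<Integer> out) {
        if (node == null) return;
        inOrder(node.left, out);
        out.add(node.value);
        inOrder(node.right, out);
    }

    public static List<Integer> sort(int[] values) {
        Node root = null;
        for (int v : values) root = insert(root, v);
        List<Integer> sorted = new ArrayList<>();
        inOrder(root, sorted);
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(sort(new int[]{5, 3, 8, 1, 4})); // [1, 3, 4, 5, 8]
    }
}
```

And, of course, in day-to-day Java you would just call Arrays.sort() or Collections.sort(), which is precisely the point: the solved problem is already at our fingertips if we know where to look.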
And as with the other benefits of automation, we’ll do less busy work and add more business value. While there will be some pain, I think in the long term that is a huge benefit. And you’ll still be there, making things go zoom.