Open source is a critical element of IT computing. As companies modernize the mainframe and start to add components to mainframe applications that live off the mainframe, they should be able to build, test and deploy the components of that application that span the mainframe and other computing platforms in the same manner. Organizations need open source tools to run on the mainframe to drive those processes in a unified way across all the different components of that application.
Tim Willging, chief architect and strategist, and Peter Fandel, mainframe open source evangelist of Rocket Software, join Mitch Ashley to discuss advancements in the use of open source software with mainframe apps and tools. Tim and Peter challenge some long-held beliefs around mainframes, open source software and the increasing role open source is playing today.
The video is immediately below, followed by the transcript of the conversation. Enjoy!
Mitch Ashley: Hi everyone, I’m very pleased to be joined by a couple of great gentlemen here to talk about some interesting open source, security, mainframe, DevOps, all kinds of good things. So I’m pleased to be joined by Tim Willging, who is chief architect and strategist for the mainframe business unit with Rocket Software, and Peter Fandel, senior director of product management for mainframe open source. Welcome, guys.
Tim Willging: Hello, thank you.
Peter Fandel: Hello.
Ashley: I think I might have said your name correctly, Tim. Is it Willging? I don’t know if I said that right.
Willging: That’s correct. That’s fine.
Ashley: Okay, good. Would you start? I’ll have you both introduce yourselves. Maybe you can also introduce the company, Tim, and then we’ll have Peter do the same?
Willging: Sure. Yeah, I’m Tim Willging. As Mitch mentioned, I’m the chief architect and strategist for the mainframe business unit at Rocket. I’ve been with Rocket about 15 years and have spent the majority of my career developing commercial solutions for the mainframe, mostly around database tools, DB2 tools. Rocket Software has been in existence for 25-plus years and is a privately held company. We develop software that’s not just for the mainframe, but I’ll talk mostly about the mainframe software.
Rocket has a development partnership with IBM, and many of the solutions we develop are IBM-branded; but we also have many things we sell Rocket-direct, and open source, the particular subject of this call, is one of those things we sell direct or in partnership with other companies. So Rocket has a long history with the mainframe, and we believe in its future and in its importance in the computing divisions of many of the largest companies in the world.
Ashley: Excellent. Great. Peter, if you would introduce yourself.
Fandel: Sure. Peter Fandel, again. I’ve been at Rocket for 19 years, the great majority of those years in engineering management. I took over management of the open source porting team about, oh, four or five years ago and transitioned to product management about a year ago. I manage both the open source porting portfolio and the Zowe portfolio, and both are critical components of the modernization story for the mainframe, in large part because of the huge amounts of open source that are out there that you can _____ can get running well on the OS, as well as attracting talent, the next generation of developers. So that’s really our focus area for open source.
Ashley: Excellent. You know, maybe you ought to start – and I started my career on the mainframe, too, so I understand, kind of get, where you’re all coming from in the market. There are a lot of built-up beliefs about mainframes and applications, and whether you should think about porting them or not, or replatforming them, or leaving them alone. Probably like you, I’ve learned it’s very costly and also fraught with danger to just go rebuilding applications because you think you want them on a different platform; you have to have a really good reason. But open source is actually a really important part of the mainframe ecosystem, and I think maybe folks don’t realize how important it is. How would you set that up to give people an idea of the role open source is playing in the mainframe environment today?
Willging: Well, I’d start off by saying open source is an incredibly important part of IT computing. It’s something early-career professionals are learning, and it’s a huge part of how companies run their business today. Rather than writing things from scratch, many are starting with open source, or augmenting existing systems with open source to modernize them, because of its collaborative nature, how different companies and individuals contribute to that open source based on the needs of the industry. You really can’t separate the needs of organizations that are running mainframes from that open source. Open source is a critical component of running your compute division. Without it, I think the mainframe would have a big missing part of what companies need to run their business today.
Fandel: I think also there’s a huge cost savings associated with that, because you can take any recent graduate, or even a not-so-recent graduate, of any comp sci program and ask them to start developing applications, and they’re going to expect to build those applications on open source building blocks. If those aren’t there, your cost goes way, way up. Another area where open source is a must-have is DevOps: do you want a different DevOps tool chain for the mainframe than for the rest of your organization, or do you want to unify it? And if you’re going to unify it, it’s got to be open source-based.
Ashley: It’s a massive source of innovation, and I don’t know if we’re here yet, but if we aren’t, we’re quickly reaching a point where you can’t operate a company or a software team without open source software. And if you aren’t using it, I can promise you your competitors are. So take advantage of it, right? Well, let’s talk a little bit about that. I know Zowe is of course a big, important part of your strategy. Break it down for us. Go into a little more detail about the places and roles where open source plays a part in the software development process and the operational systems of mainframes; kind of paint that picture for our listeners.
Willging: I think there are a lot of different ways it plugs in, but the one I hear most from the companies I talk to is precisely what Peter just said. The mainframe, through many years of existence, has been very waterfall-ish, I’ve heard from some companies, because of the reliability, the security, the maintainability of the mainframe; there’s a process by which changes are rolled out in large batches. As companies modernize the mainframe and start to add components to those mainframe applications that are off the mainframe, like in the hybrid cloud, or add a web frontend or mobile app frontend to a backend service or application that’s running on the mainframe, they want to be able to build, test and deploy the components of that application that span the mainframe and those different computing platforms in the same manner.
And to have pipeline-building tools, pipeline-orchestration tools, tools that build these environments, run the tests and tear these environments down, they need those open source tools to run on the mainframe to drive those processes in a unified manner across all the different components of that application. Many mainframe companies today are just starting on this journey. Many of them have, over the years, hand-rolled tools to try to plug in, but that can get expensive, as we were talking about. So getting that open source running on the mainframe is important for them to be able to apply those DevOps principles and practices in that application management lifecycle process.
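A unified pipeline of the kind Tim describes might be sketched roughly as follows. This is a hypothetical, tool-agnostic CI configuration; the stage, job and target names are invented for illustration and don’t correspond to any specific product:

```yaml
# Hypothetical CI pipeline sketch; stage, job and target names are invented.
stages:
  - build
  - test

build-distributed:
  stage: build
  script: make build              # cloud or web-frontend components

build-mainframe:
  stage: build
  script: make build TARGET=zos   # the same pipeline drives the z/OS build,
                                  # using open source tools running on the mainframe

test-all:
  stage: test
  script: make test               # one test stage across every component;
                                  # environments are built up and torn down per run
```

The point of the sketch is the shape, not the tool: one pipeline definition drives both the distributed and the mainframe components in the same manner.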
Fandel: I would –
Ashley: Very much so. Of course there’s also more with that too. Go ahead. I’m sorry, go ahead, Peter.
Fandel: I was going to build on what Tim said by introducing a really important distinction, and that is that there are three very different ways you can get open source onto the mainframe. Most of the open source we’re talking about is UNIX-based. You can do it via Linux on Z, via a zCX Docker container or via UNIX System Services. It used to be that the low-cost way of doing that was Linux on Z, and more recently zCX, because the porting is not much work; it’s all automated, essentially just a port to the hardware, done by compiling on those systems.
But doing it on UNIX System Services is a lot more time-consuming if you do it manually. The advantage, if you port well to UNIX System Services, is that you’re close to the data. If you’re running open source on Linux on Z or on zCX, it’s akin to running it on another machine on the network: you don’t have direct access to the data, and everything has to pass through TCP/IP. But UNIX System Services is part of z/OS, so you have direct access to all of the important data in your organization, the MVS data. That’s a key distinction. Where Rocket plays is open source on UNIX System Services on z/OS. So we are close to where the data is, and we have solved the problem of the high cost of porting through this GCC glibc technology, where we essentially automate the port of open source by simply compiling and linking with GCC, and it injects the z/OS-specific changes directly into the binary output of the build.
Ashley: Talk about that process. Is that something you’ve been doing a while? Of course, going in and changing binaries is probably not most folks’ natural first thought. But it does sound like a pretty straightforward way of minimizing the steps you have to go through to begin using this. You can actually do it _____.
Fandel: Yeah, we’ve been working on this GCC glibc port in this manner for over two years, and it’s just coming to fruition now. In the first quarter we will be releasing new versions of several of our ports that are not ported by hand; they are ported simply by building them with GCC, and we hope to have transitioned our entire portfolio to be built in this manner by the end of 2021. That means we no longer have to modify the source code, because upstreaming is no longer an issue, and it means we can turn around security vulnerability fixes much, much faster than we could otherwise.
Ashley: I don’t mean this in a hyperbolic way, or to play up to you in any way, but that kind of an approach can be a pretty big game-changer, because you’re really asking people for a low lift instead of a heavy lift to port, if you will.
Willging: I’ve talked to many throughout the z/OS community who believed that, due to the effort of going through these ports in the past, a better strategy was to run your open source “close” to the mainframe (I say “close” in quotes), on a partition on the mainframe that’s running zLinux, because the porting effort to get open source running on zLinux is minimal. They felt that the version currency, the security fixes, as Peter said, and the breadth of open source packages available on z/OS proper would always be behind, so they felt running it on zLinux was the better strategy.
We don’t believe that at Rocket. We believe, as Peter said, that there are technical advantages to running that open source on z/OS, both from a DevOps standpoint, and even more so for somebody looking to augment a legacy application, say via some Python library with a particular function they’re looking to deploy. Running that Python on z/OS and accessing the data directly, in concert with the legacy application that’s running, or via some subfunction in that modernization effort, is vastly different from running that Python in zLinux or on an off-platform machine and reaching into the data. The latency changes what you can do; trying to embed that open source in-transaction, in the code path of a transaction, becomes much more difficult. So we feel that Z customers, as they look to modernize, will benefit greatly from open source running on Z proper.
Ashley: And as I understand it, I think as you were describing it, Peter, you’re really talking about that compile process, right? That’s where you’re introducing the GCC glibc libraries. So that’s where this process is happening, changing things in a binary to adapt it to be able to use open source. Is that pretty much it, or what else do you have to do to verify or test it? What are folks going to want to go through as they start to introduce this into the process?
Fandel: Once we build it, we obviously have to run it through the test suites, and we have two forms of testing. We have the built-in test suites that most open source products come with. If you go to Git or Python and download the source code from the community, it will be the product plus a huge volume of self-test code. We execute that test code to make sure it produces the same results as on a test UNIX system (I think we use Ubuntu as the comparison) and we compare the results. If there’s any divergence on z/OS, then we inspect those divergences.
And then in addition to that, we have a series of tests that exercise the z/OS-specific capabilities that you want in open source. That’s mostly related to ASCII/EBCDIC conversion, because z/OS, including UNIX System Services, assumes EBCDIC as the default; most of the data you’re dealing with is in EBCDIC. Our philosophy in porting to UNIX System Services is that ASCII is the default internally: we compile with ASCII, and communication between open source packages and programs is in ASCII, but when you reach out to the operating system, that’s where the conversion has to take place. So we have extra test suites for all of our ports that make sure the ASCII/EBCDIC conversion is taking place correctly, both in file I/O and in pipes. That adds to our testing process; it’s not just build it and ship it. We thoroughly test as well.
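The kind of round-trip check Peter describes can be illustrated with a few lines of Python. This is a hypothetical sketch, not Rocket’s actual test suite, and it uses the cp037 codec, one common EBCDIC code page; z/OS itself often uses IBM-1047:

```python
# Hypothetical sketch of an ASCII/EBCDIC round-trip check.
# cp037 is one common EBCDIC code page; z/OS often uses IBM-1047.
text = "Hello, z/OS!"

ebcdic_bytes = text.encode("cp037")   # what crosses the boundary into the OS
ascii_bytes = text.encode("ascii")    # the port's internal representation

# The encodings differ byte for byte: 'H' is 0x48 in ASCII but 0xC8 in cp037.
assert ebcdic_bytes != ascii_bytes
assert ebcdic_bytes[0] == 0xC8

# Converting back must recover the original text exactly.
assert ebcdic_bytes.decode("cp037") == text
print("ASCII/EBCDIC round-trip OK")
```

The real test suites do this at the level of file I/O and pipes, but the invariant is the same: conversion at the OS boundary must be lossless in both directions.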
Ashley: That’s great. I’m glad you explained that too, because there certainly are some fundamental differences in the operating systems and environments, down to the characters that we’re using. How about this: are there any kinds of applications that are easier to bring over this way? That’s not quite the right way to say it, since it’s not porting the application, but, you know, easier to start using open source with? And are there others that are a little more challenging, where you might want to wait to take on that kind of an app? How would you recommend people get started?
Fandel: Well, we’ve found that porting a language is the most challenging, because it dips into every aspect of the operating system. So our conversion of the Python and Perl ports will be the last ones we release, at the end of 2021. The Python and Perl we’re releasing today are still based on manual porting efforts.
Ashley: Okay, good. Any thoughts on that, too, Tim?
Willging: I thought your question originally was about a customer application they were looking to augment with open source, and which of those might be easier.
Ashley: I would like to go there, too. That actually was my – but I’m glad you answered it the way you did, Peter. I’m curious from a customer perspective: data-intensive, ultra-high-security, high-volume; lots of parameters can go into deciding what kind of an application you might use first to go down this path. Any thoughts on what you’d recommend people look at to pick the app they might try this with?
Willging: There are a lot of mainframe applications that leverage CICS for online transaction processing, and CICS is a great environment in which to deploy open source; it’s been very forward-thinking in its development. Take, say, a CICS application that’s written in COBOL, and say you wanted to augment it with some modules written in an open source language. That’s very possible. To make that communication work, the first thing you think about is whether there’s a runtime environment involved, and how you want to load that runtime environment so it doesn’t reload every time the transaction moves from the legacy code to the modern language.
If there’s not, if it’s just a binary without a runtime environment, then you don’t have those types of concerns. So think through how you’re laying that out from an architectural standpoint. The speed of transition between legacy and open source language environments is probably the biggest concern, depending on what you’d like to do. But many companies have successfully done this and said, okay, we don’t necessarily want to go and rewrite our 30 million lines of COBOL.
Ashley: In a billing system probably not where I would start.
Willging: But there’s a new function we’d like to add that potentially augments the application by reaching out to another system where customer data is available; you certainly can do that, and many companies have done it successfully. Again, leveraging more modern languages and having them available for that augmentation is much cheaper and much less risky than a total rewrite and a replatform, where even then much of what you’ve written won’t necessarily run on a new system in the cloud. So replatforming is also very expensive or very risky, depending, again, on the number of lines of code.
Anyway, I probably went too far there, but that’s sort of how I think about it from an online transaction standpoint. Batch is different again: you might just have some data that you want to reprocess and write a new output from that batch job, and augmenting there with open source is much easier. Again, it depends on the language: can it run in batch, or does it need to be hosted within a web server? Even then, you can still have a batch process that alerts a web server to go read these files, so you can embed your open source within, say, WebSphere Liberty or some web server and augment batch processes that way. It’s really about getting the right architecture down first.
Fandel: Another area is data science and machine learning. For the past 20 years or so you’ve had academics and scientists and corporate engineers developing machine learning extensions and data science extensions, and they haven’t been writing those extensions in COBOL. They’ve been writing them in Python. That’s the go-to language for data science.
Ashley: It does give you access to a lot of new innovation, just because of the language as well as the platform and software stack differences. Two other areas I’d like to explore in our remaining time: you mentioned DevOps and the role that open source plays in it. How does it help facilitate moving or developing a mainframe application in more of a DevOps fashion? Are there some things inherent about it, because these are libraries with a compile-time process, and there are other tools? There’s Zowe and things like that that can help you with interfaces and tool chains, workflows and so on. What are your thoughts on that topic?
Willging: Go ahead, Peter, you start.
Fandel: Well, I was just going to say that since we released Git for z/OS, I think it’s been three years now, it was our number one download within 30 days of release. Everybody in the world is using Git now –
Ashley: It starts with Git, doesn’t it?
Fandel: – or source code control. And so, like I said before, it just enables you to have a unified DevOps pipeline, and that’s huge.
Willging: Yeah. To add to that: you have all of your source code in Git and kick off a build process, but then think of the mainframe and some of its legacy components, like the parts of an application that make up the database. DB2 is very big on the mainframe, DB2 for Z. So you take the parts of the application that touch the tables in DB2, extract those and store those in a source code management repository, because in DevOps you really want to store everything as code and represent everything as code, so changes can be tracked over time. Taking some open source, even some commercial tooling, taking open source scripting languages, open source allows you to interact with a DevOps pipeline that’s running off the mainframe, driving the build and reaching in. You know, DevOps is really, again, not a product. It’s a culture and it’s a –
Ashley: It’s a process. It’s a way of creating software, right?
Willging: And so you build that DevOps pipeline using those tools, but then have the tools, the DevOps tooling and scripting, running on the mainframe to be able to interact with the mainframe; sometimes it’s a combination of open source, purchased tooling and then hand-rolled pieces in between to fit your company’s needs. That’s really now possible. Again, having a Git client for the mainframe, and a lot of the other tools there, is really important for that process.
Ashley: I’m curious about the security side of this. A lot of applications have air-gap requirements, things like that, and then there’s the frequency or timeliness of security fixes. I don’t know if there’s a lot of difference between mainframe environments and open source; in open source that tends to happen more transparently, of course. What are some of the advantages, the pros and cons, from a security standpoint of taking this approach?
Fandel: Yeah, I’m glad you mentioned that, because that’s one of the major changes we have put in place this year with the introduction of the Rocket Open AppDev for Z Solution Bundle. We have switched from the old style of one-at-a-time downloads, where you download a tarball, FTP it over and then run through ten steps to set environment variables and get it installed for each open source tool or language. As an example, Git, which has four dependencies, meant four or five downloads; then you FTPed them over and did the ten steps four or five times. It’s like _____ way. We’ve switched to conda as the system for downloading, installing and deploying open source on z/OS. That has made a huge difference in terms of the user experience and the ownership experience.
So now, once you have conda installed on your mainframe, which takes about 30 minutes including the download and the setup, it’s a single command to install Git and all of its dependencies from the command line. Further, conda has the concept of a channel, which is their word for a software repository you download from; conda supports something called a file channel as well as internet channels.
So in addition to this, we have set up two internet channels at Rocket: a public one at anaconda.org that anybody in the world can use to download from, and a secure channel server on our network that is authenticated for our customers on support. Additionally, you can set up a file channel on your premises, which is ideal for air-gapped systems. If you have an air-gapped mainframe and you don’t want it to have a connection to the internet, you can set up a file channel, populate it with the full contents of our bundle, and then all of the developers within your organization can download at will from your on-premises channel. That’s a big advantage in the area of security.
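The channel setup Peter describes might look something like the following `.condarc` sketch. The paths and channel names here are illustrative placeholders, not Rocket’s actual channel locations:

```yaml
# Hypothetical .condarc sketch; channel names and paths are placeholders.
channels:
  # On-premises file channel, ideal for an air-gapped mainframe:
  - file:///opt/conda-channels/zos-local
  # Internet channel (omit on air-gapped systems):
  - https://conda.anaconda.org/example-publisher
```

With a channel configured, installing a tool and its dependencies becomes a single command, e.g. `conda install git`.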
Ashley: Those are some really big ones, big changes.
Willging: It is a big change. There are a lot of companies that have said the only delivery mechanism for software to their mainframe is through IBM Shopz. I have some feelings on that; I think that’s old-school thinking, as far as I’m concerned, but there’s still a culture amongst many mainframe customers that that’s it. I think that is changing, and making it simple, guaranteeing that it’s secure and providing options, which is what Peter just went through, is important to help increase adoption of open source on the mainframe. Rocket is really leading in that area to help make that possible.
Ashley: It is culture. It is sort of tried and true; you know, if it’s not broken, don’t fix it. But on the other hand, when you start to demonstrate new capabilities, new benefits, speed improvements, more security, access to innovation, whatever it might be, suddenly that becomes a compelling reason to start to break down some of those traditions or barriers, or to think about adding this as a capability. You’re not talking about changing everything, right? We’re talking about adding capability to how you create software in a mainframe and an expanded environment, correct?
Willging: Yep, exactly.
Ashley: Good, good. Well, where can folks learn more about this? I don’t know if you have any trials or free downloads or you’ve been in beta on some things that are GA or coming out in GA. What’s the best way for folks that want to engage with you and find out more and kind of give this some –
Fandel: I think you can Google z/OS Miniconda. You can Google Rocket Open AppDev for Z, which is the product released in September, and that will lead you to our product pages, our download page and documentation on how to install z/OS Miniconda, which is the bootstrap to get you started. There are a couple of published videos: if you look at the Open Mainframe Project Summit from September and look under the videos there for “Demonstrating a Secure System for Downloading and Installing Software on z/OS,” you’ll find a video that shows how it’s used.
Willging: The only thing I’d add to that is go check out zowe.org.
Fandel: Thank you, Tim, yes, absolutely, zowe.org.
Ashley: Don’t want to look past that, of course.
Willging: It completes everything else Peter said.
Ashley: Very important.
Willging: Important stuff. We haven’t really talked much about Zowe, but yeah.
Ashley: We’ll have a chance to do that. Would love to have you back, and we can delve into that. I really would like to explore more about this delivery of software into production, and how we start to evolve and expand, helping more organizations consider how they might do that; there will be some great areas to explore further. So I look forward to doing that with you. It’s been a lot of fun talking. Any parting thoughts before we wrap things up here?
Willging: Just real quick: I’d encourage those mainframe z/OS shops, if you don’t have a policy for downloading and consuming open source, to get your thought leaders together and think about the advantages that open source will provide in your modernization efforts. Check out zowe.org, but create a policy not only to consume but to contribute. Get involved in the projects. It’s a way to expand your employees’ interest in the mainframe and to make that open source available there. So if you haven’t done it, you really should.
Ashley: Access to more talent, too, I think, as Peter was mentioning earlier. It really is opening up new pathways; as opposed to replatforming and some of the not-very-attractive options, you now have alternatives because of open source. So folks definitely should consider this, look at it, get into it, get involved. Great. Well, gentlemen, it’s been great talking with you, both Tim Willging and Peter Fandel from Rocket Software. Take care, gentlemen; we will see you next time.
Willging: Thank you.
Fandel: All right, thank you. Take care. Bye-bye.