
I am a Mainframer: Rick Barlow

December 16, 2020

In today’s episode of the “I Am A Mainframer” podcast, Steven Dickens sits down with Rick Barlow, Senior z/VM Specialist at Velocity Software Inc. On this podcast, Rick discusses his journey with the mainframe, his advice for those just starting their own mainframe journey, and where he sees the mainframe going in the future.

Listen now on Anchor!

Steven Dickens:  Hello and welcome. My name is Steven Dickens, and you’ve joined us on the “I Am A Mainframer” podcast, brought to you by the Open Mainframe Project. The Open Mainframe Project is a Linux Foundation collaborative project, and it’s the home for open source on the mainframe. I’m joined today by Rick Barlow, who’s going to have some fun with us on the show. Welcome, Rick.

Rick Barlow:  Thanks a lot, Steven. It’s great to be here.

Steven Dickens: So, Rick, I read your bio. It sounds like you’ve got an interesting background on the platform. Maybe you can introduce yourself to the listeners.

Rick Barlow:  Sure. I’d love to do that. I’ve had a fairly long career, and almost all of it has been on mainframes. I started fresh out of college and got a chance early in my career to get my hands on VM, and VM, of course, is virtualization on the mainframe. We’re the trailblazers in all of that world; it’s the original client-server before client-server was even a thing. VM takes advantage of the mainframe’s ability to schedule multiple workloads seemingly in parallel, obviously they have to time-share, and it was born out of the desire for interactive use of mainframes. Along the years, I’ve had the opportunity to do all kinds of great stuff hands-on. I was an early user of the 4341, the first mainframe where the customer actually had to do some of the work to set up the machine. Through the years, I’ve had all kinds of fun doing new development on the mainframe along with the IBM teams; the company I worked for took advantage of ESP (early support) programs, mostly in the software arena, but also some in the hardware arena.

So I’ve had lots of chances to play with some of the new things before the rest of the world. In terms of hardware, I love to talk about the hardware; it’s a fascinating machine.

Steven Dickens: You are going to be a kindred spirit, I can tell Rick. You should meet me after a few drinks. I wax lyrical about this platform. So we’re going to be fast friends, I think.

Rick Barlow:  The 4341, they used to call it the laundromat because it was about the size of two washers and a dryer. And it had nothing on top of it except a single 3270 terminal that was the interface to the machine itself. Through the years, they got really big. The last water-cooled machine would have taken up, I think, about 20 by 20 floor tiles, two-foot floor tiles in a data center, a great big H-shaped thing. And that had a whole nine processors per side. Take that and consider how big that thing was, and then just about six months ago we put in a brand new z15 single-frame machine, and that thing has 30 processors in it and fits in about half of a rack. When I started working with mainframes, we had three mainframe systems with the equivalent of about 11 MIPS on the floor. And now the z15 is about 1,500 MIPS per single core. I can hardly relate how much growth that is over the years.

I’ve had some really fun times doing hands-on work with the machines; of course, the Z is for nearly zero downtime, and I say nearly because at the company I worked for, we found more than our fair share of those tiny little opportunities for things that didn’t work quite as they were supposed to. But also, in that time, I had a chance to work with our hardware representatives on a number of repairs, real-time repairs with production workload on the machine. And that is a phenomenal capability of the Z hardware.

Steven Dickens:  So Rick, who are you with now? Just so that listeners can get oriented, who’s paying the paycheck?

Rick Barlow:  That’s a good question. Now I work for Velocity Software. We have world-class performance monitoring and measurement tools for z/VM, and we also collect z/VSE data and, recently, this past year, z/OS data; that’s just rolling out in our current version of the software. But we also have a big focus now on cloud provisioning. So we have a tool that you can install in just a couple of hours that will then allow you to clone and manage farms of Linux servers, z/VM guests, and z/OS guests.

Steven Dickens: Well, that’s interesting, and maybe I can educate myself and some of the listeners here. It’s interesting that you mentioned the cloud and deployment space. That’s obviously a new and emerging space, both on the mainframe platform and on distributed: how you manage those environments with Ansible, OpenStack, all of that open source stack, if you will. Maybe you can just walk the listeners through that in a little bit more detail and give the guys at Velocity a chance to pitch their products free of charge. I think it’ll be interesting for the listeners to hear how that stuff works.

Rick Barlow: Sure. I’ll be happy to. The Velocity tools are not open source, unfortunately, but they’re all written in the standard development toolset that comes with z/VM. So there’s a lot of REXX code in there, and there’s a lot of CMS Pipelines involved. If you’re not familiar with CMS Pipelines: if you know what pipelines are in the Unix and Linux space, consider CMS Pipelines a superset of that. The product also exists for z/OS as Batch Pipes. It’s a really powerful, essentially modular style of programming that you do just by stringing a number of pipeline stages together. They can go from the simplest thing of basically executing a command and capturing the output, to complex data manipulation and management of all kinds of things. You can collect data on the various system resources using a pipeline. There are hundreds of ways you can use them. It’s a wonderful tool, and then some.
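For readers who have never seen CMS Pipelines, here is a minimal sketch of the idea Rick describes: issuing a CP command and reshaping its output by stringing simple stages together. The stage names (cp, split, strip, count, console) are standard CMS Pipelines stages, but the specific pipeline is an illustrative assumption on our part, not code from Velocity’s products.

    /* A minimal CMS Pipelines sketch in REXX: count the users logged on */
    /* to this z/VM system. Illustrative only, not Velocity code.        */
    /* Stages: "cp" issues a CP command and captures the response,       */
    /* "split ," makes one record per logged-on user, "strip" trims      */
    /* blanks, "count lines" reduces the stream to a single number, and  */
    /* "console" writes that number to the terminal.                     */
    'PIPE cp QUERY NAMES | split , | strip | count lines | console'
    say 'Pipeline return code was' rc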

Steven Dickens:  What are you seeing, Rick? How are customers tending to use those tools? We’re seeing Linux exploding on the platform right now, and that’s where I fit, but I’m really interested in your perspective on what you’ve seen out there as you talk to clients.

Rick Barlow: Sure. From the VM side of things, the VM hypervisor really gives you the ability to run hundreds or thousands of Linux images in a single LPAR. And the advantage of that is that you can treat them all with pretty simplistic automation that all lives within the same platform. If you can monitor and manage them all from the same hypervisor, you’ve cut all kinds of layers of complexity out of your environment. If you consider a typical multi-platform application deployment, you have lots of things to deal with in terms of network interaction, various kinds of hardware, often multiple sites; all of those things make your environment more complex, which in reality makes it more expensive to operate overall. If you look at a Linux on Z deployment using any of the provisioning tools that are available (IBM still offers Wave, which is another VM-based tool, and various people have tried other open source options), each one of those other tools adds a little bit more complexity to the world and makes it harder to implement and manage.

We like to say that ours is probably the simplest set of tools, because it all works under a single installer and a single management framework for all of our tools: performance, capacity planning, monitoring, management, it all works together.
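To give a flavor of the single-platform automation Rick describes, here is a small hypothetical REXX sketch: a privileged z/VM service machine walking a list of Linux guests and starting any that are not already running with the CP XAUTOLOG command. The guest names and the hard-coded list are invented for illustration; a real setup would read them from a configuration file or rely on a product like the ones discussed here.

    /* Hypothetical z/VM automation sketch in REXX: start a set of Linux  */
    /* guests that should always be running. Guest names are invented     */
    /* examples. XAUTOLOG needs a suitably privileged user ID.            */
    guests = 'LINUX01 LINUX02 LINUX03'
    do i = 1 to words(guests)
       guest = word(guests, i)
       Address Command 'CP XAUTOLOG' guest    /* log the guest on, disconnected */
       if rc = 0 then
          say guest 'started'
       else
          say guest 'not started, XAUTOLOG rc' rc   /* e.g. already logged on */
    end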

Steven Dickens: So Rick, the first week of October, we celebrated 20 years of Linux on the mainframe platform. And maybe you can give us your perspective on that 20 years and how you’ve seen things develop, that’ll be interesting for the listeners.

Rick Barlow:  Sure. That’s a really great story. For those who were around when the turn of the millennium occurred, there was a lot of churn in the IT industry in those last four or five years, as we all thought terrible things were going to happen. I actually happened to be scheduled on shift with the first group of people who would be there over the change of midnight on the millennium. And it turned out to be a complete non-event, which is interesting. The only reason I mention that is that at the same time, in 1999, the first experimentation with Linux on mainframes was happening. In the background, we were doing a little bit of that, and then as soon as the millennium turned and everybody got over the potential risks, it became really popular. We were among the early adopters at my previous company who took advantage of it. And it took a couple of tries, I won’t call them false starts, but a couple of tries to convince people that this was really a viable thing to use in an enterprise-level IT environment.

Once we finally got going in 2004, we started using it at my former company, and within six months we had a hundred production applications running on Linux on Z. We grew from a very small implementation on a couple of spare processors on an existing machine, and within five years we had two standalone machines in two different data centers. At peak time we had about 1,200 Linux guests and nearly 4,000, I’ll call them application instances; that might be a Java virtual machine or a web instance or a database instance, all those kinds of things. So the growth was explosive in those first five to six years, and that was, like I said, early in the adoption by most industries. You’ll see lots and lots of it now; some of the biggest Oracle implementations in the world run on Linux on Z. If you want to run hundreds or thousands of them, the best way is in z/VM, because it gives you a lot of automation flexibility, and some of those tools I talked about make it really easy to manage a whole bunch of Linux instances.

If you need the really big ones, like many of those huge Oracle instances, some of those run in an LPAR because they need a single, very large, more monolithic environment rather than lots of parallel servers.

Steven Dickens: How are you seeing that develop? I mean, I’ve got a perspective working on it from the IBM side; we went to a bunch of new clients in new parts of the world with this technology stack. But what’s your perspective? Where do you see this going, and what developments should people be aware of?

Rick Barlow:  Well, I think it has a huge opportunity in terms of taking advantage of the mainframe’s characteristics: the built-in security, the built-in compression in the latest versions of the machine. If you have customer data of any sort, anything that falls under the various designations, PII and HIPAA and all those things where you have to protect the data, then taking advantage of running open source tools on the Z platform, where you can use all of those things built into your environment, is, in my opinion, the best place to run any kind of a cloud. It’s the one place where you know you can still retain some control of where the data goes. If you go into the public cloud, there are some really good advantages there: really quick turnaround, and some savings in terms of managing your own infrastructure. But think about what it does overall to your application design and complexity, for example, problem determination. When you have a multi-location, multi-platform application, you can run into a lot of trouble trying to figure out what went wrong.

If you’re doing, for example, a Linux on Z deployment, you can have a lot of applications that never have to go outside the box. If you’re using z/OS for your primary data, which is a great thing to do, you can use the internal networking of the machine, and all of those things reduce the overall complexity. If you don’t have to have the bandwidth to run all of your traffic outside the box to get to something else, you move things at machine memory speeds instead of network speeds. Those are all advantages of doing a cloud on Z.

Steven Dickens: I 100% agree with you there, but do you see resistance within the clients? Obviously there’s this push to cloud; are you seeing that get harder? Or are you seeing that those benefits are still going in favor of the platform?

Rick Barlow:  I think there is a lot more push to the public side of things. It appears to me, from my perspective and my experience, that there are a lot of people who don’t have a comfort level with what mainframes can do for them. We’re a couple of generations now into leadership who didn’t have any computing background before open source; most people who are trying to lead have come up using PCs their entire lives. For people like me, who have been around as long as I have, there were no PCs when I was doing computing. I tell my kids and they laugh, because my first programming was still on cards. I did application programming in the beginning and we had to punch cards; terminals were new about four or five years into my career, and PCs came along about three or four years into my career.

And I was an early player with PCs. I had one of the first dial-up connections to the systems I was helping manage, way back in the early 1980s. And nobody did that; everybody got up and drove into the data center in the middle of the night when problems occurred. I had a non-IBM PC that I used to dial up into the mainframes to help support my early VM systems.

Steven Dickens: Great perspective there, and an ability to look back. What advice would you give to yourself as you were leaving college, the 22-year-old Rick? If you had the opportunity to go back and give some advice and counsel, looking back on your career and what you do now, what would you say to your younger self?

Rick Barlow:  Well, it’s interesting you would ask that, because I’ve actually had an opportunity to chat with college students who come to the Tech U and the SHARE conferences. I tell them, if you want to be pretty much guaranteed an IT career, learning mainframes is a good thing to do. And I would tell them that the most important thing you can do in any kind of computing is learn about the fundamental building blocks. I think we put too many people in front of a PC or in front of a terminal, tell them to point and click and drag and drop, and that’s the way they learn about computers. When we learned 40 years ago, we learned more about the processor hardware, and we learned about the interactions of the various components. The reason I think that’s important is that understanding those fundamentals helps you better understand the potential impact of your piece of the application on all of that infrastructure, which then, of course, makes it easier to be a better programmer overall, I think.

Steven Dickens:  Yeah, it was interesting, I was chatting with one of our recent hires about Recovery Time and Recovery Point Objectives, and he’d not heard those phrases before. It was interesting: he had a really good knowledge of the product, but he didn’t understand the context of where it sat in the overall architecture. We were having a call with some of our services guys, and he was pinging me in the background: what’s RTO and RPO? And I’m thinking, yes, you know the product really well, you’ve got that deep knowledge of what our product does, but you don’t understand the context, as you say, and the building blocks of where that one piece fits in the overall architecture. So it’s interesting that you mention that now; I couldn’t agree more. And I think that broader view is something we should be coaching all of the more junior hires to develop.

Rick Barlow:  Sure. Another factor that so few people know much about: you mentioned disaster recovery, and availability of an application has multiple facets. Applications have to be able to deal with bad input before they create bad output. They have to be able to deal with, especially in a multi-platform application, sending out a communication and not getting an answer: what are you going to do? You don’t want things to just go belly up. If you’re on the web on your telephone and you go to do a banking transaction and you just get no answer, you’re going to be really frustrated. So those things, in terms of the whole disaster recovery picture and the way your whole environment fits together, are extremely important. And the more you can learn about some basic infrastructure, the easier that will be.

Steven Dickens: So Rick, we’ve covered some good ground looking backwards: we started 40 years ago with your first time on the platform and the early punch card days, then we came a bit closer to the present and talked about some of the stuff you were doing on Linux. As you look ahead, imagine I give you a crystal ball: where do you see the platform going, say three to five years out? If you were able to look into our collective futures, where would you see things going?

Rick Barlow:  Wow, what a tough question. I know where I’d like to see us. I think the pendulum needs to swing a little closer to having a bigger focus on mainframes than where it is today. Cloud is an interesting term, because depending on who you’re talking to, cloud means something different. In my mind, the cloud is the ability to take components that aren’t necessarily tightly linked and attach them in. So I think of it more in terms of API-type application development, where your core business stays on a mainframe, where it probably should be. If you’re talking about a million transactions a day in some application, there’s nothing else that’s going to scale that way, but you want to make it easy to plug in whatever the most popular new interface tool is.

Today we’re doing a lot with cell phones and distributed tools like that, but those application tools are changing maybe yearly, sometimes perhaps more often than that. So our mainframe applications need to be enabled for that. In a perfect world, I think we should see the mainframe still owning the core, with the whole agile world being able to quickly develop the interfaces and the changes needed in those interfaces. A hybrid environment is where I think we’ll see more and more of that.

Steven Dickens: So if we look ahead, you see more connectivity, more applications connected to the mainframe. Would that be a short way of summarizing it, do you think?

Rick Barlow: That’s a pretty good guess, yeah. Essentially, you leave the core application where it is, and the human interface parts get connected into it.

Steven Dickens: So Rick, that’s been really useful, and I think really interesting for me and for the listeners. As we maybe start to think about wrapping up, what’s that cool project, that thing you’re working on at the moment? Maybe don’t share the client names, but just give us a view of some of the cool things you’re working on right now.

Rick Barlow: Well, I don’t have all the cool jobs; I do more of the infrastructure, making it so the systems are always available. But really, the provisioning and the cloud management is where it’s at. We have to be able to be responsive: somebody wants to try to develop a new interface to an existing application, and you want to be able to spin up a server fast and not make the customer figure it all out. In too many places in the cloud today, the application developers have had to become their own infrastructure experts. They have to deal with security, they have to deal with their connectivity. We don’t want them to have to do that. We want to be able to stand up those servers for them and let them do what they do best, and that is to design and implement the applications. And if things change, you just discard that distributed server and build up some more, depending on how the workloads move around.

Steven Dickens: So, Rick, that’s been a fantastic time here. I’ve certainly learned a lot. I hope our listeners have really enjoyed the conversation as much as I have. Is there anything that you’d want to leave us with as we look to wrap up?

Rick Barlow: Well, I just encourage people who work with mainframes not to be discouraged. I think it’s a great place to work. The technology is amazing, and the more I learn about the machine, the more amazed I am, day by day, as I look at the machines and see where they’re going and what their capabilities are. I used to think I was a pretty smart guy until I got to see more of what the machines I’m working with can do, and realized that somebody out there in the development labs has gone way ahead of where I ever could be and is coming up with new ideas. IBM comes out with a new machine every couple or three years at the most, and then usually an iteration in between with a major enhancement; of course, that’s all going agile now, so we see continuous delivery on those enhancements as well. Daily, I’m amazed by what’s becoming available.

Steven Dickens: And that’s a great way, I think, to wrap up our chat today, Rick. So thank you very much for your time. My name’s Steven Dickens; you’ve been listening to the “I Am A Mainframer” podcast, brought to you by the Open Mainframe Project. If you’ve liked what you’ve heard today, please click and subscribe, and we look forward to speaking to you on future episodes. Thanks so much, Rick, we’ll speak soon.