I am a Mainframer: Ross Mauri

September 1, 2020 | Blog, I Am A Mainframer

In today’s episode of the “I Am A Mainframer” podcast, Steven Dickens sits down with Ross Mauri, General Manager, IBM Z. On this podcast, Ross discusses his journey with the mainframe, confidential computing, advice for those just starting their journey with the mainframe, and where he sees the mainframe going in the future.


Steven Dickens: Hello and welcome. My name’s Steven Dickens and you’re joining us on the Open Mainframe Project’s I’m A Mainframer Podcast. The Open Mainframe Project is a Linux Foundation collaborative project designed to be a focal point for open source on the mainframe architecture. I’m joined today by one of the biggest names in the mainframe space, Ross Mauri, who’s the general manager of IBM Z and LinuxONE at IBM. Nice to see you, Ross, good to talk. Welcome to the show.

Ross Mauri: Thanks Steven, it’s great to be here. You know I love open source and I love mainframes, and I couldn’t have thought of two better topics, so looking forward to it.

Steven Dickens: So Ross, one thing we always do, and it almost sounds weird to get you to introduce yourself to the mainframe community because I’m sure most of them know who you are, but just really, if you can just give us a little bit about your role at IBM, what you do, and just a sort of introduction to yourself, so we can get the listeners orientated.

Ross Mauri: Sure. I’ll start with the day I joined IBM permanently. I joined to write operating system code in assembly language, that was my career goal, and I joined in the MVS operating system group, so it was really great. I met my career goal on day one, 42 years ago. Over time, I’ve done a lot of programming, and then I went into management, and I’ve been very, very fortunate to hold a number of really great management and leadership positions. Right now, and for the last six years, I’m the general manager of IBM Z and LinuxONE, as you said, and I have end-to-end responsibility for that business globally for IBM: the financials, of course, quarterly and yearly, but also the strategy, the technology roadmap, the engineering, support, marketing, sales. All of that, in the end, comes to me, because I’m responsible for the top line and bottom line of this business.

I have to say, it’s been a great six years so far. We’ve changed a lot of things in IBM Z and LinuxONE, well, we created LinuxONE in the last six years, and I’m really happy to be here. It’s like I worked for like 35, 36 years preparing myself to do this job and I’m really loving it.

Steven Dickens: Fantastic. So, Ross, there are some things that I know, because I’ve heard you present over the years, and we’ve worked together on the Linux side over the last few years, so I’ve got some background, but I’d love to pull through your association with open source; you mentioned it in the introduction. You’ve got a particularly interesting background in that space, I think, for an IBMer and a mainframe person, so maybe you could just give us a view on your role.

Ross Mauri: Being what I call a programmer by trade, I was always interested in software, and when open source really took off, I read about it; I didn’t participate, I read about it. But then, in 1998, one of the distinguished engineers from the lab in Germany came to me and said, “Guess what, Ross? We’ve got Linux up and running on the mainframe, but we don’t want to tell anybody because we’re going to get in trouble.” And I said, “Well, why are you telling me?” And he said, “Because we want you to tell everyone.” And I said, “All right.” So, I dug in. I knew a bit about Linux, but again, no hands-on experience. I dug in with him, and I saw that we really had something that I thought could be special there.

It’s one of those things where you are thinking about the future, but if you think back to ’98, ’99, Linux and computing was a whole different world back then. But I saw the possibility, and I loved the machine-dependent layer of Linux and things like that. It really just struck a chord with me, so I championed it through the business, including some interesting licenses and things like that; not only was IBM wary of them, a lot of companies were. But I championed it and we publicly announced Linux on the mainframe. We got Marist College to host the Z architecture machine-dependent portion of the kernel. They still host it today. So, that was a fun beginning. But then things really started to take off with Linux, and IBM finally decided to support Linux as a company and be very broad about it.

When Sam Palmisano and I made that announcement, I got a call the next day from Irving Wladawsky-Berger and he said, “Ross, we’re going to start a Linux unit and we want you to run it.” and I was like, “I’m all in.” So, that was a lot of fun. And I hired Dan Fry, who was the first person I hired, and he started the IBM Linux technology center, which really made tremendous contributions to Linux from an open source point of view. Along that path, there was discussion amongst a number of industry players, HP, Intel, Fujitsu, many others, that we needed a home, if you would, not one company, but we needed a home for Linux, and could we start something where we all contributed and created a not-for-profit? So again, reflecting back, the funny thing is that no one in the industry wanted to call this new entity Linux. They didn’t want that in the title.

We called it the Open Source Development Lab. IBM was one of the founding partners, and I was elected chairman of the board of this new not-for-profit. I was chairman of the board for four years, then I stepped off and allowed Dan Fry to come on and take my place, because that was the right thing to do. Again, Intel and a lot of others were there from day one. I love, love, love seeing what the good old OSDL has grown up into; it’s grown up into the Linux Foundation, which is, I think, an absolutely essential not-for-profit and place for open source projects. The Linux Foundation does so many good things on so many dimensions, but I’m really happy that I was part of its roots and that today the Open Mainframe Project is obviously flourishing there.

Steven Dickens: So, Ross, this show’s going to air early September, and that’s going to be a key date for us as we look back at those 20 years. Maybe just give us your own perspective over that 20-year period. You talked about LinuxONE being launched; that’s five years ago on Monday. We’re recording this on the 14th of August, and the 17th is the fifth anniversary. Just give us your own flavor of that journey over the last 20-plus years.

Ross Mauri: The first five years were, I would say, really experimenting: working with clients on proofs of concept, proofs of technology, finding out things that we needed to go after within the Linux kernel for scalability, RAS, security, whatever, and trying to figure out what workloads would really fit best on a mainframe. I’d say the next 10 years, so years five to 15, were really an amazing expansion of Linux within the mainframe footprint, globally. A lot of server consolidation and database consolidation went on in that time, but there was also a lot of open source that wasn’t supported on the platform, on Z. So I think we were kind of limited, but we had great success within the limits that we had; as you know, more than half of the mainframe clients in the world today also run Linux on Z.

But it’s these last five years where we’ve really changed the game, and I’m really happy that I’ve been part of that; it’s been a lot of fun and it’s scratching an itch that I always wanted to scratch. I knew that having a Linux-only mainframe, the LinuxONE, would be a good idea. I knew that we had to really bring more open source packages, runtimes, management frameworks, NoSQL databases, SQL databases, you name it; we needed to have a lot more open source on the platform, and we’ve done that over the last five years. There’s so much now available and supported. I also knew that we needed to take this and move from server consolidation into real new workloads, things like blockchain and confidential computing. There are a lot of workloads now that no one would have ever expected to run on a mainframe, but now not only do they run here, I would argue that they run the best. If you care about performance and security, then this is your best home. If you don’t care about performance and security, then there are other homes as well. It’s been a great journey. In the last five, though, we’ve really stepped on the accelerator. Again, our expansion across the globe with Linux on Z and LinuxONE has just been phenomenal.

Steven Dickens: It’s interesting, Ross, you mentioned something there that’s probably a segue into the next section. You mentioned security and you coined a phrase, confidential computing, I know what we’re trying to do in that space, but I think for our listeners, it’d be really interesting for you to maybe expand. When you say confidential computing, what do you mean? Just unpack that for us.

Ross Mauri: There’s a whole industry initiative around confidential computing, but I’ll tell you what it means to me. What it means to me is this: your data and your code, but especially your data, can be locked down so tight that no one can get to it or access it except you, with the right cryptography keys. Not a system admin with high privilege, not a container admin, not anybody. Compromised credentials and insider attacks, we know, take place; we see that they take place every week, and they probably take place every day and are just not public; there are lots of breaches going on. Confidential computing, when properly implemented, is going to eliminate those attack vectors and the leakage of that data, so that people’s data, whether it’s corporate data that has great financial value, personal data, or medical data, can really be locked down. That’s what it means to me.

Where I’m really going with that, though, is this: confidential computing alone won’t solve the problem. We’ve got our own secure enclave technology within Z. In fact, we released the fourth generation of that technology, which we’ve been working on for 10 years; the fourth generation was released this year and it’s running great. But what you have to do with that, especially when you’re in a cloud environment, is wrapper it so that it’s technically impossible for anyone to break in. If you look at today’s cloud environments, there’s administrative control, and they have what’s called operational assurance that someone can’t get to your data. Operational assurance means: I signed a contract, and therefore I’m trusting the company that’s hosting my data to follow the terms of the contract and not allow access, even though it’s technically possible.

As you see again, the insider threat is the biggest threat to compromise today. Whether the person is bad, or the person is blackmailed or compromised, or their credentials get stolen through phishing or other social engineering, somebody gets in because they’ve got admin credentials. If you really implement confidential computing correctly and have the right wrappers around it, end to end, you can technically assure that no one can get to your data. So, in the IBM public cloud today, with our Hyper Protect services, which, by the way, are all FIPS-certified and all based on LinuxONE, it runs globally.

You can do things like keep your own key. Bring your own key is interesting: when you bring your own key, the cloud vendor takes over control of that key. I don’t know if you knew that. When you keep your own key, it’s your key and no one takes control of it; only you have access. If I locked down your data today in the IBM Public Cloud, Steven, with Hyper Protect, and the US Government came to us with a subpoena and said, “We need to see Steven’s data,” IBM can’t get to it. The only way they can get to it is through you, because you have the key. Confidential computing, to me, is technical assurance that no one can access your data, and it keeps your data private.
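The keep-your-own-key model Ross describes can be sketched in a few lines of Python. This is a toy illustration only: an HMAC-based stream cipher stands in for real hardware-backed encryption, and the `CloudStore` class is hypothetical, not the Hyper Protect API. The point is purely structural: the provider stores ciphertext and never holds the key.

```python
import hashlib
import hmac
import secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with an HMAC-SHA256 counter keystream.
    Illustration only -- use a vetted AEAD cipher in real systems."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

class CloudStore:
    """Hypothetical provider: it stores only ciphertext, never the key."""
    def __init__(self):
        self._blobs = {}
    def put(self, name, blob):
        self._blobs[name] = blob
    def get(self, name):
        return self._blobs[name]

# Client side: the key never leaves the client ("keep your own key").
root_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
plaintext = b"Steven's account records"

cloud = CloudStore()
cloud.put("steven", keystream_xor(root_key, nonce, plaintext))

# A subpoena to the provider yields only ciphertext ...
assert cloud.get("steven") != plaintext
# ... while the key holder can still recover the data.
assert keystream_xor(root_key, nonce, cloud.get("steven")) == plaintext
```

In the bring-your-own-key variant, the client would hand `root_key` to the provider, which is exactly the control transfer Ross flags.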

Steven Dickens: Ross, that’s… you’ve articulated it really well and I think that Confidential Computing is where we’re going to see the industry going. Where do you see the mainframe in that? So, I think, specifically, maybe go down that one layer into the, because we’ve got a relatively technical audience who likes to geek out on this stuff, so maybe if you would just, what specifically have you driven the teams to do to give us that unique capability?

Ross Mauri: Right. IBM Z has been known for its security for decades, and that’s been built around our core banking and financial services customers, who have to have everything locked down. Let’s just take our HSM: it’s the highest-grade, highest-rated commercial HSM there is in the industry. And a lot of the other, again, I’m not going to get deep into all the technologies, but the security technologies we’ve had for banking, what I would say was a proprietary way for banking to lock down those workloads, we’ve now taken those technologies and made sure they can either be accessed via Linux, or a Linux workload, a payload running in one of our secure enclaves, can inherit that security. So again, in our secure enclaves, all the data can be encrypted without any hit to performance.

I don’t care how many gigabytes a second you want to encrypt or decrypt, we can just handle that. That’s an important thing, throughput: how fast can you do it? But another thing is: what standards are you following? Obviously we’re following all of the key industry standards for encryption, but we’re also investing ahead, Steven. We’ve got post-quantum cryptography algorithms already in some of our HSMs that we’re trying out, and as you know, NIST hasn’t yet selected the final algorithms that will be the standard for cryptography in the post-quantum era. But I think there are seven or so that have been down-selected to the final round, and two of them are IBM’s; they’re lattice-based algorithms. So whichever one gets chosen, whether it’s IBM’s algorithms or someone else’s, we’ve already experimented with lattice-based encryption and cryptography, we know how to do it, and we’re working hard.

The next-generation mainframe is going to be post-quantum safe, and I think that’s going to be a big step. So, where was I going with that? We started with banking technology that was really proprietary. We brought it into the open world, so it can be accessed easily and managed easily, through platform as a service in a cloud environment, and we’re investing ahead of the curve. Because no one knows when quantum computers are going to be big enough and stable enough to actually break the cryptography algorithms of today. Some people say it’s in the next five years; others say it’s in the next 20 years. I don’t really care. I want to have my data secure for the day when quantum computers are that big and that stable and they’re in the wrong nation-state’s hands or criminals’ hands. So, we’re investing in the future.
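For readers curious what “lattice-based” means in practice, here is a toy sketch of the learning-with-errors (LWE) idea underlying those candidate algorithms, in Python. The parameters are far too small to be secure; it only shows the mechanism: the public key is a set of noisy inner products with a secret vector, and the noise is what makes recovery hard without the secret.

```python
import random

# Toy LWE (learning-with-errors) bit encryption. Parameters are tiny
# and insecure -- illustration of the lattice-style mechanism only.
random.seed(1)
n, m, q = 8, 20, 97            # dimension, public samples, modulus

secret = [random.randrange(q) for _ in range(n)]

# Public key: m noisy inner products b_i = <a_i, s> + e_i (mod q),
# with small errors e_i drawn from {-1, 0, 1}.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(a * s for a, s in zip(row, secret))
      + random.choice([-1, 0, 1])) % q
     for row in A]

def encrypt(bit):
    """Sum a random subset of public samples; hide the bit in the b-sum."""
    subset = [i for i in range(m) if random.random() < 0.5]
    c1 = [sum(A[i][j] for i in subset) % q for j in range(n)]
    c2 = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    """With the secret, the residue lands near 0 (bit 0) or q/2 (bit 1)."""
    v = (c2 - sum(c * s for c, s in zip(c1, secret))) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0

assert all(decrypt(*encrypt(bit)) == bit for bit in [0, 1, 1, 0])
```

With these parameters the accumulated error is at most 20, safely below q/4, so decryption is always correct; real schemes scale the same idea up to dimensions where recovering `secret` from `(A, b)` is believed hard even for quantum computers.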

Now, within our secure enclave technology, we’ve done a number of things; I said we’re on our fourth generation. We’ve got a DevSecOps pipeline set of tools, so you can very easily compile and bring together your applications and your data, and you can insert them in a confidential, secure, and signed way. That way I know that my code, when it goes in there, hasn’t been touched by anybody, and I know that the secure enclave it’s in, whether it’s the hardware, the microcode, the millicode, the virtualization layer, whatever’s in there, is also signed and secure. So, it’s about security, security, security, and locking down every single element at every stage. Again, we could probably talk for hours, and you should probably get Marcel on here to go real deep if everybody’s interested in that, but I think confidential computing is what’s needed today. It’s what’s needed in the cloud and on-prem, and I’m really glad that IBM is a leader in this area.
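The signed-deployment idea Ross outlines, an enclave refusing any payload whose signature fails to verify, can be sketched as below. An HMAC stands in for the public-key signatures a real pipeline would use, and all names here are hypothetical, not IBM’s actual tooling.

```python
import hashlib
import hmac
import secrets

signing_key = secrets.token_bytes(32)   # held by the build pipeline

def sign(payload: bytes) -> bytes:
    """Pipeline side: sign the built artifact (HMAC as a stand-in)."""
    return hmac.new(signing_key, payload, hashlib.sha256).digest()

def enclave_deploy(payload: bytes, signature: bytes) -> str:
    """Enclave side: refuse anything whose signature does not verify."""
    if not hmac.compare_digest(sign(payload), signature):
        raise PermissionError("payload rejected: bad signature")
    return "deployed"

app = b"compiled application image"
sig = sign(app)
assert enclave_deploy(app, sig) == "deployed"

# Any tampering after signing is detected and the payload is refused.
tampered = app + b" + backdoor"
try:
    enclave_deploy(tampered, sig)
except PermissionError:
    pass
else:
    raise AssertionError("tampered payload was accepted")
```

The same check applied at every layer (hardware, microcode, virtualization, container) is what gives the end-to-end "nothing untouched gets in" property he describes.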

Steven Dickens: Ross, you have some fun on Twitter, and if anybody’s not following Ross on Twitter, they absolutely should, @rossmauri. I’m always amazed that you find the time to engage so much with the audience out there, but we had some fun with this and posted that we’d try and crowdsource, in the spirit of open source, we’d crowdsource some questions. One of the ones that came through from Pat Moorhead, who’s one of the founding partners of Moor Insights & Strategy, relates back to what you were just saying about confidential computing, so I’ll ask Pat’s question: Many companies claim they have confidential computing, what makes IBM’s version special?

Ross Mauri: I think it’s this: all of the claimed confidential computing environments out there still have security holes in them, and it’s not that hard to find what they are in the different layers. Some of them are very restricted by, I’d say, how big of a payload you can have in there: how big of a database, how big of an application? The memory space is not that big. Ours is extremely large in memory size, in payload size. How many terabytes do you want? These things can grow very big. The second thing is: I talked about this end-to-end implementation, so you can have trusted computing. No one’s actually implemented that yet but IBM. It’s one thing to have the core technologies in the microprocessor and the virtualization layer and the container level, et cetera.

It’s another thing to then wrapper it all together, so it can be secure end-to-end and basically guaranteed security. So, the difference is that, again, we’ve been working in this area literally for 10 years. This is our fourth generation, as I said. So, we know what holes are in everybody else’s and we’ve made sure that we fixed all of them. And we have red teams all the time, including the IBM X-Force red team, doing pen testing and other types of testing to ensure that they can’t get in, and I’m happy to say that they haven’t been able to get in, so it’s been a pretty good journey so far.

Steven Dickens: That’s interesting. I didn’t know that you were going to make that claim; that’s a good one, Ross. One of the other questions came through from an IBMer, Mark Martin, and it goes back to what you were saying about quantum, asking: Will quantum replace compute power in data centers, making them become storage warehouses? So, a kind of general question, but I’m interested to get your view. It made me think, so I’d be keen to hear your response.

Ross Mauri: Yeah, it’s interesting. I can’t look a hundred years out, but I can speak from what I know of quantum computers today, and they are very powerful; I have an active research program that I call Z plus Q, which is literally: how do we hook a quantum computer directly into an IBM Z and treat it as an accelerator? Where I’m going with this is: quantum computers are very, very, very good at some things, and they’re terrible at others. They’re not going to replace traditional computing one for one. I think the computing paradigm, as far as the people that I talk to can see, let’s look out 30 to 50 years, is that there are going to be classical computers and they are going to be connected to quantum computers. Think of the quantum computer as a really, really fast accelerator that is really good at certain types of algorithms, like Monte Carlo. They’re really good at some things.

I can see the Z plus Q for our commercial clients, let’s say banks, being tightly coupled. The operational data is on the Z, but then when they want to do some kind of super fraud analytics or something like that, they package the data, shoot it over to the quantum computer, it does its work at lightning speed, comes back, and gives you the answer, an answer that might’ve taken a decade if you did it in classical computing, and the quantum computer can do it in mere seconds or less. So, I see this connection and collaboration between classical computing and quantum computing really being the powerful thing for business, again, as far as the people that I talk to can see, for the next 30 to 50 years. I can’t look a hundred years out; I don’t know who can.
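Monte Carlo methods, which Ross names as a natural fit for acceleration, follow exactly the offload pattern he sketches: the classical side packages the inputs, ships them to the accelerator, and consumes the answer. A classical stand-in in Python; `accelerator_estimate_pi` is a placeholder function, not a real quantum API.

```python
import random

def accelerator_estimate_pi(samples: int, seed: int = 42) -> float:
    """Stand-in 'accelerator': estimate pi by Monte Carlo sampling of
    the unit square (fraction of points landing inside the quarter
    circle, times 4)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

# Classical side: prepare the job, offload it, use the result.
estimate = accelerator_estimate_pi(200_000)
assert abs(estimate - 3.14159) < 0.05
```

The economics Ross describes come from the accelerator's per-sample speed; the surrounding package-offload-consume loop stays a classical program.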

Steven Dickens: That’s interesting, Ross. It leads to the next question, which came from Alex Kim from one of our business partners, Vicom Infinity. His question was: What did we learn from our many attempts to merge other technologies into Z, and can we expect something again soon? He mentioned the Cell processor in 2007, or perhaps a GPU. So, maybe can you give that perspective?

Ross Mauri: He didn’t realize it, but he’s actually already seen it. What we learned from some of those experiments was that we can put other processors, accelerators, right onto the microprocessor. So in z14, we put a crypto engine right on the microprocessor to do bulk encryption. 16 gigabytes a second per core. Go ahead and feed the beast all you want. So, that was putting a processor on a processor. We put a compression processor on, we’ve put a sort processor on, and in the future, we’ll put an AI processor on, for inference. He’s already seeing it, he just didn’t realize it. We learned a lot from those experiments and we did learn that the technology is dense enough today, and we have the real estate to actually put coprocessor or accelerator function right onto the microprocessor and therefore you’re getting the great benefits of an accelerator, but it’s done almost inline with the instruction stream for the rest of the processing.
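To get a feel for why an on-chip engine matters for bulk crypto, one can measure a software-only transform's throughput and compare it against the hardware figure Ross quotes. SHA-256 here is just a convenient stdlib stand-in for bulk encryption, and the numbers will vary by machine; this is a measurement sketch, not a benchmark of any IBM feature.

```python
import hashlib
import time

def software_throughput(total_mb: int = 64, chunk_mb: int = 8) -> float:
    """Return MB/s for hashing `total_mb` of data in `chunk_mb` chunks."""
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    h = hashlib.sha256()
    for _ in range(total_mb // chunk_mb):
        h.update(chunk)          # software-only bulk transform
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

rate = software_throughput()
print(f"software SHA-256: ~{rate:.0f} MB/s")
```

A dedicated on-core engine removes this work from the software path entirely, which is why it can sustain multi-gigabyte-per-second rates per core inline with the instruction stream.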

Steven Dickens: One of the other questions that came through was from Timothy out in Singapore. It was a great question, I’m looking forward to your answer on this one: What’s the most interesting recent example of a mainframe operator using their machine in some particularly clever way to solve an otherwise tough or impossible problem? Basically, what’s the biggest recent mainframe use case surprise?

Ross Mauri: I won’t say it’s the intern working at a bank who brought the JARs for a game, had a test partition, and, I won’t say which game it was, but dropped them in there and they ran like mad. Anytime you use a banking computer to play games on, that’s always an interesting one. But this was an interesting one: one of our PhDs in research, Donna Dillenberger, she’s actually the head of IBM Z research for me, ran an interesting experiment. Out in the university world there was a biomedical competition, an open competition, with a set of biomedical problems, genomic problems, that they wanted to solve. The students had a certain amount of time, they were given the test data, and they had a set of algorithms and suites, but they could make up whatever they wanted to.

Where I’m going with this, it’s not so much about the competition, but what the competition asked for was for different companies to put up clouds that could be used for this competition over a week. Donna put up a very large Linux mainframe cloud with hundreds of virtual servers, and the experiment was: Let’s see if any of them realize that they’re running on a mainframe. Let’s see how transparent this is. Because you and I know Linux is Linux is Linux, but a lot of people out there, the uninformed out there in the world, think that Linux on Z is something different. But, this was like let’s do this blind, and let’s see if someone can figure it out. So, we put it out there. A week later, lo and behold, the winning team happened to use our cluster. It was random how the teams got assigned to different companies’ loaned clusters of these VM servers.

The winning team came back, and when they were interviewed and asked, “Why’d you win?” they said, “Well, we don’t know. But when we ran our models, they literally ran twice as fast or faster than what we were used to, so we were able to iterate on our hypotheses in our modeling, what we were trying to prove out in this genomic sequencing thing they were doing, much faster, so we just made a lot more progress over the time that was allowed than the other guys.” The interesting thing for me is that all these medical students, who obviously know informatics and information systems as well as the programming, none of them realized they were on anything but a normal Linux system. But they noticed: wow, it’s way faster. So, that to me is one of the coolest things that happened in recent years.

Steven Dickens: That’s a great story. I’ve not heard that one. I’m going to come back to you for more details on that; that’s an interesting one, Ross. It gives me a great segue into the next section. One of the questions I always ask of the guests on the show, and you mentioned some of the students and academic community that gathered around the platform: What advice would you give to your younger self? So, we go back to Ross Mauri at age 20, 21, 22, and you’ve got the ability to go back and give your younger self some advice. What would it be?

Ross Mauri: I came to IBM because I viewed it as the biggest sandbox in the world where I could scratch my itch of programming, and it was. There are lots of good companies out there where you can do hardcore, operating-system-level programming. But the advice I would have given myself is to probably stay technical a little bit longer. I was technical and did programming and test and design and all that stuff for about six and a half years; then I went into management, and when you go into management, you really don’t do anything anymore. It’s not an honest job; the honest job was when you code. But I would tell myself to learn even more about coding.

Because I think, even when you’re in senior management like I am now, the more you understand about your business, my business is IBM Z, there are other analogies out there everywhere, whether you’re in the cloud business or you’re in biomedical or you’re in banking, but know more about how your business really works, so that if your career goal is to be a manager, be an executive, run a business, be a CEO, the more you know about how it works on the ground, the better leader you’re going to be because you’ll be able to relate to people.

I say that because I see people that don’t actually have good technical backgrounds and they try to run technical teams. They’re good leaders, but you just can’t help the teams enough as a leader unless you can really understand… You don’t have to understand every bit and byte that they’re coding, but understand what they’re doing and be able to relate to them. I can still relate to the hundreds or thousands of developers that work for me that write millicode, write microcode, write operating system code, write middleware code and database code, and cloud orchestration code because I had a technical background and I use it. So, my advice to myself is: I should have stuck it out another three, four years, and just learned that much more before I went into management.

Steven Dickens: That’s interesting. I think that’s great advice for some of our younger listeners who are maybe starting to put their feet on their first career path, so thank you for that, Ross. I’ve asked this question over the last couple of years of guests of the show and I’m really looking forward to asking it to you. One of the questions I ask is: Look into your crystal ball, you’ve got that classic crystal ball we see in the movies. Where do you see the next three to five years of the mainframe going? What do you see as the future? As much as you’re able to talk about, Ross.

Ross Mauri: Sure. The truth is: we’re already working on what we’re going to ship in the next six years. We always work on the next two generations of the system. So z15 is out there, and today we’re working on Z next and Z next next. Now, I’m not going to tell you what technology is going to be in them and all the details, but I’ll tell you the areas where we’re going to make great strides. I already talked about AI; great strides in AI. We’re going to make great strides, from a software point of view, in bringing a full, open source, cloud-native development toolset to z/OS; it’s already on Linux. I want to take all those great tools that everyone uses for Linux and make it so that the programmers of tomorrow can really leverage those open source tools, regardless of what language it is. It could be COBOL, which I know some people think is ancient, but it’s actually still a pretty good language. But it could also be Python or Go or Swift or Java, whatever language you’re going to run on your mainframe.

Another thing we’re working very hard on is hybrid cloud integration and IBM cloud integration. Hybrid cloud, to me, means really connecting clouds, and that’s done with Kubernetes and containers, and with the new services-oriented programming paradigms. So, we’re going to work a lot on hybrid cloud integration; we’ve already done a lot in the last year, but there’s always more to do. Then, I think there are some really interesting things that we can extend to the cloud, things that, if you’re a bank and you run on-premises today, you depend on, but some of those paradigms don’t exist in the cloud yet. We’re going to bring a lot of the paradigms for disaster recovery and other types of compute paradigms, again, that classical big businesses rely on, to the cloud. Because we think we’ve got a leg up: we know that our technology works, we know the algorithms in it, we know what bugs we’ve had to fix, we know what things didn’t work. So my team, working with the IBM public cloud team, is going to bring… We brought together the Hyper Protect services, but we’ve got a lot more up our sleeve. The answer is: there’s more open source in our future, there’s more cloud in our future, and there’s more AI in our future. And as I already mentioned, we’re going to make sure that everything’s quantum-safe.

Steven Dickens: Fantastic. I think that’s a great answer, Ross. I’m looking forward to seeing that journey over the next three or four years; from what you’ve said, it’s going to be an exciting few years. I think this has been fantastic. What are we, almost 40 minutes now? I could carry on interviewing you, but we probably want to keep it to a length the listener can consume. So, Ross, are there any parting comments, anything else you’d like to share with the listeners before we wrap?

Ross Mauri: I would just say, especially for those of you who are still coding: get out there and code. There’s the LinuxONE Community Cloud if you want a free place to go and play for a while. There are lots of other tools, many of them free, but please get out there and code. Learn about the mainframe, learn about its strengths, learn about what it’s really good at, and which workloads you really should put on a mainframe, whether it’s on-premises or in the cloud.

Steven Dickens: Fantastic. Ross, that’s been a really interesting few minutes we’ve got to spend together today. I think our listeners are going to find this interesting, so thanks for joining us on the show. You’ve been listening to Steven Dickens interview Ross Mauri on the I’m A Mainframer Podcast. You’ll be listening to the show in the first week of September, a week before the Open Mainframe Summit on the 16th and 17th of September. Please go to openmainframeproject.org to register, and thank you for listening to the show.