
Jeff Henry: I am a Mainframer

February 5, 2018

In our latest “I Am A Mainframer” interview, Jeffrey Frey, Retired IBM Fellow, chats with Jeff Henry, Vice President of Product Management at CA Technologies. Jeff is responsible for the Intelligence Operations and Automation mainframe portfolio of products, including Mainframe Operational Intelligence and Dynamic Capacity Intelligence. The two discuss the biggest challenge facing the mainframe going forward, and Jeff also talks about how best to use the mainframe to coexist with the distributed and cloud worlds.

If you’re a mainframe enthusiast or interested in the space, we invite you to check out our new community forum.

Create a profile and post a selfie with your mainframe system, and you will receive an exclusive “I Am A Mainframer” patch.


Jeff Frey: Welcome to another edition of the I Am a Mainframer conversation series, sponsored by the Open Mainframe Project. I’m Jeff Frey. I’m a retired IBMer, an IBM Fellow, and previously the CTO of IBM’s Mainframe platform. I’m also very much a Mainframe enthusiast. It’s my pleasure to host the Mainframe conversation series.

Before we get started, let me tell you a little bit about the Open Mainframe Project. As a Linux Foundation collaborative project, the OMP is intended to help create a Mainframe-focused, open-source technical community. It’s also intended to serve as a focal point for the development and use of enterprise Linux in a Mainframe computing environment.

The goal of the project is to excite the Linux community around the use of the Mainframe, to foster collaboration across the Mainframe community, and to develop and exploit shared Linux tool sets, resources, and services in a Mainframe environment. In addition, the project seeks the participation of academic institutions to assist in creating educational programs and developing the Mainframe Linux engineers and developers of tomorrow.

For today’s conversation, we have the pleasure of speaking with Jeff Henry. Jeff is the vice president of product management at CA Technologies. He’s responsible for the intelligence operations and automation Mainframe portfolio of products, including Mainframe Operational Intelligence and Dynamic Capacity Intelligence. Jeff joined CA in 2015 and has over 28 years of industry experience leading software organizations, specializing in business services delivery, cloud deployments, and bringing systems of engagement together with systems of record for enterprise customers.

Jeff, welcome to the broadcast. It’s great to have you on today. I’ve been looking forward to this discussion with you.

Jeff Henry: Thank you very much. We certainly have a shared background with some of our years at IBM.

Jeff Frey: Yes, sir. I understand that. Speaking of background, to get started, why don’t you tell us a little bit about your background and then specifically what you do at CA Technologies?

Jeff Henry: Sure. I’ve had about 30 years in the industry, the first 26 with IBM. Most of that time was spent going back and forth between strategy or product management and engineering. It would basically be: start a strategy and then go execute on it. I’m not one to hand off those things very easily. I like to see them make their way out into the marketplace. In that time, it was a mix of distributed and Mainframe software across the WebSphere and Rational brands.

I moved into CA Technologies about three years ago, and I’ve been blessed with two of the hottest projects we have in this space: one around operational intelligence, so how we’re using machine learning within some of our operational intelligence products to help our customers move from a reactive to a proactive way of managing their systems; and an inorganic acquisition we did last year focusing on dynamic capacity intelligence, to allow our customers to maintain those SLAs at a cost that’s predictable and makes sense, balancing some of the budget increases they were seeing that had gone unchecked.

Jeff Frey: Very good. That’s interesting. The Mainframe has kind of a reputation for being very well instrumented, and with capabilities being introduced all the time, it seems like it’s a great platform for that kind of intelligent management capability to build dynamic intelligence of various types on top of the platform. It’s great that CA is following that.

Jeff Henry: I completely agree. The products that we’ve built over the years certainly generate a tremendous amount of valuable data themselves, but that data is usually hidden behind the scenes for only the vendor’s products to ever really use. So it’s about bringing that data forward and looking at it across the board, so that it’s not a patchwork of 40 product monitors sitting in your NOC, but really being able to get much more of an understanding of how that data works together when it’s related, and then opening that up to external data sources and the platform itself.

The platform itself is instrumented, as you said, quite well, so being able to take advantage of the data that’s on the platform, share it, and tie into some of the cross-platform investments our customers are making has certainly opened up a broad range of opportunity, and a way to get more proactive about this.

Jeff Frey: Very cool. It ties in nicely, I would imagine, with some of the other things we’re going to talk about today. I would imagine, with the advances that are being seen in business intelligence and data analytics, that a lot of that has to do with what you’re doing and introducing even more intelligence to the raw data that’s being collected so it’s really meaningful for users to understand what’s going on.

Jeff Henry: We’ve seen a bunch of users start projects … I’ll give an example of a customer in generic form. I was over in Europe, and they were very proud to show me what they had done with a bunch of SMF data in Splunk. They had done a lot of historic forensics and built a dashboard that was, unfortunately, more of a historic view, but it was able to show where they had seen abnormalities.

It was a very well put-together project. The problem was they had to do it themselves. When we showed them where we were going with Mainframe operational intelligence, bringing in the data from the various performance management, network management, and storage management capabilities, and then the platform itself, we were able to take that quite a bit further and show them how it’s built into the product, so they didn’t have to go about building that themselves.

Now, that said, I don’t think anybody is looking at this and saying there’s going to be one data lake to rule them all. It’s how we federate these systems and insights working together. One of the key things for us is taking an open API approach to be able to stream some of that stuff out to systems like Splunk to do additional forensics on it or stream other data in, like through the SMF records or other competitive products. We certainly aren’t looking at this as a pure CA solution, but an open ecosystem to help support the Mainframe overall.
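
As a rough sketch of the open-API streaming pattern Jeff describes, the snippet below forwards a JSON metric record to Splunk’s HTTP Event Collector for additional forensics. This is a minimal illustration only; the endpoint URL, token, and metric fields are hypothetical placeholders and are not drawn from CA’s products.

```python
import json

import requests

# Hypothetical Splunk HTTP Event Collector (HEC) endpoint and token;
# substitute the values for your own Splunk environment.
SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"


def forward_metric(source: str, metric: dict) -> None:
    """Forward one operational metric record to Splunk for further analysis."""
    payload = {
        "event": metric,        # the metric itself, as structured JSON
        "sourcetype": "_json",  # let Splunk parse the event body as JSON
        "source": source,       # which monitor or feed produced the record
    }
    response = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        data=json.dumps(payload),
        timeout=10,
    )
    response.raise_for_status()


# Example: a capacity sample of the kind that might be derived from SMF data.
forward_metric("mainframe.capacity", {"lpar": "PROD1", "cpu_busy_pct": 87.4})
```

The same idea applies in the other direction: external sources can push records into the analytics layer over HTTP, so the insights are federated rather than locked to any one vendor’s data lake.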

Jeff Frey: Speaking about open and management technologies and cloud computing and all that’s going on at an accelerated rate in the industry, what do you see as the biggest challenge facing the Mainframe today going forward?

Jeff Henry: I think a lot of it comes down to skills. Are we able to continue to drive the platform forward without demanding a unique set of skills with every product that we bring out? We’re seeing an aging of our workforce in many cases, so the SMEs that have been driving these things for 40+ years are aging out of the workforce. We’re really trying to make sure that we can work with customers in four different ways there.

The first one is just, obviously, we need to get knowledge of the Mainframe into more of the universities. It’s a platform that runs 70%-plus of the world’s mission-critical data. We don’t see that changing. We see most of our customers actually investing and growing, but continuing to be challenged with shrinking budgets. The second place is, how do we just make our products more intuitive? Instead of having all of those various monitors that 40 SMEs have to get on a bridge call with every time something goes wrong, how do we help the customer see how that work comes together in a much more intuitive way, in a modern, overall user experience? Not just the user interface, though certainly that’s important as well.

The third area is, how do we focus on technologies like machine learning, so that you don’t have to code everything into a pattern or a static threshold, but the system can learn based off of the patterns it sees and off of the augmented intelligence it gets from those SMEs, so that some of that tribal knowledge can be hardened and passed on as part of the automation in the program? That really ties to the fourth one. If we can automate more of the simple stuff and get it out of the way, then those SMEs can spend their time where they really enjoy spending it, on more of the hard optimization-type problems or capabilities, rather than on a bridge call that, 80% of the time, wasn’t their problem to start with, and they didn’t know it until five hours later.
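
To make the contrast between a static threshold and a learned baseline concrete, here is a minimal illustrative sketch (not taken from any CA product): it learns a rolling baseline from recent samples and flags values that deviate sharply from it, which is the kind of behavior Jeff describes handing off to automation.

```python
from statistics import mean, stdev


def find_anomalies(samples, window=60, k=3.0):
    """Illustrative sketch: flag samples that deviate from a learned rolling
    baseline, rather than comparing them against a hand-coded static threshold."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]             # most recent history
        mu, sigma = mean(baseline), stdev(baseline)  # learned "normal" pattern
        # Anything beyond k standard deviations from the learned baseline
        # is surfaced for review or automated handling.
        if sigma > 0 and abs(samples[i] - mu) > k * sigma:
            anomalies.append((i, samples[i]))
    return anomalies


# Example: response-time samples (ms) with one spike the baseline flags.
history = [12, 13, 11, 12, 14, 12, 13, 11, 12, 95]
print(find_anomalies(history, window=8, k=3.0))  # -> [(9, 95)]
```

A production system would use richer models and incorporate feedback from SMEs, but the shape of the idea is the same: learn the pattern, then act on deviations.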

Jeff Frey: No, that’s great. Let me push on this just a little bit, Jeff, so the listening audience can also have this perspective. In terms of what you just stated about challenges or the context for improving the operational capabilities of the platform or the IT environment as a whole in terms of skills, etc., do you think the Mainframe has larger obstacles in its way relative to other platforms or other systems in the industry, or do you think …

It seemed to me that the challenges you talked about kind of apply universally across all of the platforms and technologies. Is this challenge more difficult for the Mainframe, do you think, or, because of some of its capabilities, do you think it’s better positioned to take advantage of some of this stuff?

Jeff Henry: I actually think both, but let me not ride the fence; let me jump off and explain what I mean by that. On the one hand, the Mainframe has always been a unique black box with its qualities of service. Security is the easiest one to pull out, because it’s always been known as the most secure platform. But as it’s used in more and more mobile solutions, where you see this starburst effect going back and tapping the data, that data is no longer just in a black box.

How do we make sure that we can treat the Mainframe just like any other platform? For instance, when we built this Mainframe operational intelligence solution, the focus was that it doesn’t need to be on the Mainframe. It’s running on Linux, it runs on Linux x86, it’s running in AWS for our trial environment, and it’s running microservices in a Docker containerized fashion. So there’s nothing “this is your grandfather’s Mainframe product” about it at all, but it can consume Mainframe data, and that was the focus to start. Now we’ve expanded upon that with our brethren on the other platforms to bring in logs from distributed platforms and logs from cloud environments, and we continue to move that forward.

The Mainframe itself, I think, brings some unique skills and characteristics. If you look at the people who have been your typical users over the last 40 years, there’s so much tribal knowledge there that is unique, that somebody coming out of a university just can’t pick it up that quickly. We all have associate programs to help folks learn the tribal knowledge that has built up over the years on how to leverage this unique black box with these incredible qualities of service. But as we move forward, how do we make sure that we treat it like any other platform and make it easier for folks to come on board and take advantage of those unique qualities of service, like we would any platform or any device?

I think there’s a balance that we’re trying to strike here. I got into some of the later aspects around how to leverage machine learning, how to leverage automation, of course. That’s consistent on every platform.

Jeff Frey: That’s great. Let’s talk about some of the environmental considerations as the Mainframe evolves. IT environments are evolving quite rapidly as well, especially with, as I mentioned, business analytics, more advanced uses and capabilities around data and data intelligence, but also in terms of cloud and this movement towards utility service models and making IT a more consumable, service-oriented set of services available to clients.

Give me your thoughts on how we could best utilize the Mainframe, its role in coexisting in a heterogeneous distributed environment, and especially its role in cloud-based environments or cloud-based computing.

Jeff Henry: This is an area where we’ve been partnering with IBM at various launch events, including our CA World event. I think some of the key workloads, if you will, where we’re all seeing uptake, and where the Mainframe has a unique advantage, are things like machine learning and analytics. I say that because 70+% of the world’s critical data is on the Mainframe. And, to a large degree, when you look at that data, whether it’s a retail shop or a bank with customer PII-type data, well, we don’t really want to have it moving around.

That just causes, obviously, a lot of privacy concerns. It causes additional latency, and if you’re not just doing historic forensics, which people have done in data lakes that have been moved off into the cloud, but you’re trying to use that data in a real-time or near-real-time environment, having the analytics processing as close to the data as possible can be quite advantageous, both from a security standpoint and from a latency standpoint. We’re seeing more and more customers pick that up.

Then when you look at how to leverage some of these things and some of the new promising innovations around Blockchain, that’s another one that we’ve seen some pretty good examples of what we can do. One of the prototypes that we showed in CA World last November was taking this Mainframe operational intelligence and including some of the ledger information, some of the streams from a Blockchain and simply being able to show: when does something look abnormal?

By learning how a typical ledger system performs, we can define a baseline pattern, and then when something is abnormal, highlight it, whether that’s a security access issue or a performance management hiccup. It’s something that you can bring together fairly consistently. For those two types of workloads, we’re seeing more investment in the Mainframe platform because of its unique characteristics. But then there’s being able to pull that across into the cloud … because we’ve all lived in our silos in the past. Those days are largely over.

We see very few new applications that don’t bridge across mobile or IoT, cloud environments, distributed environments, and the Mainframe. Being able to show how this stuff works together, that’s one of the areas where I’ve seen Linux and some of these … We talked about our investment leveraging Docker or other open-source environments. Making sure that that’s consistent across the platforms allows just that much more flexibility for vendors and our customers.

Jeff Frey: That’s cool. I’ve had kind of an opinion about some of the cloud stuff for quite some time, and about the way people tend to think about cloud and therefore tend to think about the Mainframe’s role in a cloud-based environment. Jeff, if we could take just a couple minutes, I’d like to get your reaction to this and see if we can have a very short discussion on it.

I run into so many people who talk about cloud as if it’s a place, a location. They’re going to move something to the cloud. For a Mainframe shop … for a lot of our clients, I think that notion, in my opinion, firstly, is just not a very good definition of cloud, and secondly, it scares people into thinking that they’re going to lose control, or operational control, or that they’re going to have to move stuff off to some other environment where somebody else has to manage it, etc.

I’ve always felt like we’ve kind of missed the point there, that cloud is really a service delivery model, not necessarily a mechanism or an approach to pick up your IT and move it somewhere else. Especially for Mainframe clients, we have … As you are no doubt aware, some of our clients, they are the best IT shops that you can imagine. They’ve been doing it for years. They’ve got a lot of experience. They know the IT business and they’re experts at running IT shops. I always thought that we should be doing more to emphasize introducing service-delivery models in the existing IT shops of our clients rather than talking about this as moving stuff off or moving stuff to someplace else to be managed by some other service organization.

I think IBM in particular has this view where, obviously, IBM wants to be in the business of being a service provider, but I think the other side of that coin is to make our clients service providers and make them more efficient in a cloud-based environment. Let me have your reaction to that. Am I oversensitive to that problem, or … Give me your reaction.

Jeff Henry: If anything, I think you make a good point there. If we remove ourselves from thinking, “There’s a distributed platform, there’s a mainframe, and there’s a cloud platform,” and you think of it more as, “How do I manage a delivery environment where I can spin up and spin down more flexibly and charge more accurately, based on a deployment model called Bob?” It doesn’t matter what it’s called, but it allows that kind of morphing. It’s not fixed hardware. It’s not fixed software. It can be spun up and spun down more virtually, which, quite frankly, is where the Mainframe has always been ahead of some of the innovations, and in a much more flexible environment.

If we look at what the cloud has brought to us, and don’t just treat it as this external service-delivery model but as something that all of our big customers are trying to do themselves … Even if you look at what our internal system providers have done for managing CA development environments or IBM development environments, they were trying to do the same thing. Many of our customers are doing that for their internal lines of business and moving towards a shared environment.

So, doing that in a way that virtually allows them the flexibility of a cloud-defined delivery model, absolutely. I absolutely agree that that’s what they’re all shooting for. And if you can have something that’s not sitting on your floor to give you that much more flexibility in a mixed environment, great. But let’s start with what you’ve already got on the floor.

Jeff Frey: Very good. I think we’re probably about out of time here, Jeff. Let me just tell you that it’s been a real pleasure talking with you. This has been great, and I think the listeners will find this very valuable.

Thank you very much, and I appreciate you being on the call. For those of you who are frequent listeners to the call, we’ll have more of these, I’m sure. But for now, I guess we’ll say that’s it. Thanks again, Jeff.

Jeff Henry: Thank you as well. Look forward to future ones of these.