
Open Mainframe Project Continues Momentum with New Members, New Open Source Projects, and Additional Support for Diversity


Phoenix Software, Syncsort, Western University, and Zoss Team LLC join as new members; Zowe Conformance Program now accepting applications

OPEN SOURCE SUMMIT, SAN DIEGO, Calif. – August 20, 2019 – The Open Mainframe Project (OMP), an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources, is announcing four new members: Phoenix Software, Syncsort, Western University, and Zoss Team LLC; three new projects: Feilong, zorow, and TerseDecompress; the Zowe Conformance Program; and continued support for internships and diversity in the mainframe community.

“The Open Mainframe Project is a focal point for deployment and use of Linux and open source software on mainframes,” said John Mertic, Director of Program Management for the Linux Foundation and Open Mainframe Project. “We are increasing collaboration in the mainframe community, developing shared tool sets and resources and making mainframes, with their underlying compute power, more broadly available. Recent successes and continued international support show fantastic progress in these areas.”

Hosted by The Linux Foundation, the Open Mainframe Project is comprised of more than 30 business and academic organizations that collaborate on vendor-neutral open source projects with the mission of building community and adoption of open source on the mainframe. The Open Mainframe Project strives to build an inclusive community through investment in open source projects and programs, career development, and events that provide opportunities for the mainframe community to collaborate and create sustainability.

New Projects enable new collaboration opportunities

The Open Mainframe Project welcomes three new projects under its umbrella to expand how modern mainframe technology integrates with existing systems. Projects hosted by Open Mainframe Project have a vendor-neutral governance structure to encourage participation from a diversity of vendors and individuals. Feilong, TerseDecompress, and zorow join current projects ADE, Atom plugins for z/VM, Mentorship Program, and Zowe, nearly doubling the number of projects hosted under the OMP umbrella.

Feilong is a z/VM Cloud Connector that provides virtual resource management for z/VM. Through its REST API, users can manage the VM lifecycle dynamically and automatically without deep knowledge of z/VM itself; they do not need to manually provision, manage, and destroy guests. Feilong also provides an SDK that makes it easy to develop system management tools. Fundamentally, Feilong allows IaaS/PaaS solutions such as OpenStack or Terraform to consume z/VM by providing REST APIs, shortening time to market.
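As a rough illustration of what consuming that REST API looks like, the call below lists the guests managed by a Feilong server; the host, port, endpoint path, and authentication header are hypothetical placeholders rather than the exact Feilong contract, so consult the project documentation for the real routes.

# Hypothetical example: the host, port, path, and auth header below are placeholders,
# not the documented Feilong API -- see the project docs for the real interface.
TOKEN="replace-with-a-real-token"
curl -s -H "X-Auth-Token: ${TOKEN}" "http://feilong.example.com:8080/guests"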

TerseDecompress helps IBM mainframe customers uncompress large files, such as system dumps, that were compressed with the TERSE program on a mainframe. Normally, if the receiving party does not have a mainframe in their datacenter, it is not possible to uncompress these files. With TerseDecompress, the files can be decompressed on any workstation that supports Java, with no need for mainframe access to uncompress files that were tersed on a mainframe.
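For example, once a tersed file has been transferred to a workstation in binary mode, decompressing it is a single Java invocation along these lines; the jar name and argument order are illustrative assumptions, so check the project README for the exact usage.

# Illustrative only -- jar name and argument order may differ; see the TerseDecompress README.
java -jar TerseDecompress.jar dump.trs dump.out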

z/OS Open Repository of Workflows (zorow) is a new open source community dedicated to contributing and collaborating on z/OSMF workflows. Many tenured systems administrators use their own processes to perform common system management tasks. Workflows help to create efficiency and reduce the complexities of these tasks while enabling the transfer of knowledge from tenured systems administrators to early career professionals in a seamless and consistent way.

For more information on supported projects, please see: https://www.openmainframeproject.org/projects/supported-projects

Zowe Conformance Program launched to build a vendor-neutral ecosystem around Zowe

Open Mainframe Project’s Zowe turned one year old this month, bringing excitement and energy to the global Zowe community with more than 712,000 pageviews and 4,600 downloads. The project is ready to help members incorporate it into new and existing products that enable integration of mainframe applications and data across the enterprise. To ensure vendors are delivering offerings that align with the Zowe framework, Open Mainframe Project is launching a Zowe Conformance Program.

Each vendor can follow the Testing Guidelines to ensure their offering is aligned with the conformance standards developed by the Zowe community. Products achieving conformance will have exclusive logos and marks they can use in the promotion of their product, as well as be listed in the Zowe Conformance Directory. Vendors that have offerings that are a part of the initial launch include Broadcom, IBM, Phoenix Software, and Rocket Software.

You can learn more about Zowe Conformance Program at https://www.openmainframeproject.org/projects/zowe/conformance

A Growing Ecosystem

Four new members, Phoenix Software, Syncsort, Western University, and Zoss Team LLC, highlight the international reach and range of interest in supporting mainframe development across software consultants, vendors, and academia. If your organization is interested in joining the Open Mainframe Project, please see: https://www.openmainframeproject.org/about/join

“It’s a match made in heaven,” said Donna Hudi, Chief Marketing Officer for Phoenix Software. “The Open Mainframe Project provides exciting opportunities through the open source model for the next generation of mainframers and Phoenix Software prides itself in taking a leadership role in helping to address the z/OS skills gap. Our focus on leveraging the latest technology, creating modern solutions, and sharing our overall enthusiasm for the platform will be further enhanced by the opportunity to share our depth of knowledge and experience throughout the Open Mainframe Project community.”

“Mainframes continue to be a strategic part of data infrastructure, hosting mission-critical workloads for many of the world’s largest enterprises and working with an ever-increasing deployment of hybrid architectures,” said Dr. Tendü Yoğurtçu, CTO, Syncsort. “The Zowe framework is an exciting initiative with the potential to transform the approach to mainframe modernization and connecting to a broader set of applications as well as next wave platforms such as hybrid cloud and blockchain. Syncsort has expanded our membership in the Linux Foundation by joining the Open Mainframe Project, enabling us to contribute in the community effort to make core mainframe services and data interoperable with other systems and emerging technology platforms.”

“With Linux being one of the world’s largest software projects, we are pleased to partner with the Linux Foundation and Open Mainframe Project in providing our students with unique training opportunities,” said Hanan Lutfiyya, Professor of Computer Science at Western University. “Linux and open source will continue to lead the way as important learning tools for students around the world.”

Committed to Diversity

Part of Open Mainframe Project’s mission is to build an inclusive community through investment in programs, career development, and events that provide opportunities to underrepresented and disadvantaged groups around the world. There has long been a lack of representation of women in technology, and it is especially notable in the mainframe industry.

Recently, Open Mainframe Project partnered with SHARE on the launch of a Women in IT initiative at both the Phoenix conference last March and the Pittsburgh conference earlier this month. Both events were successful, with more than 200 participants, and Open Mainframe Project is committed to continuing its support for women in technology. You can learn more about Open Mainframe Project’s participation at SHARE Phoenix at https://www.openmainframeproject.org/blog/2019/03/06/encouraging-women-in-technology

Open Mainframe Project also strives to ensure its leadership is showcasing its diversity value in the mainframe community. Recently, both Meredith Stowell, IBM’s Director IBM Z Community Advocacy and ISV Success, and Anjali Arora, Rocket Software’s SVP of Engineering and Chief Product Officer, joined the Governing Board to provide strategic guidance to the project.

About the Open Mainframe Project

The Open Mainframe Project is intended to serve as a focal point for deployment and use of Linux and Open Source in a mainframe computing environment. With a vision of Open Source on the Mainframe as the standard for enterprise class systems and applications, the project’s mission is to build community and adoption of Open Source on the mainframe by eliminating barriers to Open Source adoption on the mainframe, demonstrating the value of the mainframe on technical and business levels, and strengthening collaboration points and resources for the community to thrive. Learn more about the project at https://www.openmainframeproject.org.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation projects like Linux, Kubernetes, Node.js and more are considered critical to the development of the world’s most important infrastructure. Its development methodology leverages established best practices and addresses the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

 

###

I am a Mainframer – Stephen D. Hassett


In today’s episode of the “I Am A Mainframer” podcast, Steven Dickens sits down with Steve Hassett. Steve is the COO of GT Software. Steve tells Steven about his journey with the mainframe, his thoughts about the mainframe, and its future.

Steven Dickens: Hello and welcome. I’m Steven Dickens and welcome to the I Am A Mainframer podcast from the Open Mainframe Project. I have the pleasure today of being joined by the COO of GT Software, Steve Hassett. Welcome to the show, Steve.

Steve Hassett: Thanks Steven. Thanks for having me.

Steven Dickens: Yeah, always a pleasure. Steve, just for our listeners, can you just give us a brief introduction to your role and kind of where you fit in the GT Software team and  just sort of get us orientated so we can get started?

Steve Hassett: Yeah, sure.

Steve Hassett: So if you’re not familiar with GT Software, we are a 35-plus-year-old software company based in Atlanta, Georgia. Our focus from day one has been on tools for mainframe software developers. Initially, that was for BMS screen mapping and help screens on the old green screen terminals.

Steve Hassett: Today we are laser focused on mainframe integration, so making it easy to integrate those legacy applications through APIs to web, mobile, and other applications. Within GT I’m chief operating officer and I’m responsible for all the day-to-day operations: sales, marketing, technology.

Steven Dickens: That’s a good, great place to start, Steve. If you can give us a little perspective of Steve Hassett, the person, you know, I’ve obviously got your profile up in front of me here, but just give me a view of kind of where you’re based out of and a little bit of your personal journey and that will sort of give some color commentary to our listeners here.

Steve Hassett: Yeah, it, you know, it’s funny that everything comes around.

Steve Hassett: So, I started my career after college. I went to Rensselaer Polytechnic Institute in upstate New York and my first job was as a mainframe software developer, and that was back in the day. And then I went back to business school at the University of Virginia, did some work in M&A and finance, and worked for a number of different companies and did some consulting.

Steve Hassett: Then in 2000 founded an early SaaS web mobile software company. Sold that in 2004 and have been running businesses for a number of different software companies after that. And today I’m running the GT Software business and have been here for about two and a half years. So, I went from a COBOL to SaaS and now sort of helping COBOL integrate with SaaS. It all comes around.

Steven Dickens: So a journey, for sure Steve by the sound of it.

Steve Hassett: A logical journey.

Steven Dickens: Yeah, it’s certainly a journey we’ve seen a lot more of in the industry right now and particularly in the mainframe space as people try and sort of embark on that journey to sort of open APIs, RESTful APIs, sort of connecting backend systems of record, if you will, to those front end systems of engagement.

Steven Dickens: So, maybe just talk me through kind of what GT Software are doing in that space. Just give me a flavor if you will, of kind of where you’re intersecting with that type of dynamic and how you’re helping your clients kind of move that game forward.

Steve Hassett: Yeah, so we have a tool called Ivory Service Architect and it’s the easiest way, we believe, by far the easiest way, to create REST and SOAP APIs that connect to legacy mainframe applications, so that you can create an API that’s exposed to the world so that you can create these newer cutting edge applications and do all your development around the mainframe. So, we’d like to think of it, and to borrow a term that IDC uses, which is to help create the connected mainframe. So, what that means is you keep that core system of record, but you do all your new innovation around the outside as opposed to trying to do the very heavy lifting of rewriting those legacy applications.

Steven Dickens: Hmm. And what are you seeing when you’re engaging out with those clients who are on that journey? Obviously a lot of our listeners are sort of at the intersection of open source and mainframe. How are you seeing that sort of RESTful API, SOAP API, kind of engage with those agile sort of DevOps-savvy, Cloud native, type players?

Steve Hassett: Well, we help in that quite a bit. Because what we enable the mainframe development teams to do is to adopt a more agile methodology when developing APIs and do it on an iterative basis.

Steve Hassett: So, it’s not a six month waterfall project where the end of it, you get an API. But it’s actually something you can build on and iterate on a daily basis and that helps them. It’s interesting, the big thing that I see and I hear every day is when I tell somebody what I do, they kind of give me a funny look and then … why would you be developing things for mainframes? And what they don’t realize is they interact with them every day and that if they’re working for a bank, they’ve got billions of lines of COBOL code and globally trillions of dollars of transactions flowing through those COBOL systems. And it’s imperative to develop the integration.

Steven Dickens: So, I mean it’s interesting, the title of the podcast is I’m a Mainframer and it’s interesting to talk about the reaction you get when you mentioned that you are a mainframer as you meet people. Can you sort of give us a little flavor of your personal journey and how you’ve sort of come to self-describe yourself as a mainframer? Just to give some … I suppose personal commentary that will help sort of frame the overall message for people.

Steve Hassett: Again, having started my career as a COBOL programmer and come full circle through SaaS and now to GT Software, the thing that attracted me to the company was the recognition of how hard it is to modernize legacy systems and how hard it is to integrate legacy systems without having the right tools in place. You know, and for me that was the number one thing that attracted me to this business in wanting to join the team and help steer the company in the future.

Steven Dickens: Okay.

Steven Dickens: And I suppose I’m trying to get underneath that. What do you see as those big challenges? Is it a perception challenge? Is it a technical challenge? Is it a commercial challenge? When you’re seeing that dynamic, the reason why GT Software exists, is it a combination of those factors? Or is it, as I say, a perception commercial or technical reason that you see is holding customers back most?

Steve Hassett: Yeah. That’s a great question because it’s something that I’ve encapsulated into what I call the five stages of mainframe grief.

Steven Dickens: So we pivoted to a grief counseling podcast, Steve, is that what you’re trying to tell me?

Steve Hassett: We have. Grief counseling for mainframers. And actually it’s for companies.

Steve Hassett: So the first stage is at a corporate level and it’s denial. They know they need digital transformation and so they’re going to try to rewrite the legacy applications or hand code integrations to try to modernize. And that doesn’t go very well and is not very fast.

Steve Hassett: So stage two is anger. And at this stage you’ve got the board driving the CEO and the CIO and the leadership team to modernize their applications and it’s not going well and it’s not going as fast as they expected. So they say, let’s rip it out, start over and move off the mainframe. And well that doesn’t go well either. And so in year five of the five year plan or the three year plan, they haven’t really gotten off the mainframe and they realize they can’t get off the mainframe.

Steve Hassett: They go into stage three, which is bargaining. And they think, well they can try to do it themselves and accelerate progress on their own and that doesn’t really work. And then the board realizes that they can’t get off the mainframe. And so they search for another alternative. And they go through a period of grief where someone says, how come we can’t modernize as quickly as we thought? And as a friend of mine who’s a software architect said it’s crazy to think that you can replace 40 years of legacy code in five years. And that’s a discouraging period for the organization.

Steve Hassett: And finally they come to the stage of acceptance and they realize that what they need to do is let the mainframe do what the mainframe does and build around those core systems and build RESTful and SOAP APIs around those core systems. And we are seeing that in every one of our customers, new and old, that they’ve gone through this process where they thought they were going to get off the mainframe and the board got engaged and wanted them to get off and it wasn’t possible. So, they’re searching for something new.

Steven Dickens: So as I look at your LinkedIn profile here, Steve, I think chief grief counseling officer is probably a better title than the one you’ve got written.

Steve Hassett: It requires some level of sensitivity.

Steven Dickens: Some level of sense. Very politically correct.

Steve Hassett: Yeah.

Steven Dickens: So now that’s an interest.

Steve Hassett: Really, the flip side of that is, we have our customers turning to us and saying, how can we better make the case for the mainframe? But they still have this pressure. And so, we try to work with them and help them develop those arguments which begin with what everybody knows: There’s nothing more reliable and scalable and faster than a mainframe. And it’s not just about the box, but it’s about the 50 years of applications you’ve built on that box that are designed for your business and your regulatory environment and there aren’t off the shelf applications that you can just plug in.

Steven Dickens: It’s interesting, you hear so much about … it’s not so much, as you would describe, around this sort of 50 years of code. For me, I’d describe it as 50 years of business logic that’s been written into code that then executes on the platform. And I think that’s what, certainly from my interactions with clients, tends to get ignored. If these were just COBOL programs, then it would be an easier migration. But what they are is a codification of the business logic. And in a lot of organizations you find that that business logic’s not understood, let alone the code.

Steve Hassett: Exactly. When I say 50 years of code, you’re absolutely right. It’s 50 years of business logic and the people that develop that logic are long retired. And it’s impossible to try to, nearly impossible, to try to figure out what the underlying behaviors you’re trying to get out of your systems are, which is why it’s a superior approach to try to just preserve that and build around the outside.

Steven Dickens: It’s a renovate rather than rip and replace is what you’re saying.

Steve Hassett: Exactly.

Steve Hassett: So if you move into or if you buy a new house or if you buy an old house and you have to decide what to do first, the last thing you want to do is upgrade the electrical and the plumbing. You’d rather figure out how to make that work as it is and then build a new kitchen. And in the mainframe world, we help enterprises invest in the kitchen rather than the plumbing.

Steven Dickens: Yeah, that’s a great way to describe it. Make the house look nice and a better place to live in rather than focus on the wiring and the plumbing that nobody actually gets to see.

Steve Hassett: Right.

Steven Dickens: Which is fair. And having done that in a previous house is very expensive for very little tangible return from an experience of living in the house.

Steve Hassett: Exactly. That’s where it came from. I’ve been through that as well.

Steven Dickens: You share the same scars by the sound of it, Steve.

Steve Hassett: Yeah.

Steven Dickens: Give me your perspective, if you would, of how you see the mainframe market right now. It’s always interesting to engage with the senior leaders of the open mainframe project membership and to sort of get a perspective of where they see the market. If you could give me sort of that market perspective, that would be great.

Steve Hassett:  So what we’ve seen is sort of the low hanging fruit of companies that could move off the mainframe, who don’t have the need for this stability and reliability and were able to buy commercial applications to migrate. Most of those have done that.

Steve Hassett: And now you’re dealing with the more complex organizations and the complex business logic where it’s more difficult.

Steve Hassett: So, for what we do, in terms of transitioning from mainframe to connected mainframe, we see it accelerating. And I’ll give you a couple of examples.

Steve Hassett: In banking especially, we see two things driving it. One is Open Banking and one is real-time payments. And Open Banking is the idea that you have to create APIs to allow other companies to connect to your core systems.

Steve Hassett: And as you probably know, it’s the law in the UK. And it’s coming soon to Europe where every bank will be required to securely open up those core systems. And, that’s pretty profound. And it’s a complete change in mindset.

Steve Hassett: So, in the old days when you’d write a check, you give it to the bank or the correspondent would give it to the bank, and they’d batch them up, literally in a batch, and bring them to the Federal Reserve in the U.S. and the Federal Reserve would clear the checks and it would take a week or so, in a batch. And then they went to electronic images, but it was still batch clearing at the Federal Reserve and it took a long time.

Steve Hassett: Now consumers are driving the need and the demand for instant payments. So if you’ve used Venmo or Zelle, you instantly transfer money and it goes instantly out of your account into the new account.

Steve Hassett: And that’s taking over in many aspects of payments, and in a faster way than credit cards. And it means verifying funds exist from the payer, and they immediately go to the payee.

Steve Hassett: But what a lot of people are realizing now is that impacts every other system. As you move from batch to real-time, you have to verify balances in real-time. You can’t have any latency so that you have $100 go in and $300 go out because the balances weren’t updated in real-time. You have to check for fraud in real-time. Verify identity. Make sure you’re not transacting with a restricted company, country, or person. And you have to do that all in real-time. And the systems weren’t set up for that but it’s addressable by proper integrations, both inbound and outbound from those core systems. And that’s a huge trend that’s driving our business.

Steven Dickens: And then how do you see the mainframe in that trend? Just maybe give the listeners a perspective of kind of how the mainframes kind of reacting to that trend, if you will, Steve.

Steve Hassett: So, what we’re seeing is building new capability, doing things like having the legacy system, it could be COBOL or PL/I, call out to a third party to check to see if a person is a terrorist or on a restricted list. And that’s a pretty hard thing to do. But it’s a critical thing to do.

Steve Hassett: And the other obviously is having systems call in to aggregate accounts and initiate payments from the other end. And what it’s doing is actually solidifying the position of the mainframe because again, you’re building these new capabilities around the outside and bolstering the mainframe with APIs to keep its position as the bank’s core system.

Steve Hassett: And I think it’s very, very beneficial and it helps accelerate the recognition that the rip and replace is not a good strategy. If you’re doing it for modernization there are better and faster ways to accelerate your business transformation.

Steven Dickens: So I think you’ve given us a really good perspective there, Steve. That’s been really interesting to listen to for me and I hope for our listeners.

Steven Dickens: One of the questions I always ask as we start to come towards the end of our time here together with the guests, is if you could look ahead, if you could have that classic crystal ball and look ahead to where you see the market going over a two, three, five, ten year horizon. Where would you see both GT Software and the underlying mainframe over that type of time horizon?

Steve Hassett: That’s a complicated question. There are a couple of things that are worth mentioning within that. One is what we hear from customers. One of the reasons that they’re trying to migrate off the mainframe, and I think this is really interesting, is the perceived lack of talent and an aging population, people aging out of being COBOL developers. And I’ve always thought that economics and supply and demand will fix that. And we’re seeing that happen, in that salaries are rising and demand is there for the people with the skillset. And so we’re seeing more people go into learning these legacy languages. And in fact a really amazing piece of evidence of that is in Atlanta we have something that’s being developed called the Georgia Fintech Academy and it’s part of the university system of Georgia.

Steve Hassett: And when I first heard about it, it sounded very esoteric, I think. How do you teach Fintech? Is that a thing you teach?

Steve Hassett: But what they’ve done is they went out to some of the larger financial companies in Georgia and said, what are your needs? What kind of people do you need? How can we help train them? And you know what the first mandate is? COBOL programmers.

Steve Hassett: So, we’re seeing that demand being met today. And so that’s number one, and that takes the lack of talent out of the equation as a long-term reason to try to migrate. I see that not being a true issue within a couple of years.

Steven Dickens: Hmm. I tend to agree. I think it’s just a free market economy. If college kids can see that it pays two to three x to program in COBOL versus Python or Ruby or Node.js or any other modern language, and you’ve got to learn one of these languages, why would you learn a language where you can earn a third of what you could earn elsewhere?

Steve Hassett:  Right.

Steven Dickens: We’re all coin operated to a certain extent. I can’t imagine learning COBOL is much different to learning Python in terms of the length of time it takes to get proficient. So why wouldn’t you go and take the higher paying job to be a COBOL programmer?

Steve Hassett: Right, exactly. Exactly right. And to extend that, part of the reason that folks had moved away is because COBOL programmers, mainframe programmers, administrators, were stuck in a lower trench of pay. And so it wasn’t attractive. But as you said, a language is a language. And what platform it’s running on isn’t all that really material to the satisfaction you get from creating something new and amazing. And as you have more ability to integrate both inbound and outbound to those legacy systems, you can create some new and extraordinary customer experiences. That drives a lot of people.

Steven Dickens: Yeah, I can imagine it would Steve. I can imagine it would. And I’m certainly seeing the same dynamic.

Steven Dickens: So if you see the skills challenge evaporating over a two to three year horizon, what else do you see in store for the platform?

Steve Hassett: Well, I see a huge demand for more interoperability, more connectivity, more APIs. That’s one of the reasons we’re here. And that’s being driven by not just banking but every business moving to more of a real-time operating methodology. That again solidifies the existing platforms and provides the ability to create new platforms around the outside. And sort of the way I refer to it is going from a batch of hundreds and thousands to running your jobs as a batch of one. That’s real-time: execute one transaction at a time. And we’re seeing mainframes being adapted for that.

Steven Dickens: So Steve, as we look to wrap up, is there any other sort of parting comments? Anything you’d give maybe to some of our younger listeners as they look to embark on their mainframe career? Or is there any sort of sage advice you’d give as a mainframer of some sort of standing in the industry to them as they maybe potentially look to embark on a career on this platform?

Steve Hassett: Yeah. So this is one thing that I’ve seen that is a real obstacle for, well, not just the new people but the people that have been in the industry for a long time: they’re not deeply engaged with the strategy of the rest of the organization or even with the rest of the IT organization.

Steve Hassett: And I’ll give you an example of that. I was in Europe visiting customers last year and I was asking them about something called PSD2 – the Payment Services Directive 2 in the EU, which is Open Banking in the UK. And I referred to that earlier as the legislation that requires opening up the systems. And the mainframe people we talked to were developing APIs to support this, but they didn’t know the underlying reason for those development initiatives and therefore they weren’t in a position to really prepare for it. And sort of the light bulb went on: they need to understand what the corporate direction is and why, from an IT perspective, they’re moving in that direction, and then proactively be able to find solutions that leverage the mainframe to solve those problems. And I think at any stage in your career, understanding your company’s objectives and not just narrowly focusing on delivering a requirement is critical to rising within an organization.

Steven Dickens:  Yeah. Understanding the impact to the business of what you’re creating and what your role is within the business. I think that’s very sound advice, Steve. I really do.

Steve Hassett: Well thank you.

Steven Dickens: As we look to wrap up, would there be any sort of final comments before we give the listeners back some time and get them back into their day?

Steve Hassett: Other than to repeat what we’ve said, which is that we’re excited about the future.

Steven Dickens: Fantastic.

Steve Hassett: And where we’re going to take this.

Steven Dickens: Well Steve, thank you so much for your time. Always appreciate it. Appreciate the support of GT Software for the Open Mainframe Project.

Steve Hassett: Yeah. And we’re excited to be part of that and excited to shortly talk more about Zowe and the other things that are associated with the Open Mainframe Project.

Steven Dickens: Fantastic. Well, thank you Steve. My name’s Steven Dickens. I’ve been your host of the I’m a Mainframer podcast, brought to you by the Open Mainframe Project. You can click and subscribe and follow us on various platforms, including iTunes. So please take the time to do that and thank you again for joining us today and check back in the future for more I’m a Mainframer podcasts.

Steven Dickens: So signing off from me. Thank you very much and speak to you soon.

 

How to Add a UserID to z/OS Using a Zowe Script!


Written by Daniel Jast, member of OMP’s Zowe community and zSystems Technical Specialist at IBM Worldwide Client Experience Centers

Here in Poughkeepsie at the Client Experience Center, our Z team is constantly looking for opportunities to modernize by using new tools and technologies becoming available on the platform. I am always looking for ways to leverage the Zowe platform to better manage the Z infrastructure I support. I provide many technical demos for Zowe, where we show off plenty of “out of the box” functionality, as well as some extensions and scripts used in the Zowe environment. I previously worked with my internal team and others on creating a common process to plug Jupyter Notebook web applications into the Zowe Desktop.

My team and I have now moved on to some scripting use cases to better manage and automate tasks across our systems. One task that a Systems Programmer is required to do on a regular basis is creating UserIDs on a system, so a user can log on and get access to z/OS. Not only is this a repetitive task, but one has to know all the different system-specific information, depending on which system the UserID is being added to. With help from my colleague Alex Lieberman, we created a base script for Systems Programmers to start adding z/OS UserIDs to systems. You can download and customize this script to fit your environment; the GitHub repository where this code is stored is linked at the end of this post.

 

Prerequisites: What you need set up

There are some things we need to have set up on our z/OS system and on our local workstation before this script will run successfully. This section describes what to do prior to running the script. The script is written in bash, so you will need bash installed on your workstation, along with the Zowe CLI, since the script drives z/OS through Zowe CLI commands.

Creating Zowe CLI Profiles

There are 3 Zowe CLI profiles needed for each system you are accessing: a z/OSMF profile, an SSH profile, and a TSO profile. To create the 3 profiles, you can use the following command syntax:

z/OSMF Profile:
zowe profiles create zosmf SYSNAME --host IPAddress --user ibmuser --pass myp4ss --reject-unauthorized false --overwrite

  • The z/OSMF Profile is used for the dataset and job API commands. These commands allow us to upload our JCL to the z/OS System, and then submit those datasets as jobs.

SSH Profile:

zowe profiles create ssh SYSNAME --host sshhost --user ibmuser --password myp4ss

  • The SSH Profile is used to issue Unix commands. We issue commands against the UserID’s home unix directory to change ownership and permissions.

TSO Profile:
zowe profiles create tso SYSNAME --account ACCOUNTnumber

  • The TSO Profile is used to issue TSO commands. We issue LISTUSER commands to verify our script is creating what we want.

Wherever you see SYSNAME in the profile creation commands, replace it with the name of the system (LPAR) that you are creating the profile to connect to. It is good practice to name your profiles after the system you are connecting them to, so it is easy to differentiate between profiles.
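Once the profiles exist, it is worth a quick sanity check before running the full script. A minimal check, assuming you named a z/OSMF profile after a placeholder system called SYS1, might look like the following; verify the exact command names against your installed Zowe CLI version:

# List the z/OSMF profiles created on this workstation (SYS1 is a placeholder profile name).
zowe profiles list zosmf-profiles

# Confirm the CLI can reach z/OSMF on the target system using that profile.
zowe zosmf check status --zosmf-profile SYS1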

Allocate a PDS to Store the Jobs

When looking at the script, you will notice that there are 6 different jobs that are uploaded to z/OS and then run. When our JCL is uploaded to z/OS, we need to put it somewhere before we run it. Therefore, allocate a JCL library where these jobs can be stored. You then need to change this value in 2 different places throughout the script.

First change the target JCL Library where you want these jobs to be stored when they are uploaded to z/OS:

zowe zos-files upload stdin-to-data-set "DANJAST.JCL(ZWEUSERX)"

Second, where the job is being stored when submitting the job:

jobid1=`zowe jobs submit data-set "DANJAST.JCL(ZWEUSER1)" --rff jobid --rft string`

Add REXX to Your System

If your organization uses SMS to manage where a UserID’s datasets should be directed, you will need a program to go into your ACS.SOURCE(STORCLAS) member to add the new UserID. Alex Lieberman has written a basic REXX script which goes into the STORCLAS member and adds a new line for the new user being added. The REXX to do so has been uploaded to the Github repo here. This REXX needs to be added to your z/OS System prior to script execution. You also need to point JOB 1 at the dataset you created for the REXX.

Change the following line of JCL in JOB 1 to point to your REXX location:

EX 'SYSL.REXX(INACSSRC)' '${username},${description}'

NOTE: You may need to edit the REXX script to fit your organization’s STORCLAS member.

Script Part 1: Set Variables

Scripting with Zowe CLI commands is easy. As a programmer with a few years of experience, I recognized an opportunity for automation, which is why I write simple bash scripts to automate z/OS tasks. The first thing we want to do in any script we write leveraging Zowe CLI commands is set our variables. In this use case, we need to get the UserID the Systems Programmer wants to add to the z/OS system using the following code:

echo 'What is the UserID you would like to add to z/OS? This must be less than 8 characters long.'

read username

echo The username you are creating is: $username

This takes in the user’s input for what the username will be, and loads that into the $username variable. Now throughout the rest of our script, $username will be replaced with what the user specified. This can be within Zowe commands, as well as within JCL in the script. Some of the other variables we are going to want users to specify at the beginning of the script include:
$system = Which z/OS system we are adding this new UserID to

$description = Why are you adding this userid to your system? (Documentation/management purposes)

In setting the $system variable, we need to add some logic to allow the user to specify which system is the intended target for the new UserID; a minimal sketch of one way to do this follows. Customize this script to fit the environments you would like to use.
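As one possible shape for that logic, the sketch below prompts for the target system and validates the answer against a fixed list; SYS1 and SYS2 are placeholder LPAR names rather than anything from the real script, so substitute your own systems and profile names:

# Prompt for the target system and reuse the answer as the Zowe profile name.
# SYS1/SYS2 are placeholder LPAR names -- substitute your own.
echo 'Which z/OS system should the UserID be added to? (SYS1/SYS2)'
read system

case "$system" in
  SYS1|SYS2)
    echo "Target system set to: $system"
    ;;
  *)
    echo "Unknown system: $system -- exiting."
    exit 1
    ;;
esac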

Script Part 2: Edit Script JCL

Throughout the script, there are 6 different jobs we are going to submit. These jobs, in their current state, use JCL in the syntax that the Poughkeepsie Client Experience Center uses. Your organization may customize its JCL differently, so you will need to review all JCL in the script. As good practice, you can manually load these jobs into datasets and add a user first. This way, you can ensure the jobs function the way you intend, and then fold any changes you make to the JCL back into your script. The purposes of the 6 jobs are as follows (a sketch of the submit-and-verify pattern they share appears after the list):

  1. JOB1: Submit a REXX program that edits the ACS.SOURCE(STORCLAS) member to add SMS logic for UserIDs
  2. JOB2: Create RACF Permissions / Work for new userid
  3. JOB3: Create the ALIAS for the UserID on the system. This is a CATALOG entry to allow dataset creation for the HLQ of the UserID
  4. JOB4: The ISPPROF dataset is needed by ISPF for each userid when using ISPF. When they logon, the logon proc will need this dataset to be active before they can successfully access ISPF.
  5. JOB5: To access OMVS (USS), the userid on PEL systems will attempt an auto-mount of a ZFS file with the naming convention ZFS.USER.<userid>. So, when the user enters TSO ISH or TSO OMVS, auto-mount will attempt a mount with the following specifications: mount point /u/<userid>, mounted to file OMVS.ZFS.USER.<userid>. The ZFS file is therefore needed, and this batch job creates it. Be careful to ensure that the words “aggregate” and “compat” are lower-case.
  6. JOB6: Add an OMVS segment for the userid. Remember the UID we got from TSO ISH? It was 209. This is where that comes into play: the OMVS segment requires the UID. Be careful to check the GID (group ID) with TSO ISH, and remember that the syntax is case-sensitive!
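Each of these jobs follows the same submit-and-verify rhythm, so here is a hedged sketch of that pattern in bash; the dataset name is the sample value used earlier in this post, and the status polling relies on the standard Zowe CLI jobs commands (check the option names against your CLI version):

# Submit one of the uploaded jobs and capture its job ID.
jobid=$(zowe jobs submit data-set "DANJAST.JCL(ZWEUSER1)" --rff jobid --rft string)
echo "Submitted ZWEUSER1 as ${jobid}"

# Poll until the job reaches OUTPUT, then report its final status.
status=""
while [ "${status}" != "OUTPUT" ]; do
  sleep 5
  status=$(zowe jobs view job-status-by-jobid "${jobid}" --rff status --rft string)
done
echo "Job ${jobid} finished with status: ${status}"

In the real script you would also check the job's return code before moving on to the next job.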

To access the script, please visit the following repo: https://github.com/dan-jast/zowe-cli-sample-scripts/blob/master/shell/AddUserSAMPLE.sh

Good luck scripting with the Zowe CLI! If you create complex scripts on your own to automate z/OS tasks, consider contributing those scripts back to the open source community so others can use them!

We’d love to hear about what you’re doing. Join the Zowe channels on the Open Mainframe Project Slack today!

My Internship at Open Mainframe Project


Written by Vedarth Sharma, OMP intern

These past few weeks were full of excitement and learning for me. The goal of my project is to make Docker images for the s390x architecture on SUSE Linux Enterprise Server 15 and to automate the scheduled build process for the clefOS images. The first half of the project was making the images. We initially planned to build the images for openSUSE, but openSUSE had not been kept current on s390x. That forced us to switch to SLES 15 instead.

My first challenge was to build a base image of SLES 15, which all of the other images then use as their parent. This was a challenge for me as I had never built a base image before. To accomplish this, Neale, my mentor, provided me with a SLES 15 Linux guest. I accessed the VM and wrote a bash script that creates a chroot environment, adds repos to it, installs essential packages, and finally makes a tarball. This tarball is then used by our Dockerfile, which builds the base image using `FROM scratch`.
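In outline, the approach looks roughly like the following sketch; the repository URL, package list, and paths are placeholders rather than the exact values from my script:

#!/bin/bash
# Rough outline only -- repo URL, package list, and paths are placeholders.
ROOT=/tmp/sles15-root
mkdir -p "$ROOT"

# Register a SLES 15 s390x repository for the chroot target and install a minimal package set.
zypper --root "$ROOT" addrepo http://repo.example.com/sles15-s390x/ sles15-base
zypper --root "$ROOT" --gpg-auto-import-keys refresh
zypper --root "$ROOT" --non-interactive install bash coreutils zypper

# Package the populated root file system as a tarball.
tar -C "$ROOT" -czf sles15-base.tar.gz .

# The Dockerfile then builds the base image from scratch using that tarball.
cat > Dockerfile <<'EOF'
FROM scratch
ADD sles15-base.tar.gz /
CMD ["/bin/bash"]
EOF

docker build -t sles15-s390x-base:latest .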

After achieving this goal, I moved on to building more images for SLES 15. Then came the MEAN stack. The tech stack of MongoDB, Express, Angular, and Node.js is very popular, and we wanted to offer it as a Docker image on s390x-based SLES 15. But we couldn’t find any official MongoDB package for SLES 15. I tried using the MongoDB build for SLES 12, but it didn’t work: it was unable to find `libcrypto` even though it was installed. We finally decided to wait for an official release of MongoDB for SLES 15 before building images for MongoDB and the MEAN stack.

The next phase of the project was building a system to automate the image builds so that the images remain up to date. I achieved this by building a pipeline in Jenkins. The pipeline runs automatically once every month and whenever a commit is made to the codebase. It executes the appropriate commands to build all of the Docker images, which lets us see when a change breaks an image because the pipeline fails, adding CI/CD support to the repository. I am building a similar pipeline for all of the images in the clefOS repository as well, which will keep the clefOS images up to date.

Building the pipeline was a challenging part in itself. A Jenkinsfile can run bash commands, and if the Jenkins server has access to the Docker daemon, building images is quite straightforward. The problem is that you cannot push those images that way: you would have to run docker login and provide credentials in the script, which is quite insecure. This is where the Docker build plugin comes to the rescue. In a scripted Jenkinsfile it allows you to build a Docker image by simply writing:

app = docker.build("repo/name")

This allows us to take that app object and push the images to Docker Hub by utilizing the Jenkins credentials plugin. But this plugin has a major limitation: it only works with a Dockerfile in the root directory. My workaround was to create an app object for each of my Docker images, copy that image’s Dockerfile and dependency files to the root directory, run the docker.build command, move the files back, and repeat the procedure for the other images. This is the only working solution I have found that both builds the Docker images and lets us push them from Jenkins itself.

I must say this has been a very challenging as well as rewarding project. Each day I learn something new, and I master it once I have implemented it a few times. By working on s390x-based VMs, I have learnt how an s390x system works. I am looking forward to continuing and finishing this project successfully.

Link to the code that I wrote: https://github.com/openmainframeproject-internship/DockerHub-Development-Stacks

Zowe is now available on IBM Z Trial!


by Pluto Zhang, Zowe community member and IBM developer

This blog post originally ran on the IBM Developer blog earlier today.   

Zowe™, a project of the Open Mainframe Project™, is now available on IBM Z Trial. You can get your hands on a Zowe trial on demand at no charge. This trial environment is a fully configured z/OS environment with Zowe preinstalled and set up, along with a set of integrated hands-on tutorials. You can start your Zowe trial experience within hours of completing a simple registration, and you are assigned a trial system for 3 days.

Figure 1. zTrial wizard for Zowe

Register and try it out

Explore the project first

In this Zowe trial, you can experience cloud-like modern interfaces that Zowe provides for interacting and working with z/OS. You will also learn how to create and extend Zowe using sample plug-ins and extensions. This trial environment includes a sample Node.js API, a sample web application plug-in, as well as a sample Zowe Command Line Interface (CLI) extension along with all required resources and files.

The trial offers a set of basic scenarios with easy-to-follow instructions that explain how to complete the following tasks. Each scenario takes about 15-20 minutes.

  • Get started with Zowe, including Zowe Desktop and Zowe CLI
  • Extend Zowe with new APIs
  • Extend Zowe Desktop with new web application
  • Extend Zowe CLI with new CLI commands

New to Zowe? No problem! You can get to know the modern Zowe interfaces through several simple tasks in the “Getting started with Zowe” scenario.

Figure 2. Getting started with Zowe scenario

Already familiar with Zowe? You can learn how to extend Zowe by creating your own APIs and applications by following the step-by-step instructions provided.

Figure 3. Extending Zowe API scenario

Figure 4. Extending Zowe Application Framework scenario

Figure 5. Extending Zowe CLI scenario

We hope you find this trial program informative in getting started with Zowe as you kick-start your journey toward becoming a developer, extender, and/or contributor to this new open source project on z/OS.

Register and try it now!

To learn more about the Open Mainframe Project, click here. To learn more about Zowe, click here.

Zowe™, the Zowe™ logo, and the Open Mainframe Project™ are trademarks of The Linux® Foundation. Linux is a registered trademark of Linus Torvalds.

Zowe Jupyter App: A Big Hit at the Poughkeepsie zModernization Summit Event


Guest blog from Daniel Jast – IBM Z Systems Technical Specialist

The Poughkeepsie Client Experience Center recently hosted the annual IBM Z Summit Program technical training event, titled the zModernization Summit Event. At the training event, we provided 20 of the new IBM Z technical sellers with hands-on training in some of the “modernizing” applications on Z. These technologies included CICS, DB2, z/OS Connect, z/OSMF, Zowe, Spark, ICP, Docker, and more. Over the week, participants took a COBOL application running on z/OS, exposed the application’s services as APIs, and then created a containerized application with those APIs, which was then deployed to the IBM Cloud. The application was a stock trading application that allowed users to create an investment portfolio, add stocks to the portfolio, and so on. We then gave users hands-on experience with the Zowe Desktop and CLI, seeing how easy and simple completing complex z/OS tasks can be for new users by utilizing Zowe.

On the Zowe Desktop, we had created an IFrame application that ran Jupyter Notebooks right on the Zowe Desktop. The Jupyter Notebooks are used to exploit Spark to analyze data sources of your choosing. We had the notebooks analyzing investment portfolio data that the users had unknowingly created as they worked through the technical labs throughout the week of training.

Training participants thoroughly enjoyed the Zowe and Jupyter Notebook piece. After getting earlier training on “green screen” interfaces to z/OS, the Zowe Desktop and Zowe CLI were very well received. The majority of the audience was straight out of college, matching the target audience for the Zowe platform. When participants were asked what their highlights and key takeaways from the week of training were, some mentioned Zowe explicitly:

 

“Being able to learn more about Zowe and more innovative technologies integrated with Z.”

 

“Highlight: Zowe presentation!”

 

“My favorite (presentation) was the Zowe presentation”

 

“I think I really enjoyed learning about Zowe, and the hands on lab are super helpful in understanding!”

 

If you are looking to add Jupyter Notebooks to your Zowe Desktop, you can follow the github documentation posted here: https://github.com/zowe/jupyter-app

 

You can also follow the custom documentation written by the Client Experience Center’s Alex Lieberman(Alex.Lieberman1@ibm.com) and Dan Jast here: https://ibm.box.com/s/wo583vfw4klc2jg7l0zxo0k55yvw681b

 

If you would like to go through most of the Zowe Hands on Lab which event participants went through, you can request a demo from the Client Experience Center Demo Portal located here:

 

 

NOTE: If you are not an IBMer, you will not be able to participate in this demonstration without your IBM representative’s help. You can contact Dan Jast (daniel.jast@ibm.com) for further information.

Join us in Welcoming the 2019 Open Mainframe Project Interns!


We are excited to introduce the 2019 Open Mainframe Project interns! This year, we welcome 9 global students, each paired with mentors from OMP member organizations such as Red Hat, IBM, Sine Nomine Associates, and SUSE, who designed projects to address specific mainframe development or research challenges.

Welcome interns and we can’t wait to see what you do this year! Here’s a look at this year’s students:

Name: Priyanka Advani

Project: The Compliance Project

Priyanka is pursuing a Master’s in Computer Science at Santa Clara University and has 7 years of experience working in the mainframe industry. During that time, she worked in the insurance and banking industries on Test Data Management, ETL processes, database refreshes, data obfuscation, and many Z-series automation and development projects. Prior to that, she received her Bachelor’s degree in Computer Science in India. She is a core technical person at heart and is always excited to learn new things.

Name: Kautilya Tripathi

Project: The DockerHub Development Stack

Kautilya Tripathi is a backend developer who is also known as knrt10 in the open source community. He has learned most of his developer skills while working on personal open source projects or contributing to open source organizations on GitHub. He values open source because, for him, it’s a way to give something back to the community. He has a wide online presence and is an active contributor to open source organizations. You can follow him on GitHub (knrt10) to learn more about his ongoing open source projects.

Name: Naveen Naidu

Project: Boost Context Module Implementation for s390x

Naveen is a third-year Computer Science undergraduate from Bangalore, India. He goes by the name @Naveenaidu on the internet. He’s inquisitive by nature and has a burning desire to explore various fields to help people benefit from technology. He is an open source aficionado and is on the core developer team of coala (an open source static analysis tool). He was also a 2018 Google Code-in mentor for coala.

When Naveen is AFK (away from keyboard), he spends his time giving talks and conducting workshops promoting the open source community and its advantages. He loves watching animated movies and reading fantasy fiction (Lord of the Rings being his favorite).

Name: Vedarth Sharma

Project: The DockerHub Development Stack

Vedarth is a very enthusiastic person. Whenever he sees something new, he tries to learn more about it. He has been contributing to open source for the last two years. He has learned a lot from the different projects he has contributed to and still finds it interesting to explore new projects. He loves programming, and keeping up with new technologies is his hobby.

Name: Yash Jain

Project: Zowe Features Addition

Yash Jain is a software engineering intern with the Open Mainframe Project working on Zowe Feature Addition. He has contributed to Kata Containers and has worked on VesitLang, a teaching aid that provides visualization for common graph algorithms. He is a computer engineering student at the University of Mumbai, India. Apart from programming, he loves to play chess and has also participated in the Commonwealth Chess Games.

Name: Usman Haider

Project: Zowe Features Addition

Usman Haider is a graduate student with experience in programming languages including Python, C, C++, Qt, TypeScript, HTML, and shell scripting. He has been a user and programmer of FOSS for more than 5 years and loves to contribute to open source projects. He participated in Google Summer of Code and contributed his work to the GNSS-SDR project. Usman is a past intern of the Open Mainframe Project and a member of Linux Academy. He has just started his journey in mainframe and is interested in Zowe development. Among his interests are Linux development, embedded Linux development, open source software development and packaging, machine learning, and cloud technologies.

Name: Shivam Singhal

Project: The Compliance Project

Shivam is an avid open source contributor. He is a third-year CSE bachelor’s student. He is a bug squasher in the Mozilla Add-ons ecosystem and Mozilla Reps, and part of the Featured Add-ons Advisory Board. He loves to hack on the Firefox rendering engine. He lives on the internet by the name `championshuttler`. He loves to meet new people, connect, discuss, network, and grow, mostly at conferences and tech meet-ups. Most of his weekends are spent in hackathons.

Name: Sladyn Nunes

Project: Big-Endian Support for BoringSSL

Sladyn Nunes is a third year CSE Undergrad from Mumbai University. He enjoys contributing to open source projects and has contributed to coala and honeynet as well as famous repos like git-bug. He has an affinity for competitive programming and the adrenaline rush it brings. He goes by the name sladyn98 on the internet. 

Name: Dan Pavel Sinkovicz

Project: Cloud Foundry Operator for Kubernetes on Z

Dan Pavel Sinkovicz is a Computer Software Engineering student at the University of Northampton.

Learn more about the mentors and the projects here.

Dalian University of Technology Expects to Deliver New Zowe Tools


Guest blog from Kun LU, Ph.D., Associate Professor, Director of Innovation Practice Center, School of Software Technology – Dalian University of Technology – China

When it comes to the Best DevOps Open Source Project, do you think it has anything to do with the IBM mainframe? Well, it does. Open Mainframe Project’s Zowe, the first open source project based on z/OS, was named one of the finalists in the “Best DevOps Open Source Project” category by DevOps.com. Zowe is an open source software framework that provides solutions allowing development and operations teams to securely manage, control, script, and develop on the mainframe like any other cloud platform. Leveraging the Zowe framework, we (DLUT) plan to develop and contribute a few tools to Zowe to help developers be more efficient, productive, and agile in their daily work on z/OS.

We will work closely with the IBM CSL CICS team and expect to deliver the following tools as Zowe plugins for CICS customers, first for the China market, in 3Q/4Q of this year.

CICS Statistics Visualizer plugin

In the past, a CICS health status check had to be done manually by CICS experts with profound CICS skills and experience. The workflow includes understanding the system design, configuring CICS statistics programs, collecting CICS statistics data from a sample CICS region at service peak time, uploading and analyzing the statistics data, and finally generating reports and diagrams. With the CICS Statistics Visualizer plugin on the Zowe Web UI, all of this work can be done automatically. The plugin also supports comparing statistics data for a single region across different time intervals, which largely simplifies the monitoring and analysis work customers used to do by hand, such as tracking CICS health status after an upgrade or configuration change, or the trend of system resource utilization over time. Furthermore, key CICS health indicators can also be queried with a simple shell command through the Zowe Command Line Interface (CLI), as sketched below.
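For illustration only, a health-indicator query through Zowe CLI might look roughly like the line below once the plugin is available; the command group, command, and option names are hypothetical placeholders, since the plugin’s CLI interface has not been published yet.

# Hypothetical command; the plugin's real CLI syntax may differ.
zowe cics-statistics get health --region-name CICSPRD1 --interval "08:00-10:00"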

CICS Liberty Debugging Assistant plugin

In the past, customers needed to collect and store a list of debugging information from different storage systems and locations to troubleshoot a CICS Liberty problem: the CICS job log from mainframe JES; the JVM profile and Liberty configuration files; the STDERR/STDOUT/JVMTRACE files and Liberty output files such as messages.log, trace.log, and ffdc from zFS; and, depending on the problem, an MVS system dump along with JAVADUMP, SYSDUMP, and CEEDUMP dumps. In addition, the CICS trace level had to be configured and the Interactive Problem Control System (IPCS) used to format the unformatted dumps. With the CICS Liberty Debugging Assistant plugin, all of the debugging-related configuration, debugging information collection, and dump formatting can be done conveniently in a one-click operation.
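As a point of comparison, collecting just two of these artifacts by hand with standard Zowe CLI commands already takes several steps, as in the rough sketch below (the job ID and zFS path are hypothetical placeholders); the plugin aims to fold this collection, along with trace configuration and IPCS dump formatting, into a single click.

# Download the CICS job log from JES (job ID is a placeholder).
zowe zos-jobs download output JOB03172

# Download the Liberty messages.log from zFS (path is a placeholder).
zowe zos-files download uss-file "/u/cicsusr/wlp/usr/servers/defaultServer/logs/messages.log" -f messages.log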

CICS Application CI/CD plugin

Nowadays, CI/CD is widely used in software development. More and more customers want CI/CD for z/OS development in order to modernize mainframe development, integrate it with distributed platform development, and help full-stack developers work with mainframe services. Zowe acts as a good platform for closing the operational gaps that stand in the way of z/OS CI/CD, and it already provides plugins such as the CICS and DB2 plugins to help simplify application development. As Git is widely used in distributed development to centralize code management, Zowe plugins (or CLI commands) can be built for mainframe DevOps, for example to:

  • Trigger build automation and end-to-end test automation (e.g., online COBOL or batch COBOL)
  • Trigger an environment update when there is a CICS or DB2 configuration update
  • Deploy updates into multiple environments (a pipeline sketch follows this list)
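As a rough sketch of what such a pipeline step could look like, the following Zowe CLI commands upload source from a Git workspace, submit a build job, and submit a test job; the data set and member names are hypothetical placeholders and would follow your site’s naming conventions.

# Push the latest COBOL source from the Git workspace to a PDS member (names are placeholders).
zowe zos-files upload file-to-data-set "src/cobol/payroll.cbl" "DEV.COBOL.SOURCE(PAYROLL)"

# Submit the build JCL and wait for the job output.
zowe zos-jobs submit data-set "DEV.BUILD.JCL(COMPILE)" --wait-for-output

# Submit the end-to-end test job the same way.
zowe zos-jobs submit data-set "DEV.TEST.JCL(E2ETEST)" --wait-for-output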

Expose z/OS Connect EE APIs in Zowe API Mediation Layer

By | Blog

This blog was written by Andrew Smithson, an active member of Open Mainframe Project’s Zowe community and technical lead for z/OS Connect Enterprise Edition. This blog originally ran on IBM Developer’s website and can be found here

One of the key features of Zowe version 1 is the API Mediation Layer, which provides a single place where you can find all the APIs available on your mainframe and access them from a single well-known HTTP endpoint. When you first install Zowe, you get the APIs for working with data sets, jobs, z/OSMF, and the API Mediation Layer itself. If you want to add your own APIs, such as the administration APIs for a z/OS Connect EE server, you can use Zowe to add an existing API without having to change anything in the server that provides the API.

The configuration file contains the information about the server that is displayed in the API Catalog, as well as the URI the server is available on. This file is then placed in the config/local/api-defs directory inside the API Mediation Layer installation directory. The API can be made live by sending an HTTP POST request to the /discovery/api/v1/staticApi endpoint of the discovery service. If you use the httpie client, the following command can be used.

http POST https://<hostname>:<port>/discovery/api/v1/staticApi

To make it easier to get started, a sample configuration file is available on GitHub. This file can be updated with your server-specific details and then installed into the API Mediation Layer to add your own z/OS Connect EE APIs.
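For orientation, a static definition for a z/OS Connect EE server can look roughly like the sketch below; the field names follow the API Mediation Layer static onboarding format as we understand it, while the service ID, titles, and URLs are illustrative placeholders, so treat the sample file on GitHub as the authoritative reference.

services:
  - serviceId: zosconnect
    title: z/OS Connect EE
    description: Administration APIs for a z/OS Connect EE server
    catalogUiTileId: zosconnect
    instanceBaseUrls:
      - https://<hostname>:<port>/
    routes:
      - gatewayUrl: api/v1
        serviceRelativeUrl: zosConnect/apis

catalogUiTiles:
  zosconnect:
    title: z/OS Connect EE
    description: APIs for administering z/OS Connect EE servers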

If you have questions or would like to learn more about Zowe, join the community Slack channel.

BUPT Students join the Zowe Community with a specific project in mind

By | Blog

Written by Kuang Jian, Dean of the School of Software Engineering, BUPT

This is a guest blog post by Kuang Jian, Dean of the School of Software Engineering at the Beijing University of Posts and Telecommunications (BUPT), a university directly under the administration of the Ministry of Education (MoE) and co-built by the Ministry of Industry and Information Technology (MIIT) in China. It is a comprehensive university with information technology as its main feature, engineering and science as its main focus, and a combination of engineering, management, humanities, and sciences as its main pursuit, which has made it an important base for fostering high-tech talent.

This blog will provide an overview into how BUPT works with the IBM China University Program and how students were introduced to Open Mainframe Project’s Zowe, an open source software framework for the mainframe that strengthens integration with modern enterprise applications.

Dean Kuang, students & IBMers at the IBM China Systems Lab

BUPT established a partnership with IBM China’s University Program in 1996 and has participated in several of the program’s initiatives. In fact, the team from the School of Software Engineering at BUPT won the second-place award in the ICBC-IBM FIN-Tech Contest in 2018.

Through the FIN-Tech contest experience, we gained some basic knowledge of IBM Z and learned the value it brings to the market. Unlike with distributed platforms such as Linux, it is not easy for university students to get hands-on experience with the mainframe. So when Open Mainframe Project’s Zowe was introduced to us, we found it very attractive and wanted to know more.

From our initial research, we learned that Zowe was released last August to break down silos and reduce the steep learning curve of IBM Z. The Zowe framework provides interoperability, using the latest web technologies, among products and solutions from multiple vendors. It enables developers to use familiar, industry-standard, open source tools to access mainframe resources and services. An innovative framework based on a modern technology stack, Zowe provides the perfect opportunity for students to become the next generation of mainframers.

In addition to learning more about Zowe, the IBM China Systems Lab invited us to visit the China Systems Center for a mainframe tour. It was an amazing experience that inspired our students to join the Zowe community.

IBM Z expert introducing IBM Z14 and Storage DS8000 to Dean Kuang & students

We are collaborating with IBM to develop a native Zowe application plugin with a configurable page and widgets to view data provided by IBM Z automation and monitoring products. We recently kicked off this project with three postgraduate students, who have already been assigned to work together with IBM advisors. We believe our students can complete the project successfully with their passion and the support of IBM and the Open Mainframe Project. We are also eager to extend our knowledge and skills in open source and IBM Z and to contribute code back to the Zowe community. Stay tuned for more details and updates…