Content Copyright © 2023 Bloor. All Rights Reserved.
Also posted on: Bloor blogs
I’ve just been at the Open Mainframe Summit – virtually and in spirit, anyway. I was impressed by the enthusiasm of the presenters and the way that they didn’t limit themselves to a mainframe silo. The modernized mainframe really is “Enterprise Server 3.0” (ES-3) – just another server.
Perhaps the most obvious news is that in 2023, the Open Mainframe Project, which is part of the Linux Foundation, will have an IBM z15 mainframe of its own, available for testing any open source project on ES-3. The hardware has been donated by Broadcom Mainframe Software Division and it will be hosted by Marist College. “This valuable donation is a significant investment to our community and serves as an accelerant for our communities like Zowe, COBOL Check, GenevaERS, Zorow and the COBOL Programming Course,” said John Mertic, Director of Program Management at the Linux Foundation.
- Zowe is a new open source software framework that provides solutions allowing development and operations teams to securely manage, control, script and develop on the mainframe like any other cloud platform. Zowe is the first open source project based on z/OS.
- COBOL Check provides fine-grained unit testing/checking for COBOL at the same conceptual level of detail as unit testing frameworks for other languages, such as Python, Ruby, C#, and Java.
- GenevaERS is the single-pass optimization engine for data extraction and transformation on z/OS.
- zorow (z/OS Open Repository of Workflows) is a new open source community dedicated to contributing and collaborating on z/OSMF (z/OS Management Facility) workflows.
- The COBOL Programming Course is an open source initiative under the Open Mainframe Project that offers introductory-level educational COBOL materials with modern tooling.
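To give a flavour of what Zowe-style automation involves, here is a minimal sketch that builds a request against the z/OSMF REST files interface, which Zowe itself drives. This is illustrative only: the hostname, port, credentials and high-level qualifier are placeholder assumptions, and the request is constructed but never sent.

```python
# Sketch: listing data sets through the z/OSMF REST files API, the same
# interface Zowe builds on. Host, port and credentials are placeholders.
import base64
import urllib.request


def build_list_datasets_request(host: str, port: int, hlq: str,
                                user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) a z/OSMF request listing data sets under an HLQ."""
    url = f"https://{host}:{port}/zosmf/restfiles/ds?dslevel={hlq}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={
        "Authorization": f"Basic {token}",
        "X-CSRF-ZOSMF-HEADER": "",  # z/OSMF REST services require this header
    })


req = build_list_datasets_request("mainframe.example.com", 443,
                                  "IBMUSER", "ibmuser", "secret")
print(req.full_url)
```

Sending the request (and handling TLS, pagination and errors) is what Zowe's CLI and SDKs take care of for you; the point is that the mainframe becomes scriptable over plain HTTPS.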
These are useful projects, and I think that open source will be a real catalyst for mainframe modernisation to Enterprise Server 3.0 – and for changing its culture.
Diversity, Equity and Inclusion
There’s a new acronym in town: DEI (Diversity, Equity and Inclusion). Sharra Owens-Schwartz (Vice President, Inclusion, Diversity & Equity, Rocket Software) explained why it matters to the most effective, growth-oriented companies. I guess that having someone with Diversity in their job title is a start (and note the diversity in the ordering of the acronym), as long as the position has the power to effect change. It is possible that the apparent correlation between diversity and success is not causal (the sort of company that is successful is also the sort of company that takes diversity seriously), but I think it is more than that. For a start, a Mutable company needs the best brains to allow it to manage constant evolution, and DEI maximises the talent pool available to it. More than that, however, it needs people who can think outside the box, beyond conventional solutions; and that comes from different life experiences, and from people with differing points of view collaborating.
Nada Santiago (Principal Product Manager, IBM LinuxONE Hardware Solutions) told us that “in recent years, sustainability has leapt to the forefront of corporate priorities. Customers, employees and investors are all pressuring companies to communicate more clearly and show results”, although Santiago also quoted a recent IBM business value study showing that, while many companies have a sustainability strategy, only 35% of them “can be considered ‘trailblazers’ who are acting on that strategy”. I see a new focus on sustainability, for the Mutable organisation, as part of the emerging focus on all possible stakeholders, not just the obvious ones; and on business and societal outcomes, not just on technology success.
Think mainframe and you think security. The modernised ES-3 can be made very secure, but only if it is managed in the context of organisational security policies. According to David Wheeler (Director of Open Source Supply Chain Security, The Linux Foundation) in his keynote: “Software today is under attack, both while it’s in operation and in the supply chain leading to operations”. He particularly talked about some of the ways to evaluate open source software security before building it into your corporate systems. My brief interpretation of these is:
- Do you actually need to add more OSS to your environment? OSS is not “free”, it has associated costs of ownership and risks, as well as benefits.
- Are you evaluating the version of the OSS that will actually be used (do you have processes to manage OSS versions and updates)?
- Is it being actively maintained (look at the quality of the open source project it is part of)? “Alpha” or “Beta” in the name is a bad sign, as is a latest release more than a year old.
- Is there evidence that its developers are taking security seriously? For instance, has it (or is it working towards) an OSSF Best Practices badge? There are many more indicators at the URL above.
- Is it easy to use this OSS securely (the user experience matters; and are security options set “on” by default)?
- Is there a process (with instructions) for reporting vulnerabilities?
- Is it widely used?
- What license does the OSS use – and what obligations does it place on you (for instance, does a copyleft licence require any software that embeds it to become OSS itself)?
- How good is the OSS code (one of the benefits of OSS is that you have access to the code)? You might attempt to run it in a “sandbox” in the hope of triggering any malicious code.
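Some of the mechanical checks above can be automated. The following is an entirely illustrative Python sketch (the metadata field names and flag wordings are invented for the example) that turns a few of them, pre-release naming, release age, a Best Practices badge, a vulnerability reporting process, into red flags for triage:

```python
# Illustrative only: encode a few of the OSS evaluation checks as red flags.
# The metadata schema here is invented; real data would come from the project
# repository, package registry, or the OpenSSF Best Practices badge programme.
from datetime import date


def triage_oss(meta: dict, today: date) -> list:
    """Return a list of red flags for an open source component's metadata."""
    flags = []
    if "alpha" in meta.get("name", "").lower() or "beta" in meta.get("name", "").lower():
        flags.append("pre-release naming")
    last = meta.get("last_release")
    if last is None or (today - last).days > 365:
        flags.append("no release in over a year")
    if not meta.get("best_practices_badge", False):
        flags.append("no OpenSSF Best Practices badge")
    if not meta.get("vuln_reporting_process", False):
        flags.append("no documented vulnerability reporting process")
    return flags


print(triage_oss({"name": "examplelib-beta", "last_release": date(2021, 1, 1)},
                 date(2023, 3, 1)))
```

Of course, the softer questions (code quality, licence obligations, whether you need the component at all) still need a human judgement; a script like this only narrows the field.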
Something worth following up is the Open Source Security Foundation (OpenSSF), which proposes 10 investment streams to support Open Source Software (OSS) security:
- Making education and certification available to all.
- Establishing a public, vendor-neutral, objective, metrics-based risk assessment dashboard for the top 10,000 (or more) OSS components.
- Accelerating the adoption of digital signatures on software releases.
- Eliminating root causes of many vulnerabilities through the replacement of non-memory-safe languages.
- Establishing an OpenSSF Incident Response Team of security experts to help open source projects accelerate their responses to newly discovered vulnerabilities.
- Accelerating the discovery of new vulnerabilities by maintainers and experts through advanced security scanning tools and expert guidance.
- Conducting third-party code reviews (and any necessary remediation work) of up to 200 of the most-critical OSS components once per year.
- Coordinating industry-wide data sharing to improve the research that helps determine the most critical OSS components.
- Improving Software Bill of Materials (SBOM) tooling and training, to drive adoption.
- Enhancing the 10 most critical OSS build systems, package managers, and distribution systems with better supply chain security tools and best practices.
Which leads me into some thoughts about the governance of our software estate and the importance of operational processes, in a world that seems to be promoting the Dev side of DevOps rather than the Ops side. Remember that Dev gives you potential value, but Ops is what delivers the actual value to the business. And you can’t claim to manage, let alone secure, what you don’t know you have.
Surely no company is unaware of what software it is using? Well, perhaps it depends on whether the right people know, but I asked Kylie Fowler, a friend who is a real software asset manager, what she thought. Her opinion is that:
“Every software asset manager has the experience of a contract renewal appearing from nowhere, unrecognised software deployments and ELPs [Effective License Positions] replete with software you had no idea you had bought!”
She has even taken pity on these poor neglected ‘orphans’, with a video about finding a home for them within the Software Asset Management system [here]. In the short term, the issue may be whether you are properly licensed or not (and this matters for Open Source Software as well as commercial software), but in the longer term this sort of situation puts your whole governance picture at risk. And good governance is a requirement for regulators, business partners and investors – for all stakeholders, in fact.
I’m thinking that perhaps it should be devOPS rather than DEVops – and perhaps we should be developing mutable (evolving) Target Operating Models as part of delivering business outcomes. According to Gary Burke in 2017 (this stuff isn’t new): “An organisation’s Target Operating Model (TOM) is the operational manifestation of its corporate vision and strategy – what it wants to do, how it wants to do it, where, when, who with, who to etc. Realisation of its TOM will enable an organisation to achieve its corporate objectives and goals.” [here]. This is what links technology architecture to business outcomes, it seems to me. And if the organisation is Mutable, so must its TOM be, too.
Essentially, an organisation’s Current Operating Model (COM) must be regularly reviewed and the journey to the TOM evaluated – but although you can probably describe your TOM in detail 6 months or a year out, only high-level planning is possible after that – after 5 years, there may be considerable “TOM drift”. This is a good thing if it is in response to changes in the business environment, less good if it is the result of a lack of strategic focus in the organisation.
So, in summary, mainframe modernisation is in a healthy state, facilitated by open source initiatives such as Zowe, as long as it is in the context of a TOM (or equivalent strategic direction for technology), developed in the context of business outcomes.