A new episode in the life of the Mainframe


In the course of a long career, I have watched the “death of the mainframe” with wry amusement. My first job in an IT department involved an IBM 360, and mainframes were going strong when I left banking for creative writing around 1990 (although one of my first editors refused point-blank to believe that my previous employer, a large international bank, was still using one).

Now, well into the 21st century, mainframes are still around, still with basically the same architecture, but with very different capabilities from the old IBM 360s and 370s I started on. Beware of people selling mainframe replacements who compare a 21st-century distributed solution with a caricature of a 20th-century mainframe.

These days, you should think of the mainframe as simply a very performant, very flexible server (it can run native Linux, or you can plug in AIX “blades” if you like) with particularly good parallel processing and virtualisation capabilities. You can plug in specialised hardware co-processors dedicated to running Linux or to encryption, for example. All that matters is understanding your workloads and putting them on the most appropriate platform, and sometimes that will be a mainframe, albeit (possibly) accessed as a service from someone else’s data-centre.

Nevertheless, some mainframe users do report issues of cost, inflexibility and overheads with mainframes. A lot of this is due to vendors’ past pricing models; to exploitation of mainframe lock-in (some workloads have been too big to run on anything except a mainframe, and moving such workloads anywhere else is non-trivial and risky); and to poor management of mainframes by the customers themselves.

I don’t believe that a mainframe, especially these days, is inherently inflexible, expensive, or wasteful of resources, but it can be made so by poor mainframe software asset management and mismanagement of poorly understood workloads. One mainframe antipattern is over-licensing, because you are licensing for contingency loads; another is putting workloads in the wrong place, because (usually) the workload characteristics or the capabilities of the available platforms are not well understood.
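To put numbers on that over-licensing antipattern, here is a deliberately toy calculation; the figures are invented, and real mainframe pricing (MSU-based, sub-capacity and so on) is considerably more involved:

```python
# Purely hypothetical arithmetic to make the over-licensing antipattern concrete.
# All figures are invented; real mainframe pricing is far more involved than this.

licence_cost_per_lpar = 100_000   # annual cost of one product licence on one LPAR
lpars_licensed = 4                # licensed on every LPAR "just in case"
lpars_actually_needed = 1         # where the workload actually runs

contingency_spend = licence_cost_per_lpar * (lpars_licensed - lpars_actually_needed)
print(f"Annual spend on contingency licences: {contingency_spend:,}")   # 300,000
```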

I don’t have space here to go into the complexities of managing a mainframe as part of a holistic technology platform but all the necessary tools and expertise are available from third parties such as Broadcom (which now owns CA Technologies) and Compuware (which was highlighting its approach to Mainframe DevOps when I was at DevOps World in Lisbon a couple of months ago). As Compuware says: “If your DevOps ecosystem doesn’t already include your mainframe, it should. Mainframes aren’t going away; in fact, workloads are increasing. Meanwhile, enterprises are only replacing about 1/3 of the skilled staff they lose through retirement and attrition”. There’s a Forrester report highlighting the continuing importance of the mainframe here.

Two key things to remember are:

  • Workloads should always be placed on the most suitable platform (and that platform may change as the workload changes, as a result of Mutable Business evolution); and
  • Customers must have freedom of choice, even if vendors might not always like it – no lock-in.

Now, I’ve just met a new start-up, albeit one with access to a long history of mainframe expertise, which is promising to make much of the above much easier.

This company is VirtualZ, privately owned, and “the first women-owned independent software vendor in the 60-year history of the mainframe”. I was talking to Jeanne Glass, its Founder and CEO; her co-founder and CTO is Vince Re, whom I remember from his 31 years as Chief Architect and SVP at CA Technologies.

What VirtualZ does, in the short term, is identify over-licensed products (licensed for multiple LPARs or datacentres, for example) and under-utilised products (such as a PL/1 compiler that is rarely used but has to be kept “just in case”). VirtualZ targets batch applications that don’t process massive amounts of data and consolidates their licences on the most cost-effective and suitable platform. Then, with no changes to the existing application, it is simply redirected to that platform, with the most cost-effective licensing. That platform can be on premises or mainframe-as-a-service on a public cloud; VirtualZ doesn’t care.
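The general idea of licence-aware redirection can be sketched very simply. The toy Python below is my own illustration, not VirtualZ’s implementation: the platform names, costs and selection logic are all invented, and the only point it makes is that the job itself never changes; only where it runs does:

```python
# Illustrative sketch only: a toy "licence-aware router" showing the general idea of
# sending a batch job to whichever platform holds the cheapest licence for the product
# it needs. Names, costs and logic are invented; this is not VirtualZ's mechanism.

from dataclasses import dataclass

@dataclass
class Platform:
    name: str                  # e.g. an on-premises LPAR or a mainframe-as-a-service instance
    licensed_products: set     # products licensed to run on this platform
    annual_licence_cost: dict  # product -> annual cost of the licence held here

PLATFORMS = [
    Platform("LPAR-A (on premises)", {"PL/1 compiler", "sort utility"},
             {"PL/1 compiler": 120_000, "sort utility": 40_000}),
    Platform("LPAR-B (on premises)", {"sort utility"}, {"sort utility": 40_000}),
    Platform("zCloud (mainframe-as-a-service)", {"PL/1 compiler"}, {"PL/1 compiler": 30_000}),
]

def route(product_needed: str) -> Platform:
    """Pick the platform where the needed product is licensed most cheaply.
    The application is untouched; only its execution target changes."""
    candidates = [p for p in PLATFORMS if product_needed in p.licensed_products]
    return min(candidates, key=lambda p: p.annual_licence_cost[product_needed])

if __name__ == "__main__":
    target = route("PL/1 compiler")
    print(f"Run the rarely used PL/1 batch job on: {target.name}")
    # Once the workload is consolidated there, the duplicate licences held
    # elsewhere become candidates for retirement.
```

In practice, of course, the routing decision would also have to weigh data locality, service levels and the suitability of the platform for the workload, not just licence cost.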

To start with, VirtualZ is just picking the low-hanging fruit – there are a lot of cost savings to be made from merely optimising mainframe software licensing for batch programs.

Longer term, I can see many more opportunities from this approach, and VirtualZ itself talks about extending the idea to high-volume transaction-processing applications and so on. The key idea is that VirtualZ’s redirection is a “black box” for both users and operations – nothing changes for them. This should help reduce the overheads of managing mainframe software assets.

In the future, I could see a virtualised environment, not limited to the mainframe, in which distributed programs can access, say, hardware encryption co-processors on a mainframe as a service, and the mainframe becomes simply part of a holistic processing platform (with “moving off the mainframe” becoming as trivial as not redirecting applications to one, if that makes economic and processing sense). I don’t know, however, whether VirtualZ will ever want to take things this far; for the moment, just optimising workloads and licensing for real mainframes is probably a good enough business model.

Nevertheless, I think that VirtualZ has a good idea for now, and I will follow its development of this idea with interest.