Content Copyright © 2016 Bloor. All Rights Reserved.
This blog was originally posted under: The Norfolk Punt
I have just been briefed by CA Technologies on its “Mainframe Reframed” strategy (see also here). This is interesting, as CA Technologies is very aware of the issues facing the Mainframe in public perception but, at the same time, has customers that still rely on their Mainframes and even see them as a valuable resource going forwards. CA Technologies is basing its “Mainframe Reframed” strategy on a detailed investigation of the Mainframe market, including its Mainframe customers, their needs and their perceptions.
More news is that the Mainframe business unit at CA Technologies now has a new General Manager, Ashok Reddy. In past lives, Reddy was Vice President, Offering Management, API Economy & Hybrid Cloud Integration and Vice President, Product Management, Design and Business Development (which includes DevOps), both at IBM. This probably makes him a particularly good champion for modernisation and agility in Mainframe systems.
The issues the Mainframe faces when it has to deliver a “flawless customer experience” are fairly obvious (I’ve rephrased the CA Technologies version of these from my point of view; so there’ll be minor differences from the official CA Technologies line):
- Mainframes must be agile, in order to respond to changes in the business as they happen; but many current Mainframe change processes are hampered by clumsy and slow (but risk-averse) change control and acquisition processes, reflecting a culture that has many good points (“don’t put working business at risk”, for example) but needs “refactoring”;
- Mainframe management and development talent is generally nearing retirement and must be replaced; cross-training existing talent is one solution, as is educating people entering the job market from scratch (CA Technologies has initiatives in both areas);
- It is necessary to demonstrate Mainframe value delivery, to counter the perceived (and often erroneous) higher cost of Mainframe applications. However, although there are well-defined quantitative cost metrics (Gartner’s Total Cost of Ownership, TCO, for example), there are few widely accepted quantitative “Total Value of Ownership” metrics, and this biases organisations towards concentrating on cost control;
- There is a growing desire to mine Mainframe data for Actionable Insights and competitive business advantage; and to include it in the general Big Data environment for mobile and Hadoop-style access to an organisation’s data;
- The Mainframe is perceived (incorrectly) as only being suitable for obsolete languages and as providing poor support for Java and Linux;
- The Mainframe is seen as secure, but people are not always sure what information is stored there and how critical it is, and the platform itself is often not well understood anyway. As a result, it is often not integrated into the general security picture – that is, Mainframe security risk is not well defined in the context of data security risk on other platforms (and the days of “security by obscurity” on the Mainframe are long past);
- As businesses expose more and more services to their customers, via web and mobile interfaces, an increasing volume of non-revenue-generating business workload (balance queries, for example) is placed on the business’s IT infrastructure. IT must therefore understand the nature of the costs borne by both the distributed and Mainframe infrastructure, so that it can balance the use of both technology and costing regimes to maximum advantage;
- There is a shift taking place in automation: organisations that have traditionally included Mainframe applications and management in their general automation strategies are now relying on automation not only to optimise and orchestrate business processes and business agility but also, in the IT rather than the business environment, to reduce reliance on deep Mainframe-specific knowledge.
It seems to me that a lot of these issues have been, or are being, addressed in modern z Systems Mainframes. Take point five, above, for example: Mainframes now have specialised Linux processors available and can run Java on Linux on these dedicated specialty engines (which are z-architecture processors, not Intel ones). Indeed, the last two generations of Mainframe hardware, the zEC12 and z13, have the fastest clock speeds in the industry and can deliver the best Java performance available. And CA Data Content Discovery (see here) can intelligently discover and document Mainframe data assets (without moving data off the Mainframe).
The big issue, it seems to me, is cultural. Mainframe cultures are very conservative and may not mix well with general “distributed system” cultures. This means that many people with little Mainframe experience still think of it in terms of a 1980s S/370. It may come as a surprise to them to discover that a modern Mainframe has a faster CPU than anything they have in a PC, while still operating reliably at around 100% utilisation and regarding even five minutes of downtime a year – for any reason, including OS upgrades – as unacceptable (those last two were S/370 characteristics too, of course). However, even people experienced with the Mainframe may overlook what the latest models can do, and often also miss changing business requirements for mobile and Cloud technologies.
CA Technologies, along with vendors such as IBM with Bluemix (see here, for example), seems to be addressing both the technology and the cultural issues – and very much recognises that customer experience is now king, even in a Mainframe context. Remember that just under three-quarters of corporate data still resides in Mainframe systems.
CA Technologies is spending a lot of time investigating the views of its customers and has divided them into groups:
- Growers. These are increasing Mainframe use in the interests of greater revenue and understand the Mainframe environment and their usage of it well enough to negotiate ever better price/performance deals with their vendors. CA Technologies says that they tend to be internal champions of new use-cases such as mobility and big data. They are most likely to be exploiting Linux on z Systems (a notable success story; z Systems has specialised Linux processors available); and they are probably developing, or expecting to start developing, in Java on z/OS for new workloads. I would see these as the most sophisticated Mainframe users, and the ones most likely to look at new models for automated business, such as the Mutable Business, but I wouldn’t expect them to be in the majority (not yet, anyway).
- Sustainers. These are the peak of the bell curve; they are finding it hard to leave the Mainframe because they understand its value and the dependence of core business operations on it. They are experiencing increased pressure to deliver against business SLAs, which can best be satisfied on the Mainframe – so their usage is growing. They are focused on metrics and management for Mainframe resources and are often looking at managed services for legacy technologies.
- Decliners. These organisations are often experiencing internal political pressure to move off the Mainframe. They are interested in cost-saving initiatives on the Mainframe but, with usage declining, they’ll resist finding new uses for it. If their usage really is declining, exiting the Mainframe could be the right solution (although they will probably need migration assistance), as an under-utilised Mainframe is very unlikely to be cost effective. Interestingly, according to Reddy at CA Technologies, “a good portion of these so-called decliners find it difficult to get completely off the platform for certain workloads but face significant skill attrition, hence these clients actively turn to service providers to manage their mainframe environments”.
- Global/managed Cloud service providers. This is a group that often finds that the capabilities of the Mainframe are attractive for providing the high levels of growth, scalability and security needed to support the SLAs their more demanding customers are now asking for. Reddy claims that “service providers are often a path of choice for Mainframe customers seeking to address the perceived overhead associated with self-managing mainframe assets, yet also realise the high value and benefits offered by the Mainframe”.
- I would add to this a fifth group: non-Mainframe-users, who are often similar to the Decliners group, with a strong internal political prejudice against Mainframes, and possibly a lack of Mainframe awareness; and, in many cases, a Mainframe may well be inappropriate in reality. However, with education in what the latest Mainframes can do, perhaps some of these could start to think like Growers.
The new CA Technologies strategy is intended to cater for all of these groups, although I would think that the fifth group, the non-mainframe-users, will be the toughest nut to crack. Reddy explains that, “the CA Technologies strategy focuses on three objectives that are key for organisations delivering a ‘flawless customer experience’ every time an application touches the mainframe. These are:
- “Easily creating secure access to Mainframe data and services, so as to enhance the end-user application experience for web and mobile applications;
- “Unleashing the power and value of mainframe-resident data;
- “Creating mainframe platform flexibility for the future.”
“In a nutshell,” Reddy summarises, “the CA Technologies strategy is about bringing the Mainframe into the future as a strategic asset for the application economy”.
My particular interest in all of this at the moment, however, is how Mainframes relate to the Mutable Business. This is a 21st-century take on a business in a constant state of change, driven by real-time changes in the business environment. As Bloor associate Martin Banks puts it, “all businesses are now (even if they don’t accept it yet) in a state of permanent transformation to new business models”. Bloor sees this as the future for many businesses, and you might be wondering what part Mainframes could play in this.
Well, “simples”, as this ‘ere middle-European meerkat would say. The Mutable Business is facilitated by abstracted hybrid cloud platforms and doesn’t really care a jot about what technology is under the platform as long as it can support its desired service levels. However, as Cloud matures, cloud “assurance” is becoming seen as important and, at the initial “due diligence” phase of Cloud adoption, a company may have more confidence in its Cloud service providers’ ability to deliver on their SLA promises for things like resilience, security, reliability and growth, if they are running part of their Cloud platform on Mainframe servers.
At the same time, a Mutable Business with a legacy – but reframed – Mainframe capability, one that is only interested in agile services and profoundly uninterested in technology for its own sake, may find that it can leverage its Mainframe investment by abstracting the Mainframe as hybrid cloud services and by outsourcing its Mainframe data centre and management (while leaving it in place, physically) to a hybrid cloud services provider. After all, the Mutable Business is enabled by abstraction/virtualisation, and that is something the Mainframe has been doing superbly well for the last fifty-four years or so.