Synergies with the Mutable Business - prompted by Agility and APIs at CA World 2015

Content Copyright © 2015 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt

Bloor sees the future of automation as driving the “Mutable Business”. What this means, according to Bloor associate Martin Banks, is that “all businesses are now (even if they don’t accept it yet) in a state of permanent transformation to new business models”.

This implies that businesses should be in the business of managing this transition: managing change driven by software innovation. This is enabled by technology: continuous delivery; actionable insights from Big Data; identity and access management; security analytics; and so on. However, remember that technology alone is not enough: identifying success factors and success metrics, managing people's (other employees') expectations and continuing to comply with regulations (maintaining good governance as you change) are all part of it. And you have to stay in business (keeping existing customers on legacy interfaces happy) as you transition.

As a validation of the Mutable Business concept, I am impressed by its synergies with the latest CA Technologies vision (presented at CA World 2015) of "rewriting the business" with software. This requires agility in software production, and the possibility of continuous delivery (although continuous delivery is not obligatory; the business is in control), if changes in the business are to be mirrored immediately in new software (if they aren't, software has become an inhibitor of business change).

So we need DevOps, and control of continuous delivery depends fundamentally on a feedback loop between business user experience and software design/development. The trouble is that application performance management is often rather crude: it is based on simple baselines and on whether performance is better or worse than the baseline. Yet the user experience associated with a single, short-duration peak in response time, no matter how large, is better than that associated with a long-duration slowdown, even if the slowdown is less severe; and the user experience may be dire even if the agreed baseline response time is only ever approached, never quite exceeded.

As a result of this thinking, CA APM (Application Performance Management) now uses less simplistic metrics, essentially "how much and for how long", which correlate much better with how performance is perceived. Its approach (which Chris Kline of CA Technologies tells me is "patent pending") also considers trends: acceptable performance with a rising trend is an indicator of serious and continuing problems to come, possibly deserving more attention than a short, high peak in response time. Not announced, but being investigated, is pattern matching, whereby different performance patterns can be recognised and correlated with user experience. Potentially, this could even identify particular design "antipatterns" for correction during continuous delivery (this would implement a richer and wider DevOps feedback loop, using what are, in effect, actionable insights into the development process).
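To make "how much and for how long" concrete, here is a minimal sketch of the idea. It is purely illustrative and assumes a simple area-above-baseline score and a crude half-versus-half trend check; it is not CA APM's patent-pending method, and all the names and thresholds are my own.

```python
# Illustrative only: a naive "how much and for how long" score plus a crude
# trend check. Names, thresholds and the scoring formula are assumptions for
# the sake of the example, not CA APM's actual (patent-pending) approach.

def severity_score(response_times_ms, baseline_ms, interval_s=60):
    """Sum the excess above baseline, weighted by how long it persists.

    A single tall spike contributes its excess once; a sustained slowdown
    accumulates excess every interval, so duration matters as much as size.
    """
    excess = [max(rt - baseline_ms, 0) for rt in response_times_ms]
    return sum(e * interval_s for e in excess)  # "ms above baseline" x seconds

def rising_trend(response_times_ms):
    """Very rough trend indicator: is the second half worse than the first?"""
    half = len(response_times_ms) // 2
    if half == 0:
        return False
    early = sum(response_times_ms[:half]) / half
    late = sum(response_times_ms[half:]) / (len(response_times_ms) - half)
    return late > early * 1.1  # 10% worse on average: worth a look

# Example: one short spike vs. a long, milder slowdown against a 500ms baseline.
spike = [300, 310, 2500, 305, 300, 310, 300, 305]
slowdown = [650, 700, 720, 750, 780, 800, 820, 850]
print(severity_score(spike, 500), rising_trend(spike))        # smaller score, no trend
print(severity_score(slowdown, 500), rising_trend(slowdown))  # larger score, rising
```

On these (made-up) numbers the sustained slowdown scores higher than the dramatic single spike and also flags a rising trend, which is the point of the richer metric.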

DevOps is about implementing software change, but implementing the right changes effectively rather depends on the availability of trusted data through well-defined APIs. A rich source of trusted data is the "systems of record" still held on the Mainframe in the majority of large enterprises. However, the Mainframe is often seen as a "black box" and its data is often not well understood and not very accessible.

The first step in making its data available is bringing the Mainframe – seen as just another enterprise computing resource – into the management environment of the business as a whole. So, we now have CA Unified Infrastructure Management for z Systems, which promises to "provide comprehensive end-to-end visibility of business services that span mobile-to-mainframe environments". This really does treat the Mainframe as just another server, but there is another issue to deal with: the Mainframe may be pretty secure, but not all companies are aware of exactly what data is on there and what its implications are – they rely on nobody unauthorised being able to get at it, largely because most hackers don't understand (or target) the Mainframe.

Well, that approach doesn't really bear examination, especially as many hackers are now being used by criminal "businesses". I can remember, sometime in the last century, shocking an auditor by screen-scraping Mainframe data onto my hard disk, and I also knew a way to steal Mainframe passwords by gaming TSO (IBM's Time Sharing Option interface to the Mainframe) – security by obscurity doesn't work.

Of course, Mainframe data should all be in a database, with its semantics and risk profile clearly defined in a data dictionary, but I bet many organisations have got slack, don't know what data they have, and are even using VSAM (Virtual Storage Access Method) or other flat files for some of it. This stuff may be hard to get at, but once someone has found a poorly secured terminal emulation owned by someone in authority, it may be wide open to exploitation. The first step in addressing this is to know what you have and to classify it, so we now have a compliance solution, CA Data Content Discovery, which looks at what you have on the mainframe and uses clever algorithms to recognise personal data, regulated data and so on, so that you can secure it properly and report on it to the regulators.

This tool can potentially do a lot more than that, however. If you are going to make mainframe data more accessible to new business services – rewriting the business with software – you not only need to know it is available, you also need to know whether it is regulated, whether you are allowed to use it and whether special security applies. CA Data Content Discovery can help you do this and, because it is written to be accessed from other applications, it could, in time, become part of tools that federate mainframe data with every other kind of "big data" in the enterprise. Many core areas of large businesses still run on Mainframe data and applications; if the business as a whole is to be "mutable" or Agile, then this Mainframe data must be released to the business as a whole, through APIs, but without losing good governance.
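Purely as an illustration of the principle (this is not how CA Data Content Discovery works internally; the patterns, labels and sample record below are my own assumptions), a discovery pass amounts to scanning records for signatures of personal or regulated data and tagging what it finds, so that it can be secured and reported on:

```python
import re

# Illustrative pattern-based classifier, assumed for this sketch; real
# discovery tools use far richer rules, context and validation than this.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),  # National Insurance number
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_record(record: str) -> set:
    """Return the set of sensitive-data categories detected in a record."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(record)}

# A record pulled from, say, a VSAM extract or a screen-scrape:
sample = "SMITH J, jsmith@example.com, card 4111 1111 1111 1111"
print(classify_record(sample))  # {'email', 'card_number'}
```

Knowing which records carry which categories is what lets you decide, per API and per consumer, whether that data may be exposed at all and under what controls.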

This immediately suggests a problem, as not all of the technology areas in a business will change at the same velocity; yet we must be able to change software at the same rate that the business is changing. So, what happens if our changing software component's interface is to something over which we have no control (an external service, perhaps) or to something that changes at a lower velocity (a Mainframe database service, perhaps)? How can I test my component as I build it, as required by Agile development practice?

The answer is to simulate the behaviour of the system. Not only can you validate the behaviour of what you are developing early (while it is still cheap to fix), in the context of the whole business system it will eventually be part of, you can also involve stakeholders more completely, since you are simulating what the business sees as well as the interaction with the app you are developing. Simulation, with CA Service Virtualisation, now seems central to the DevOps experience as envisaged by CA Technologies (take a look at the set of presentations here); it gives you a sort of "wind tunnel" to ensure that your software will "fly" even before it gets near production. CA Service Virtualisation is getting increasingly sophisticated; it is beginning to "learn" the behaviour of a system and the characteristics of the data produced, or expected, by the system's APIs.
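As a rough sketch of the idea (the interface below is invented for the example and is not CA Service Virtualisation's API), a "virtual" service is simply a stand-in that has learned how the real dependency answers its API and replays that behaviour for the component under test:

```python
# A toy "virtual service": record request/response pairs observed against the
# real dependency, then replay them so a component under test never needs the
# real (slow-moving, shared, or unavailable) system. Purely illustrative.

class VirtualService:
    def __init__(self):
        self._recorded = {}          # request key -> canned response
        self.default = {"status": "NOT_RECORDED"}

    def record(self, request_key, response):
        """'Learn' the behaviour of the real system from observed traffic."""
        self._recorded[request_key] = response

    def call(self, request_key):
        """Replay the learned behaviour in place of the real dependency."""
        return self._recorded.get(request_key, self.default)

# Record behaviour once, against the real mainframe service or from a traffic log...
accounts = VirtualService()
accounts.record(("GET", "/accounts/12345"), {"status": "OK", "balance": 1042.17})

# ...then test the new component against the simulation, early and cheaply.
assert accounts.call(("GET", "/accounts/12345"))["balance"] == 1042.17
print(accounts.call(("GET", "/accounts/99999")))  # {'status': 'NOT_RECORDED'}
```

The "wind tunnel" value comes from the fact that the simulated dependency is always available, always fast, and behaves consistently, however slowly the real system behind it is allowed to change.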

A related issue is the need to simulate test data – it must match the characteristics of real data accurately, but you can no longer just copy data out of production databases (in fact, this was always bad practice, partly because of the security implications, but it is also a practice that could put you in breach of EU data protection regulations, as well as upcoming EU banking regulations, where applicable). More than this, testing now needs to be Agile and fully automated, which implies that sanitised test data is available as and when needed, without delay. Philip Howard at Bloor looks at the new requirements for automated testing and test data management here – it's an area where CA Technologies has the capabilities needed for agile testing, particularly since its acquisition of Grid-Tools.
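A hedged sketch of the principle (the field names and distributions are my own assumptions, and real test data management tooling, such as the Grid-Tools capabilities mentioned above, is far more sophisticated): instead of copying production rows, you generate rows that share production's shape and statistics, so nothing personal ever leaves the production database:

```python
import random
import string

# Generate synthetic customer rows that mimic the *shape* of production data
# (field types, value ranges, rough distributions) without containing any real
# personal data. Field names and distributions here are invented for the example.

def synthetic_customer(rng: random.Random) -> dict:
    surname = "".join(rng.choices(string.ascii_uppercase, k=7)).title()
    return {
        "customer_id": rng.randrange(10_000_000, 99_999_999),
        "surname": surname,
        "age": int(rng.gauss(46, 15)),                  # roughly production-like age profile
        "balance": round(rng.lognormvariate(7, 1), 2),  # long-tailed, like real balances
        "region": rng.choice(["NE", "NW", "LDN", "SCO", "WAL"]),
    }

rng = random.Random(42)  # seeded, so failing tests are reproducible
test_rows = [synthetic_customer(rng) for _ in range(1000)]
print(test_rows[0])
```

Because the data is generated on demand, it is available whenever an automated test run needs it, which is exactly the "without delay" requirement above.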

Companies are already using these sorts of capabilities to change the way they work – and the speed with which they can accommodate change. GE's transformation from an industrial company to a digital company is a case in point. I was particularly interested in its use of "digital twins" – digital (data) models of physical systems – to enable "no downtime" maintenance (see here). This could be generalised for business software: many "what if" questions could be asked of a digital twin, and resolved, without bringing down or disrupting a production system.
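To illustrate the general idea (GE's real twins are far richer; this toy model, its fields and its thermal rule are invented for the example), a digital twin is just a data model kept in step with telemetry from a physical asset, against which "what if" questions can be answered without touching the asset itself:

```python
# A toy digital twin: a data model kept in step with telemetry from a physical
# asset, used to answer "what if" questions offline. The asset, fields and
# failure rule are assumptions made purely for this illustration.

class PumpTwin:
    def __init__(self):
        self.bearing_temp_c = 60.0
        self.rpm = 1500

    def update(self, telemetry: dict):
        """Mirror the latest readings from the real pump."""
        self.bearing_temp_c = telemetry.get("bearing_temp_c", self.bearing_temp_c)
        self.rpm = telemetry.get("rpm", self.rpm)

    def what_if(self, rpm_increase: int) -> bool:
        """Would pushing the pump harder overheat it? Asked of the twin, not the pump."""
        projected_temp = self.bearing_temp_c + 0.02 * rpm_increase  # assumed thermal model
        return projected_temp < 90.0  # still within the (assumed) safe limit?

twin = PumpTwin()
twin.update({"bearing_temp_c": 72.5, "rpm": 1600})
print(twin.what_if(rpm_increase=500))   # True: projected 82.5C, still safe
print(twin.what_if(rpm_increase=1000))  # False: projected 92.5C, would overheat
```

The generalisation to business software is the same move: ask the question of the model, not of production.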

OK, so CA Technologies is not the only company adopting these approaches to agile business delivery, but it seems to be telling a particularly coherent story in this area. I think my main concern is that this doesn’t become a “mindless revolution” – automation is important but the organisations adopting it must have high maturity and discipline if they are going to manage this journey to the digital company effectively. However, the evidence is that some companies are well on their way, although I expect that there’ll be a long tail (the nature of IT seems to be that the new takes ages to completely replace the old, in practice).

Even so, as Bloor analyst Peter Abrahams says: "business is mutable if it can easily change the products/services it offers and how it offers them. To be able to do this the money, people and infrastructure need to be enablers of the changes and absolutely not an inhibitor". If businesses are going to be reinvented by software, nothing in the software development and delivery process should get in the way of enabling change; although businesses should have the freedom to change at their own rate (subject to the pressure from the competition).