Twenty Years On – a Maturity Journey

Content Copyright © 2009 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt

OK, I have a secret. It’s called GC26-4531-0 and it describes the solution to increasing software complexity, the prevalence of inaccurate requirements and poor design information, the increasing difficulty in managing change and the lack of integrated development tools. It does this with modelling tools that build an enterprise architecture model validated through prototyping; analysis and design tools that take business requirements through to application and database designs; code generators and knowledge-based products to actually build systems; and automated testing and maintenance tools, to keep them working.

It uses an entity-relationship approach for persistent storage of metadata and an object-based architecture, with common methods that can be shared by many tools. And it’s an open architecture, so you don’t have to buy all your tools from one vendor.

Perhaps you’ve guessed it by now: I’m at IBM’s recent Rational Software Conference in London and I’m thinking about IBM’s software development platform. But I do see some implementation issues, not entirely unconnected with the name. This platform I’m talking about isn’t called something sexy like Jazz; it’s called something really ugly, from the people who thought that MQ Series was a sexy name. It’s called “AD/Cycle”. Oh, and there’s another little problem, besides the ugly name—that GC26-4531-0 manual is dated 1989 (and, yes, I was around then, and I do have piles of manuals that old to prove it, although I’m in the process of throwing them all away).

IBM Rational Software Conference (RSC)
However, I really was at IBM RSC and, as it happens, I do think that IBM’s 2009 Jazz platform has a lot going for it. I just can’t help remembering that I once thought that AD/Cycle had a lot going for it too, back in 1989. I remember talking then about a tool that would animate requirements models, so that business users could work through their user stories with their developers and any issues could surface really early on, when they were really cheap to fix. I also remember suggesting that the UI should look a bit like the back of a fag packet [aka cigarette packet, a cardboard box used to hold popular but lethal drugs in olden days, much used for roughing out plans and designs over a pint in a pub], to make sure that the business didn’t mistake a simulation for the real thing.

So, what’s different now? Well, IBM has learned something about expecting everyone to buy into a vendor-owned platform—Eclipse is a lot less threatening than SAA back in 1989 and although Jazz isn’t Open Source (and isn’t likely to follow the Eclipse model anytime soon) it doesn’t make you use a mainframe repository and expect you to use OS/2 workstations. The technology is more powerful now too, with enough spare CPU cycles to make the platform designers’ vision of a smarter planet for systems developers something like a reality—anybody else want a 3-d back-of-a-fag-packet UI metaphor?

Then again, we have UML 2 as a lingua franca. Back in 1989, the shape of the boxes in your diagrams was more important for most people than the semantics of the solution. And we now understand that process isn’t an end in itself but a means to delivering business services—”just enough” process is OK, and perhaps we are now allowed to value people over process and tools (that certainly wasn’t true in the banks where I was working around 1989/90).

Maturity
However, what I think is really different now is the general standard of maturity. It’s no longer a yes/no binary switch: either “you’re all rubbish and the computer can do your jobs” or you’re some sort of superprogrammer hero and everything you do “just works”—largely because you have a lot of unheroic minions to clear up after you, and the work they do isn’t counted as a cost. Maturity is now seen in terms of an improvement journey.

Most people (well, over 50.0001% of people anyway—caution, entirely made-up statistic) now recognise that unless you know what you’ve got, where it is, how it’s configured and who can use it, everything (from security to automated business) is built on sand. Knowing what you have and where it is is a useful goal, and much more achievable than aiming for a perfectly proactive, metrics-focussed culture where everything is fixed before it goes wrong—although also having “improvement” as a goal might even get you to that nirvana in the end.

I very much hope, with some anecdotal evidence in support, that both hero and blame cultures are starting to disappear. Hero cultures must go because they are high risk. It’s always possible that your hero is subconsciously making mistakes because fixing them at 03:00 is such an ego boost and so rewarding. And eventually heroes burn out, or their families won’t let them come in at 03:00 any longer.

And blame cultures must go because they kill any chance of improvement stone dead. If you don’t review what you’ve done for what you can learn from it, you’ll never improve—and who’ll risk a post-implementation review if you might get the blame for any issues you discover? Besides, you can’t show anybody that you’ve improved if you don’t take a baseline first—and in a blame culture, who wants to take a baseline?

CMMI
Nevertheless, I think we are maturing, and the idea of saying what we’ll do, doing it, and then measuring our success (or lack of it) so we can do it better next time is catching on—and, I think, we largely have the SEI’s CMMI initiative to thank for this. No, I am not suggesting that everyone is, or even should be, undergoing a Class A appraisal at CMMI ML5 (Maturity Level 5, the highest level), but I do think that CMMI has influenced IT culture and introduced the idea of “process maturity” as a journey to the IT community in general.

First of all, you discover what you have, where it is and who is responsible for it. Then you look at the gaps between where you are and where you want to be and, perhaps, identify processes that seem to be working well for particular teams—so other teams can discard processes that aren’t working well and adopt ones that are. You compare what you do against an accepted industry framework (which might be process-prescriptive, like ITIL, or higher-level and process-agnostic, like CMMI) and decide whether you want to move your whole culture towards a more agile way of delivering automated business services—or whether a lot of your processes are “good enough” and you only need to improve certain key processes (or institutionalise existing good processes across the whole organisation). What I’ve just described is very much progress to CMMI ML3, I think, whether you call it CMMI or not.

ITILv3
As evidence that CMMI has legitimised that way of thinking for the industry in general, I’d cite initiatives like TMMI, the testing maturity model, and the fact that ITILv3 introduced a “Continual Service Improvement” volume. As it happens, I’d maintain that formal CMMI is a useful tool to help organisations adopting ITILv3, say, assure themselves that they can really do what they say they are doing and thus really achieve the benefits they anticipated from ITILv3—but informal Class C appraisals at ML2 or ML3 would be quite adequate for that. I also think that learning a bit about CMMI and applying what you learn to your internal process improvement initiatives (without mentioning CMMI, if you like) might be a good-enough starting place for many organisations (you can always revisit formal CMMI when, or if, there are business reasons to do so).

Nevertheless, I do like ITILv3 in itself, partly because of its new-found service improvement model. I see it very much as a pragmatic and practical good-practice guide for organisations that don’t have a lot of formal process and want some, perhaps to satisfy governance requirements, and it is full of practical stuff you can borrow and adapt. CMMI is at a higher level and is process-agnostic—it’s for people who have effective procedures and processes and want to improve them, and perhaps identify and address gaps in the range of processes the organisation has. I personally can’t see much point in adopting both formally—invest in ITIL or CMMI and steal what is useful from whichever you haven’t adopted formally. That said, CMMI practitioners at least are thinking about joint appraisals, since adopting ITILv3 effectively ought to satisfy a lot of the CMMI requirements, especially at ML2 and ML3—but that’s in the future.

MCIF
However, going back to the IBM RSC, there’s a new kid on the block, which also delivers process improvement, but which (I think) is very much at the ITIL level, complementing rather than replacing CMMI. This is MCIF (Measured Capability Improvement Framework), billed as “a systematic approach to software excellence”. It’s based on IBM’s experience of moving a major software development organisation towards “Agile at Scale”. In other words, Agile at Scale is a disciplined agile approach that encompasses very large distributed teams delivering very large commercial software projects, as well as the little teams delivering little bits of useful business functionality, and IBM ought to have gained knowledge from implementing it that could be useful to other organisations.

MCIF has 4 phases:

  1. Elicit and set business value objectives (business requirements, scope and development approach).
  2. Determine the solution components (incremental improvement roadmap and financial analysis).
  3. Accelerate and monitor solution adoption using Rational’s preferred approach (deploy existing Rational and other development tools, adopt a usage model to maximise ROI, measure and adapt)—and no, Clarice, this isn’t a completely process-agnostic approach; but lots of people do choose Rational tools.
  4. Review and communicate business results (compile documentation of business value and results; and conduct a post implementation review with the stakeholders, to identify areas for improvement).

To my mind, MCIF adds an extremely useful focus on the delivery of business results to the usual application delivery process, one that is entirely complementary to both CMMI and ITILv3—so, if it suits you, it is a good thing. After all, how many IT organisations are currently delivering business results to their business stakeholders so effectively that they’re in a position to criticise MCIF? It reads like common sense to me, especially for organisations already committed to Rational development processes.

IBM is delivering tools and documentation for MCIF as part of its Agility at Scale approach, including outlines of an Executive Business Value Workshop for Phase 1; a software delivery Health Assessment for Phase 2; solution content, agile practices in RMC (Rational Method Composer) and deployment services for Phase 3; and IBM Rational Insight for tiered performance measurement in Phase 4. IBM plans to extend this into the systems domain (that is, to systems engineering and embedded software practices) in the near future.

Where next?
So, we’ve had a brief gallop through underlying process improvement and maturity initiatives, to help explain why I think development automation might fare better in 2009 than it did in 1989. Now, what about 2019? Well, I’m too old to pontificate on development in 2019 until 2020 (making the assumption that I’m not too old to be around by then), but I was interested to hear Dr Danny Sabbah (general manager of Rational Software, IBM Software Group) at RSC, talking about a development initiative he’s involved in for a big American city. Here, the focus isn’t on business results but on social results—on developing systems in such a way that society as a whole gains benefit. Now, perhaps that is a welcome glimpse of a possible future…