IBM Rational Software Conference – Booch on Day 3

Written By:
Content Copyright © 2009 Bloor. All Rights Reserved.
This blog was originally posted under: The Norfolk Punt

The highlight of the third day at RSC was, as usual, Grady Booch’s roundtable discussion session. He expressed a certain disappointment in today’s modelling tools—which are now a long way from the simple visualisation tools originally envisaged by the three amigos. With respect to UML 2, he apparently thinks that:

  • Quite a lot of it is now programming/engineering-oriented, and he isn’t that keen on “high ceremony process”, except, perhaps, for safety-critical applications.
  • It displays the “second-system syndrome”: its designers got right this time what they got wrong before, rather than, perhaps, getting right what we will need now.
  • It lacks deep consistency (it’s rather “designed by committee”) [although I’d think that the MOF metamodel must help]. Grady agrees that it has a semantically more precise metamodel now, but thinks that this has brought added complexity; “it needs refactoring”.
  • Satisfying model-driven development use cases adds complexity, at the expense of pure visualisation.

Grady contrasts what we need for “forward engineering” with what we need for “reverse engineering”—which he is very interested in these days. “We need a ‘systems grokker’ to help us mine architecture from legacy code,” he says.

That makes a lot of sense: much of our systems automation has already been done at the fundamental level, and recoding business logic from scratch when it hasn’t changed, just because a services or Web presentation is needed, is awfully wasteful.

Grady points out that “a subset of UML is often enough—perhaps the OMG should look at a UML Lite? It could find out what people are actually using in UML and put a box around it…”. And, of course, UML is designed to enable the production of “well-formed subsets” simply by sub-setting the meta-model. I suggested that SysML was already, to some extent, a refactoring of UML 2 (especially around constraints and the fearsome Object Constraint Language) and Grady seemed to agree: “SysML has some nice constructs for forwards engineering—but do you need them for reverse engineering?”. In the case of constraints, I’d probably say yes—but what do readers think?

I was interested in Grady’s view of Enterprise Architecture and standardised architecture frameworks. I think he’s a bit worried about tying this down too early—and we haven’t yet reached “critical mass” for holistic modelling of enterprises. He points out that innovation came out of last century’s methodology wars and innovative ways of visualising and using architecture in automated systems development are needed now (for a start, he thinks, the DoDAF framework could be refactored—perhaps UPDM is a chance to do this).

We need a delivery architecture which concentrates on incremental and iterative delivery of executables, he thinks; although I’d perhaps add that delivering the “right” executables is important, and that a process doesn’t have to be automated for documenting it to be useful (it enables improvement by refactoring, for example). However, “delivery architecture” is just another way of expressing the idea that an architecture must be “actionable”, I think—pictures of your business/technical operation that can’t be transformed into the day-to-day execution of your business rapidly lose touch with reality and become an expensive and useless overhead.

He sees the Open Group’s TOGAF as a more “systems theory” (organisations as analogues of living things) view of architecture, as opposed to the more technical MODAF and DoDAF views. And, “security by design” is very much an architectural issue—when the world runs on software, software architectures must be inherently, architecturally, secure. Security is not something you can bolt onto an insecure architecture—at least, not without a lot of expense, wasteful rewriting of old code, compromises and upsetting your existing customers who’ve come to rely on insecure features of your system (as certain operating system companies have found out). And, then, how secure is the result?

We must be careful to avoid only solving yesterday’s problems. For instance, the “smarter world” IBM envisages, with widely distributed intelligence in everything we use, may need radically new architectures and modelling techniques. Nevertheless it’s worth noting that a very few discrete architectures have sufficed to describe what we have so far, so perhaps they’ll suffice for the future too (good architecture is more resilient in the face of change than any physically instantiated system, I think).

IBM’s Watson project (a computer program to play as a contestant in the Jeopardy quiz show—it needs to be able to answer ad hoc and non-straightforward questions) is an example of what we may see emerging in the new smarter world and it has shown that it is sometimes very hard to model things you’ve never thought of doing before; sometimes you just have to “suck it and see”.

An interesting sideline on security came up around this discussion. One of the other analysts present writes M&A news stories on the basis of the sort of “covert channel analysis” he learned while working for the spooks. In other words, by correlating non-secret activities and communications he can, with some level of confidence, infer secret information—an unintended “covert channel” exists by which information can leave the organisation. It’s a well-known and valid technique, although it’s usually described in terms of intentional channels used to hide information transmission, or in terms of lower-level physical correlations (such as power usage) which can be used to leak technical information such as passwords. However, if you can monitor increased communications between two parties (not viewing any content, just traffic analysis—perhaps congestion/response time on the company websites might be a metric) and, possibly, travel patterns and press coverage, it seems reasonable that you could predict, within statistical confidence limits, the existence and date of a merger before it is announced. How many security-conscious companies put that into their risk/threat assessments?
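The core of that traffic-analysis idea can be sketched in a few lines. This is purely illustrative—the data below is invented, and a real analyst would feed in observed metadata (message volumes, response times) rather than these made-up weekly counts—but it shows how correlated spikes in two otherwise-innocent series can flag a possible hidden event:

```python
# Illustrative sketch only: inferring a possible "covert channel" from traffic
# metadata alone. The weekly counts below are invented for this example; no
# message content is examined, only volume.

from statistics import mean, stdev

def z_scores(series):
    """Standardise a series (in standard-deviation units) so spikes stand out."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

def correlated_spikes(a, b, threshold=1.0):
    """Return the time indices where BOTH traffic series spike together --
    the kind of correlation that might hint at, say, merger talks."""
    za, zb = z_scores(a), z_scores(b)
    return [i for i, (x, y) in enumerate(zip(za, zb))
            if x > threshold and y > threshold]

# Hypothetical weekly message counts between two firms' mail domains.
firm_a = [12, 11, 13, 12, 14, 12, 48, 52, 50, 13]
firm_b = [20, 22, 19, 21, 20, 22, 70, 75, 73, 21]

print(correlated_spikes(firm_a, firm_b))  # → [6, 7, 8]: weeks 7-9 spike together
```

A real analysis would of course need far more care (seasonality, multiple comparisons, confidence intervals), but the principle—secret activity leaking through correlations in non-secret observables—is exactly the one described above.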

To finish, here are some Grady thoughts I liked:

  • On architecture frameworks: they deliver “the illusion of Architecture without the need to architect” anything.
  • On Vision: “Vision is a glut on the market. Only some vision is good”.
  • On Models: “All models are wrong, some are useful”—they (may) help us on a journey towards delivering real benefit.
  • On Obama: “He has a CIO and a security officer; he’s the first president to use a laptop”.

Finally, although he’s pretty laid-back, some things do still annoy Grady:

  • Government acquisition processes for high tech stuff which tend to take so long that any chance of effective innovation is stifled and which aim for the “lowest tender” beyond all else.
  • And the woeful lack of completely secure [or, at least, properly risk managed] systems—we still routinely build systems without designing in good architectural security from the start.

As I look around the IT landscape in the UK (the computerisation of the NHS and the way on-line consumer banking works, for a start), I guess they annoy me too.

Blog Tag: RSC2009