There is a serious governance problem with most IT developments—the code, which is a model of the business implemented by the automated system, doesn't match the requirements model that the business agreed to. There always is a requirements model, even if it is just something floating around in several heads. But the situation is often even worse than that implies, because these 'virtual' requirements models may not agree with each other and may themselves have little relationship to what the business thinks it wants, in the real world. Of course, if you are lucky enough to have a written requirements model, it's probably ambiguous and incomplete—and probably also out-of-date.
The key to addressing this governance issue is automated testing—not just testing the code (although that isn't a bad idea, of course) but testing the requirements model in an electronic form. The most cost-effective use of testing resources is testing the conceptions—or misconceptions—that the major stakeholders in the automated business have around their new system. You can then remove misunderstandings that people have about the way the world works—possibly months before you write any code (and thus before you possibly waste resources on the wrong code).
But, we know all this and have known it for half a century or so. I've just had an interesting conversation with Huw Price, MD of Grid-Tools, about why we persist in doing it wrong and what we can do about it.
It's largely a cultural issue, we think. Model-driven development is fine for people who like it; people like computer scientists, who are fully aware that checking a model for completeness and consistency—and for muddled thinking—is an extremely cost-effective way of removing defects from a system, well before you start wasting time on coding solutions to the wrong requirements. However, lots of people who aren't trained as computer scientists or systems engineers, and aren't building things like computer-operated aeroplanes, simply like coding more than they like modelling, and regard coding a solution several times until it addresses the right requirements as just more fun than mucking about with pictures.
Unfortunately, the businesses who have to pay programmers while they are writing the wrong code are unlikely to be so happy with this situation. And ambitious programmers are starting to realise that acting like technology gods and wasting the business' money isn't good for their careers any more. So, perhaps there is now hope for a different approach, which defines requirements in terms of test cases (that fits well with agile programming approaches)—and encourages developers to get the specification right, up front.
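Defining requirements as test cases can be as simple as writing the agreed business rule down as executable assertions before any production code exists. A minimal sketch, assuming a hypothetical shipping rule (the rule, threshold, and function name are invented for illustration, not taken from any real system):

```python
# Hypothetical business rule, captured as executable test cases up front:
# "orders of 100.00 or more ship free; otherwise a flat 4.99 charge applies".

def shipping_cost(order_total: float) -> float:
    """Candidate implementation of the agreed rule."""
    return 0.0 if order_total >= 100.00 else 4.99

# The requirements model, as tests the stakeholders can read and sign off on:
assert shipping_cost(100.00) == 0.0   # boundary case: exactly 100 ships free
assert shipping_cost(250.00) == 0.0   # well above the threshold
assert shipping_cost(99.99) == 4.99   # just below the threshold pays the flat rate
```

The point is that the assertions, not the function body, are the contract: the stakeholders can argue about the boundary case (does exactly 100.00 ship free?) months before anyone codes the wrong answer.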
Grid-Tools is promoting its Agile Designer, which claims to "reduce the creation of software defects by up to 95%"—although with my tester's hat on I have got to say that's not a "requirement" that's going to be hard to satisfy (no reduction at all in software defects satisfies the test). That aside, however, Agile Designer is a tool which uses maths to calculate all logic paths through a design model and generate something like the minimum set of test cases needed to cover the system. The focus on test cases that developers can actually use should make the difference to programmer acceptance of this tool, although the proof of this is going to be in the actual user experience stories. After a brief overview, though, the tool looks good to me, and there are already some enthusiastic users. Of course, the reaction of hoi polloi in development, with little interest in anything beyond coding, will be the ultimate test, and we don't really expect to find this kind of developer amongst the early adopters.
There are other design modelling tools, using powerful modelling languages such as UML and SysML. This one differs in making a real attempt to be programmer-friendly, by using an enhanced flowcharting model. You produce a 'storyboard' for your test cases (every design requirement must be testable, or it isn't much use) by linking decision boxes together (using true or false outputs) into a flow chart and then simply click a button to get a choice of optimal test path designs, covering all ‘happy’ and ‘unhappy’ paths (including loops), in the smallest number of test cases. This is well worth evaluating in your own environment, we think: a free trial download is available from its own website, http://www.agile-designer.com/.
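The underlying idea—enumerate every true/false path through a decision flowchart, then pick a small set of paths that still exercises every branch—can be sketched in a few lines. This is a simplified illustration, not Agile Designer's actual algorithm; the form-validation flowchart and node names are invented:

```python
# Each decision node maps to its (true-branch, false-branch) successors; a
# failed name or email check records an error and carries on to the next check,
# so both branches lead to the same node. Strings with no entry are terminals.
FLOWCHART = {
    "name_ok?":  ("email_ok?", "email_ok?"),
    "email_ok?": ("age_ok?", "age_ok?"),
    "age_ok?":   ("accept", "reject"),
}

def all_paths(node, decisions=()):
    """Depth-first enumeration of every path, as (node, outcome) pairs."""
    if node not in FLOWCHART:            # terminal outcome reached
        yield decisions
        return
    true_succ, false_succ = FLOWCHART[node]
    yield from all_paths(true_succ, decisions + ((node, True),))
    yield from all_paths(false_succ, decisions + ((node, False),))

paths = list(all_paths("name_ok?"))

# Greedy set cover: repeatedly take the path covering the most uncovered branches.
uncovered = {branch for path in paths for branch in path}
chosen = []
while uncovered:
    best = max(paths, key=lambda p: len(set(p) & uncovered))
    chosen.append(best)
    uncovered -= set(best)

print(f"{len(paths)} paths in total; {len(chosen)} suffice for branch coverage")
# → 8 paths in total; 2 suffice for branch coverage
```

Even this toy model shows why the optimisation matters: the number of end-to-end paths grows exponentially with the number of decisions, while the number of test cases needed for branch coverage grows far more slowly—and a tool can find the small covering set mechanically.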