Content Copyright © 2014 Bloor. All Rights Reserved.
This blog was originally posted under: The Norfolk Punt
I have been writing for some time about the idea of developing software in an environment that makes putting defects into programs much harder; see, for example, a series of articles I wrote for The Register back in 2006, here. Now I have noticed some developments around Ada that make the idea worth revisiting, perhaps in a more general business context.
The basic idea is that where software defects are unacceptable (in military control systems, aerospace and similar applications), it is worthwhile to use formal proof to eliminate whole classes of defects, and to use a subset of Ada, say, which doesn't allow you to code unsafe constructs. (Obviously, you can't remove all possible defects this way; specifying an incorrect requirement might be a bit of a killer.) The increased training and the extra effort of writing the code are justified by the savings from getting it right first time, by a reduction in expensive (and somewhat unreliable) testing processes, and by the savings from passing regulatory certification more easily. In many systems you needn't build all the code to these standards, just the core frameworks and key processes; and there is a healthy 'safety critical' niche where such practices are routine.
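The flavour of this approach can be sketched in Python, purely for illustration (the function and its limits are invented for the example). SPARK 2014 expresses such contracts in Ada syntax and discharges them with static proof before the program ever runs; this sketch can only check the same conditions at run time:

```python
# Illustrative sketch only: SPARK 2014 attaches pre/postconditions to Ada
# subprograms and proves them statically; here we merely check at run time.

def saturating_add(x: int, y: int, limit: int = 1000) -> int:
    """Add two readings, clamped to a safe limit."""
    # Precondition: inputs must already be within the safe range.
    assert 0 <= x <= limit and 0 <= y <= limit, "precondition violated"
    result = min(x + y, limit)
    # Postcondition: the result can never exceed the safe limit.
    assert 0 <= result <= limit, "postcondition violated"
    return result

print(saturating_add(600, 700))  # clamped to 1000
```

The point is not the run-time assertions themselves but that, in SPARK, a prover establishes once and for all that no call can violate them, which is what eliminates the whole defect class.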
Now, the scope of what is considered in need of bullet-proof reliability is increasing. People are realising that the failure of, say, financial risk management and control systems can put banks out of business and even precipitate a world financial crisis, and that this can actually kill people (suppose a pensioner's pension pot disappears). Arnaud Charlet, AdaCore's technical director and product manager for the CodePeer and SPARK products, tells me that the Japanese automotive industry is becoming increasingly interested in formal methods; recent recalls over defects in car braking systems (which are increasingly software-controlled) and the like have been catastrophic to its hard-won reputation for quality. He also tells me that ARM and other more general developers are expressing solid interest, as certification (e.g. to the automotive ISO 26262 standard) becomes increasingly important; and he points out that a new interest in software warranties may be a driver for better development practice.
There is quite a lot more to SPARK 2014, of course. It has been extended to accommodate more of the structures and techniques most programmers like to use, without compromising 'safety'. It has some really useful tools (such as AdaCore's CodePeer static code analysis), links to third-party tools (such as Valgrind dynamic code analysis) and so on. In fact, the most cost-effective way to build a reliable C++ program (if that isn't an oxymoron) is probably to build it in SPARK and cross-compile to C++ (although I really can't see why you wouldn't just stick with Ada 2012).
So, why isn't this stuff more used in business systems? Well, a lot of it is cultural. If you call a SQL injection vulnerability, for example, a "bug", or an "exploit by a genius hacker we couldn't possibly have anticipated", then the business users may put up with the occasional SQL injection exploit, because "you can't fix everything". Once the business realises that SQL injection is just the result of shockingly bad programming by an unprofessional, badly trained or incompetent programmer, perhaps tolerance for it will decrease. And then general developers may begin to see some merit in low-defect programming, with SPARK perhaps, where it is appropriate.
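To make the point concrete, here is a minimal Python sketch, using the standard-library sqlite3 module (the table, column and user names are invented for illustration), of the difference between injection-prone string building and a parameterised query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Shockingly bad: user input is concatenated straight into the SQL,
# so the attacker's quote characters rewrite the query itself.
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # the OR clause matches every row: [('admin',)]

# Basic professionalism: a parameterised query treats the input as
# data, never as SQL, so the injection attempt matches nothing.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # []
```

The fix costs nothing once the habit is in place, which is exactly why tolerating the vulnerability is a training and culture problem, not a technical one.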
You get defects in systems because you 'want' defects in systems. You show programmers that you'll tolerate defects by rewarding them for delivering early and then fire-fighting the consequent production problems; by not involving all stakeholders early enough in the development process; and by not investing in the training, environments and processes that discourage putting defects into systems in the first place. In a world that runs on software, this is increasingly going to be seen as unacceptable. I'd see tolerating software defects as a serious governance issue: one that destroys the Trust needed if a company is to rely on software for its business, and that also limits its Freedom to allow its employees to orchestrate its software assets in new and interesting (profitable) ways.