Testing Summit

Content Copyright © 2008 Bloor. All Rights Reserved.

Many years ago when I first met Mercury (the Business Technology Optimisation people), it told me that its major competition was from people who didn’t do formal testing. When I ran into Colin Robb of HP (Mercury’s new owners) at last week’s Test Management Forum (TMF) Testing Summit at the IoD in London, he said much the same thing; as did Dan Koloski of Empirix. It’s not that people don’t test at all (I hope) but that they test inefficiently or ineffectively. If you test manually, you get through fewer tests with the resources available. Worse, if you don’t take a structured risk- or cost-based approach to testing, you are probably happy when you fire up 100 tests and your program passes every one—whereas a real tester regards that as a waste of resources, since it is unlikely that your program is really bug free, so 100 tests that don’t find one of your bugs are 100 wasted opportunities. However, without automation, keeping track of your structured testing plan is hard and reusing test cases efficiently is difficult.
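
To make the automation point concrete, here is a minimal sketch in Python (using the pytest library; the function under test and the test values are invented purely for illustration) of how a data-driven, automated test keeps the test plan in one place, makes every case reusable, and re-runs the lot on each build:

    # A minimal data-driven test sketch (Python / pytest).
    # The function under test and the cases are hypothetical examples.
    import pytest

    def apply_discount(price, rate):
        # Hypothetical function under test.
        return round(price * (1 - rate), 2)

    # The test plan lives in one table: adding a case is one line, and
    # every case is re-run automatically whenever the code changes.
    CASES = [
        (100.00, 0.10, 90.00),   # typical value
        (100.00, 0.00, 100.00),  # boundary: no discount
        (0.00,   0.50, 0.00),    # boundary: zero price
    ]

    @pytest.mark.parametrize("price, rate, expected", CASES)
    def test_apply_discount(price, rate, expected):
        assert apply_discount(price, rate) == expected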

So, some things don’t change. Lots of people don’t test effectively because they haven’t been trained in structured testing. We once tried to persuade our boss to buy Myers’ book on structured testing for our programming teams; “I can’t do that,” he said, more or less, “those guys are working unpaid overtime firefighting production bugs; I can’t ask them to give up what home-life they have left to read books on structured testing”—go figure. Even more people aren’t testing properly because manual testing is hard work and their company won’t buy tools which could help—or because the money/time/resources have run out before testing starts and the deadline is non-negotiable (and no, you shouldn’t be running all your testing as the last barrier to deployment—but some people do).

But we were at the Testing Summit, sponsored by Sogeti, nFocus, Insight, HP, SQS, Empirix and Experimentus, to find out what was new in testing. And we must just say that our hosts, Paul Gerrard of Gerrard Consulting and his team, did an excellent job with the Summit—we’ve seldom had such a good interactive experience at a technical conference. If you want to attend the Summit or the associated quarterly Forums, please register through the uktmf.com website.

What is new, then, is an increased recognition of the importance of a holistic approach to testing and, perhaps, recognition that it’s assured “business outcomes” that matter, not just an absence of coding errors. One attendee even reported that the term “Business Assurance” was being used instead of “Testing” in his organisation. The old silos are disappearing all over the place—CA, for example, wasn’t at the Summit but is also interested in integrating mainframe application QA and the monitoring of production web application performance as part of the same business governance process.

Picking out just a few of the speakers for a detailed report, Geoff Thompson of Experimentus led a spirited discussion on “Test Process Improvement and the Test Manager”. The general conclusion we came to was that testing process improvement was important but often undervalued by the companies involved (which might make testing process improvement a poor choice of career). Process improvement in technology areas is harder than many business managers think, partly because you can’t improve one process in isolation—improvements in the testing process may involve the implementation of a better requirements management process and the business focussing more on defining the “business outcomes” it is expecting. In other words, process improvement in one area often catalyses process improvement elsewhere—and then politics and vested interests become considerations.

Thompson is involved in a test process improvement initiative that deserves wider recognition: the TMMi Foundation (TMMi stands for Test Maturity Model Integration, by analogy with the SEI’s CMMI). TMMi came about because the CMMI maturity initiative doesn’t really cover testing fully; now that the work of formulating the model is more or less done, the SEI might even take it on.

One issue with testing is a shortage of really talented testers, and Paul Gerrard led a discussion about transforming business people into testers to help address this skills shortage. There was a general feeling in the group that the testing industry needed to communicate the career path for testers better. Clerical workers moving into testing can probably double their salaries—and a lot of computer hobbyists probably already have usable experience in finding errors in software. A new re-training company, Aqastra, was launched at the Summit; its specific mission is to re-train business people as testers.

There was also some discussion of testing as a career path for currently unemployable people with “autistic tendencies”—who are often otherwise very bright and obsessive about detail.

Colin Robb (Worldwide Product Marketing EMEA, HP Application Quality Management) talked about security testing. This seems to be a new focus for HP following its acquisition of SPI Dynamics.

It seems that some people in the testing industry are not particularly aware of security testing (some attendees were unaware of, for example, SQL Injection, and hadn’t thought about the importance of “non-functional requirements” for security arising from, for example, Data Protection regulations), although some people from traditionally security-conscious organisations such as BACS and the BBC were more aware of the issues. Security testing seems to be “silo’d”, with lots of possibilities for defects to slip through the cracks. For example, GCHQ defines “pen testing” (penetration testing) as very much limited to hardware-based exploits; but the general impression among developers is that it also involves application security and social engineering, which means they may assume that professional pen testers have done more testing than they actually have.
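
For anyone who has not met SQL Injection, a minimal sketch (Python with the standard sqlite3 module; the table and the attack string are invented for illustration) shows how a query built by string concatenation can be subverted, and how a parameterised query avoids the problem:

    # Minimal SQL Injection sketch (Python, sqlite3); the schema is invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    user_input = "' OR '1'='1"  # a classic injection string

    # Vulnerable: the input is concatenated straight into the SQL, so the
    # WHERE clause becomes always-true and the query returns every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'"
    ).fetchall()
    print("concatenated query returned", len(rows), "row(s)")   # 1 row leaked

    # Safer: a parameterised query treats the input as data, not as SQL,
    # so the injection string simply matches nothing.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("parameterised query returned", len(rows), "row(s)")  # 0 rows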

There was some discussion of a proper Security Policy as the “spec” for security testing to test against—but quite a lot of organisations don’t have a formal security policy at all and others don’t have a structured, hierarchical policy connecting abstracted generics with specific physical controls. We made the point that an application defect which incidentally allowed someone to move funds illegally without an audit trail was also a security defect—would the application testers handle it or the security testers—or neither, because each thinks it’s the others’ responsibility? A holistic “application assurance” approach to testing really is a good idea if you want to be confident in the “business outcomes” developers are probably paid to deliver.

Dan Koloski (CTO and Director of Strategy and Business Development at Empirix) led a discussion of collaborative performance testing. In essence, his thesis was that performance testing has to complete in a short window between the completion of the code and its deployment; so a collaborative team including developers, DBAs and so on, as well as testers, can actually fix faults in real time as they are discovered and make the best use of this window. This seemed to attract general agreement, although Koloski points out that “it is not the standard operating practice in most organizations today”. However, the issue of “silo’d” testing came up again (perhaps the performance testers could usefully add their insights to the application design process if they were involved earlier), and the need for early identification of potential bottlenecks was noted. Analysing the architecture for potential bottlenecks (according to Koloski, 75% of performance issues are platform-related), for example, can start much earlier than performance testing proper.
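
As a rough illustration of starting the analysis earlier, a developer can run a very small concurrency probe against a single component long before the formal performance-test window opens. The sketch below (Python, standard library only; handle() is a hypothetical stand-in for the component under suspicion, simulated here with a lock) simply reports how latency degrades as concurrency rises:

    # Minimal concurrency probe (Python, standard library only).
    # handle() simulates a serialised resource (e.g. a single connection).
    import statistics
    import threading
    import time
    from concurrent.futures import ThreadPoolExecutor

    lock = threading.Lock()          # the hypothetical bottleneck

    def handle():
        start = time.perf_counter()
        with lock:                   # only one caller at a time gets through
            time.sleep(0.01)         # 10 ms of "work"
        return time.perf_counter() - start

    def probe(concurrency, calls=50):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = list(pool.map(lambda _: handle(), range(calls)))
        return statistics.median(latencies), max(latencies)

    for c in (1, 10):
        median, worst = probe(c)
        print(f"concurrency {c:2d}: median {median * 1000:.0f} ms, "
              f"worst {worst * 1000:.0f} ms")
    # If latency grows sharply with concurrency, the serialised section has
    # shown itself long before a full end-to-end performance test would.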

Finally, before we all repaired for a very nice dinner, Paul Herzlich (Principal Analyst at Ovum) reported on a survey (commissioned by the TMF) of perceptions of testing, drawn from the Summit mailing list. This got a few very odd responses (lots of people claim to use “formal proof”, for example, which is very unlikely to be true), but a full analysis will be available soon. The general view on an initial look was pretty positive, both about testing itself and its place in the organisation. Nevertheless, we were left with the feeling that perhaps a lot of potential respondents, test managers actually running teams at the coalface, genuinely hadn’t had time to respond and might have been rather less satisfied with the status quo if they had! And Herzlich says that, “if the survey did show up a weakness—a challenge everyone wants to attack—it was in the area of automation”, an issue we’ve already mentioned.

The next Testing Summit is on 30th January 2009, and CAST (the 3rd Annual Conference of the Association for Software Testing) is being held in Toronto, July 14–16, 2008.