Not only does it cost money in rework; it can also damage morale if the software is for internal use, and damage your reputation if it is exposed to public gaze. In the latter case it can hit your bottom line directly if, for example, you have to take down your website while the problem is fixed (cf. National Westminster Bank: according to the Daily Telegraph, “in June 2012, a failed software update locked millions out of their accounts for up to a month”; and, more recently, the £3m fine incurred by EDF when, to quote the Times, “a new system for handling calls went into meltdown”).
Websites have to be taken down, and application software fails to do what it is supposed to, because the software contains defects. These defects should have been detected and fixed before the application was allowed to go live. Why didn’t that happen? Because the software was not tested properly. Why wasn’t it tested properly? Typically, the answer is twofold. First, the development team is under pressure from the business to get the new application up and running as soon as possible, ignoring the old saying about more haste, less speed. Second, budget constraints in the IT department make it easier to hire more (temporary) testers to do a rush job than to get approval to license the kinds of tools that would make the whole testing process more efficient and accurate, and that would automate a significant part of the testing lifecycle.
In this paper we focus not on testing per se but on automating the testing process. That, we believe, is where the greatest savings and efficiencies can be achieved.