Testing is broken

Written by: Philip Howard
Published: 2nd March, 2015
Content Copyright © 2015 Bloor. All Rights Reserved.

There are several problems with testing. The first is that outsourcers, systems integrators and so forth have little interest in the adoption of testing tools that might help to speed up or automate testing processes, because they are body shops: better tools mean fewer bodies to sell. The second is that developers and testers tend to think that if they aren't hand coding and working with a command line interface then they aren't "real" developers: too much "not invented here", and too much "it may be good enough to automate business processes, but we're above that sort of thing". The third is that licensing testing tools is a capital expenditure while hiring bodies is an operational cost, so the bean counters are the third barrier to improvement.

The net effect of these attitudes is that there aren't many vendors specialising in the testing market, and there are even fewer who are being innovative. There are too many incentives to maintain the status quo, which plays into the hands of the body shops and the hand-coding junkies.

Consider, for example, manual testing. This is defined by Wikipedia as "the process of manually testing software for defects. It requires a tester to play the role of an end user and use most or all features of the application to ensure correct behaviour. To ensure completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases". For example, you have to manually test every possible combination of entries that a user might make while working in a browser: what happens if you click this button followed by that one, or vice versa?
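To see how quickly those combinations multiply, consider a minimal sketch (the form controls and their values below are hypothetical, chosen only to illustrate the arithmetic):

```python
from itertools import product

# Hypothetical browser form; the controls and their values are illustrative.
controls = {
    "payment":    ["card", "paypal", "invoice"],
    "shipping":   ["standard", "express"],
    "newsletter": ["yes", "no"],
    "gift_wrap":  ["yes", "no"],
}

# A manual tester checking every combination of entries faces the full
# cartesian product of the options:
all_combinations = list(product(*controls.values()))
print(len(all_combinations))  # 3 * 2 * 2 * 2 = 24 cases
```

And that count covers only the values entered; it ignores the order in which buttons are clicked, which multiplies the workload further still.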

Even if you have never done manual testing, it is easy to see how boring and time-consuming it must be. Do this, do that, note results. Do something else, note results. And so on. And on. Often for weeks or months at a time.

However, it's not just the manual execution of tests that eats up time and budgets. Consider the test cases that manual testers are testing against. In the vast majority of companies these test cases are generated manually too. And even when that's not the case, the majority of the tools on the market are either too complex or not sufficiently complete. For example, pairwise testing takes no account of real-world constraints.
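To make the pairwise point concrete, here is a minimal greedy pairwise generator that does respect a constraint. Everything in it is hypothetical (the parameters, values, and the "Safari only runs on macOS" rule), and production tools use far more sophisticated algorithms; this is only a sketch of the idea:

```python
from itertools import combinations, product

# Hypothetical test parameters (names and values are illustrative).
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os":      ["Windows", "macOS", "Linux"],
    "payment": ["card", "paypal"],
}
names = list(params)

# A real-world constraint that naive pairwise generation ignores:
# Safari only runs on macOS.
def valid(case):
    return case["browser"] != "Safari" or case["os"] == "macOS"

all_cases = [case for case in
             (dict(zip(names, values)) for values in product(*params.values()))
             if valid(case)]

def pairs(case):
    # Every (parameter, value) pairing that this test case exercises.
    return {((p1, case[p1]), (p2, case[p2])) for p1, p2 in combinations(names, 2)}

# Pairs that must be covered: only those achievable by some valid case.
needed = set().union(*(pairs(case) for case in all_cases))

# Greedy selection: repeatedly take the valid case that covers
# the most still-uncovered pairs.
suite, uncovered = [], set(needed)
while uncovered:
    best = max(all_cases, key=lambda case: len(pairs(case) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(len(all_cases), len(suite))  # the pairwise suite is far smaller than exhaustive testing
```

The point is that a usable tool has to do both jobs at once: shrink the exhaustive set while honouring the constraints, so that it never emits an impossible case such as Safari on Windows.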

Nor is this all. Management of test cases is, in most cases, non-existent, and old test cases never die. In addition, duplication of test cases is rife, which is another example of the not-invented-here syndrome: why reuse an existing test case when you can build your own? And when it comes to actually performing testing there is, again, too much duplication, with over-testing of the wrong things and under-testing of the right things.

The bottom line is that testing is broken. It needs far more automation. Every other process in the business is subject to automation: why not testing?

Readers' comments

There have been 6 comments:

  1. Jon Freeman said on :

    "Spot on! This, like all technology-driven change, starts out slow then accelerates as early adopters show the productivity and economic improvements. It is also the reason I came to work for TurnKey Solutions: the chance to engage a change needed by any business using software, following a long line of similar challenges in my career. Thanks for the essay; it helps shine a light on the problem. We must remove the ignorance of the problem first, and you are helping with that."

  2. Paul Seaman said on :

    "I think if you want feedback on this article you should pop across to the following LinkedIn forum."

  3. Lars Boelen said on :

    "Interesting thoughts. I think that manual testing can and should comprise much more than "doing what end users do, but for days on end", but it is an important task of course. That is why the software test manager from BStriker.com uses advanced automation: scripted test cases can be run and validated with the click of a mouse, passing runtime parameters on the go (making test scripts flexible). Using a test management tool will ensure no cases or platforms are missed, and the efficiency of the automated tests frees up time to implement much-needed other testing techniques to improve quality. Check them out: BStriker.com's BShape!"

  4. Neil Price-Jones said on :

    "I agree that it needs more automation but we need to get out of the 'more is better' syndrome first and plan our way to better testing. It will not happen with just automation or even with manual testing. It is proper determination of what is best that will solve the problem."

  5. Janet said on :

    "Testing is broken if you are still doing it manually. With all the free software out there to make automation easier there is only one reason not to automate: skills. Get your developers writing unit tests, designing code for testability, and hire SETs that can automate at the web service/API level. Grow your team so that it can automate, and use non-technical testers to write Consumer Driven Contracts or BDD models. Stop or reduce system testing, as it is too slow and brittle to automate. Read the book "Building Microservices" by Sam Newman."

  6. Chris said on :

    "The value of automation varies HIGHLY depending on what you're testing. If you are testing device firmware, where a test harness can communicate directly with the firmware, it's great. For testing a software back-end's response to API calls, it's dandy. For testing the 35th iteration of software that has minimal UI changes, it's reasonably good (provided you have the correct hooks in place for automation).

    When the testing is primarily UI driven, and the interface is in flux, you will spend more time rewriting automated cases than you would on manual testing. Testing on mobile devices adds even more automation challenges, as they're generally designed to prevent remote manipulation of code (and testing on an emulator is vastly different from execution on a real device)."
