Just what is risk based testing, anyway?

Risk based testing is rapidly coming into vogue in the automated testing space. Although risk based testing looks good on the box – the principal idea of ‘test the most important systems most thoroughly’ is certainly sound – in my opinion this trend is worrying, for essentially one reason: no one can agree on what risk based testing actually is.

Everyone seems to be offering it; more to the point, everyone is offering their own version of it, and these versions are often vastly different (sometimes to the point of sharing no similarities at all). ‘Risk based testing’ is not emerging as a new practice so much as it is becoming a new bullet point on the list of features a test automation product must have, for the sole reason that everyone else has it. The perils here should be obvious: not least that the vendor and the client may have very different ideas of what constitutes risk and risk based testing.

Most risk based testing could uncharitably be described as guesswork. Someone, somewhere (probably a manager, maybe a team of managers, in some cases even a tester – the latter actively encouraged by some offerings) assigns a numerical value for risk (or priority, or criticality, or business impact, and so on) to each test or requirement in your testing framework. Depending on the particulars of the framework, this value is used to decide which tests should be run most often and which requirements should be covered most thoroughly. Unfortunately, this kind of ad hoc numerical assignment amounts to little more than a finger in the wind. Having said that, the method, although arbitrary and not in the least scientific, is at least transparent: everyone knows where the risk figure comes from and how it is generated. Sadly, not all frameworks meet even this low bar. Some are just as arbitrary – they still ultimately rely on one or more manually entered figures – but obfuscate that fact behind an opaque, complicated and yet completely unscientific formula for calculating risk. These can also introduce the additional problem of having to define very similar, if not identical, fields in multiple places.
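To make the mechanics concrete, here is a minimal sketch in Python of what this manual scoring amounts to. Everything in it is hypothetical – the test names and scores are invented for illustration, not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    risk: int  # a manually assigned 1-5 'risk' score: the finger in the wind

def prioritise(tests):
    """Run the 'riskiest' tests first; the ordering is only as good
    as the guesses that produced the scores."""
    return sorted(tests, key=lambda t: t.risk, reverse=True)

suite = [
    Test("login", risk=5),           # a manager's guess
    Test("report_export", risk=2),   # another guess
    Test("password_reset", risk=4),  # and another
]

for test in prioritise(suite):
    print(test.name, test.risk)
```

However dressed up, the output is only a re-ordering of someone’s guesses; an opaque formula over the same inputs changes nothing about that.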

To be completely fair, not all risk based testing is this terrible, and in fact several companies are at work on more analytical approaches. These tend to work by analysing the defect rates of tests that have already run, and calculating from those which requirements are the most fragile and prone to breakage. Those requirements can then be tested more often, on the basis that they are the most likely to break. This is all well and good; the problem is that, in the minds of most people and most clients, risk refers to the most critical parts of the system, not the most fragile – and they are not necessarily the same thing. In fact, one hopes for the exact opposite: that the most critical parts are the least fragile. Rebranding this sort of testing as ‘defect based’ would do wonders to correct this error in communication and to distinguish it from the less analytical approaches described above.
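Again purely as an illustration – the requirement names and sample data below are invented, and no real product is being described – this ‘defect based’ approach boils down to something like the following:

```python
from collections import Counter

# Invented sample of historical runs: (requirement, passed?) pairs.
history = [
    ("checkout", False), ("checkout", False), ("checkout", True),
    ("search", True), ("search", True), ("search", True),
    ("profile", True), ("profile", False),
]

runs = Counter(req for req, _ in history)
failures = Counter(req for req, passed in history if not passed)

# 'Fragility' is just the observed failure rate: requirements that
# break most often get tested most often, regardless of criticality.
fragility = {req: failures[req] / runs[req] for req in runs}

for req, rate in sorted(fragility.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{req}: {rate:.0%} failure rate")
```

Note that nothing in this calculation knows, or cares, how important a requirement is to the business – which is precisely why calling it ‘risk based’ misleads.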

To conclude: when a product offers ‘risk based testing’, it is not really telling you anything. Some variance between offerings is of course to be expected, but the term encompasses two broad approaches that could hardly be more different. In most cases, ‘risk based testing’ barely means anything at all.