Testing and impact analysis

Supporting changes through a test automation framework is one thing, but it is not the whole story. Changes can have implications well beyond their obvious scope. It is all too easy to make what seems like an innocuous little change, only to find that the whole application breaks. The risk of this happening tends to be proportional to the complexity of the application you are changing: the more complex the application, the more likely it is that one small change becomes the straw that breaks the camel's back.

This is one good reason to adopt a style of application upgrade that focuses on incremental changes rather than major releases: fewer, smaller changes are less likely to disrupt an existing system. However, regardless of the approach taken, you would still like to know what impact any particular change will have on the rest of the application.

In principle, the knock-on effects of making a change should be captured and handled by the developers of the application in question but, in practice, this is often left to testers. How, though, do testers know what impact a particular change might have elsewhere? The simple answer is that they don't. It is more or less impossible to catch unintended consequences using manual testing methods, because you cannot see the linkages across the application. There are many documented cases of new system implementations that failed precisely because of unforeseen consequences. The best known are those that brought down company web sites for days or weeks, costing not just revenue but loss of prestige and, in some cases, fines.

Conversely, an automated test framework should be able to identify the implications of a change, provided that it has been used to capture the entire application along with all of its requirements. When a requirement changes, you should then be able to perform impact and dependency analyses to see how that change affects the rest of the system. If you are going to assess these manually, the results should ideally be available in graphical form (a dependency graph is the natural representation) as well as in tabular form, to suit different users' preferences. Better still, the software should itself identify all the relationships and dependencies altered by the change, and then generate (or retrieve from a library) all the relevant test cases, scripts and so on. That way, not just the direct effects of a change are tested but also its indirect effects.
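
By way of illustration, impact analysis of this kind amounts to a traversal of a dependency graph. The following minimal sketch in Python is not any particular vendor's tool, and the component and test names are invented; it simply shows the principle of selecting tests for both the direct and indirect effects of a change:

    from collections import defaultdict, deque

    # Map each component or requirement to the things that depend on it.
    # In a real framework this graph would be captured automatically.
    dependents = defaultdict(set)

    def add_dependency(upstream, downstream):
        """Record that 'downstream' depends on 'upstream'."""
        dependents[upstream].add(downstream)

    def impacted_by(changed):
        """Breadth-first traversal: everything directly or transitively
        affected by a change to 'changed'."""
        impacted, queue = set(), deque([changed])
        while queue:
            node = queue.popleft()
            for dep in dependents[node]:
                if dep not in impacted:
                    impacted.add(dep)
                    queue.append(dep)
        return impacted

    # Each test is mapped to the components it exercises (invented names).
    tests = {
        "test_checkout": {"pricing", "basket"},
        "test_login": {"auth"},
        "test_reporting": {"pricing", "reports"},
    }

    add_dependency("tax_rules", "pricing")
    add_dependency("pricing", "basket")
    add_dependency("pricing", "reports")

    # A change to the tax rules reaches pricing directly, and basket and
    # reports indirectly, so two of the three tests are selected.
    affected = impacted_by("tax_rules")   # {'pricing', 'basket', 'reports'}
    to_run = [name for name, touches in tests.items() if touches & affected]

In practice the dependency graph and the mapping from tests to components would be derived from the captured requirements rather than declared by hand, but the logic is the same: traverse outwards from the change and run every test that touches anything you reach.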

Of course, there is a coverage issue here. Typically, not everything gets tested, because of the time and manpower that testing, especially manual testing, requires. Automation offers the promise of exhaustive testing: if you test everything, you know that everything works; test less than everything and you don't.