Responsive automation

Content Copyright © 2015 Bloor. All Rights Reserved.

All testing environments have to be able to react to changing user demands, and change requests are typically both frequent and never-ending. The issue is how to react to these requests in a timely and efficient manner. The short answer is that you need to reduce manual testing and increase automation, and do so in a way that allows you to be more responsive to change. However, this is easy to say and much more difficult to realise in practice. The question is: how can automation enable responsiveness?

To turn this around: what are you actually looking to achieve? From a testing perspective on application change requests, what you would like in an ideal world is automated derivation of all the test cases you need to ensure adequate coverage, generation of the relevant test scripts, and automated provisioning of appropriate data to run against those tests. In fact, if you really want to be idealistic, you would like this to be a one-click process. And this isn’t entirely blue-sky thinking: it is not difficult to imagine artificial intelligence and machine learning capabilities being built into test automation frameworks that start to move testing in this direction.
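As a sketch of what that one-click ideal might look like, consider the Python outline below. Every class and function name here is hypothetical, standing in for capabilities a future framework might provide, and the clever derivation and generation steps are reduced to stubs.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the one-click ideal; no real framework API is
# implied, and the derivation/generation steps are reduced to stubs.

@dataclass
class ChangeRequest:
    feature: str
    description: str

@dataclass
class TestCase:
    name: str
    steps: List[str] = field(default_factory=list)

def derive_test_cases(change: ChangeRequest) -> List[TestCase]:
    # In the ideal world this step would be model- or AI-driven.
    return [TestCase(f"{change.feature}: happy path"),
            TestCase(f"{change.feature}: boundary values")]

def one_click_regression(change: ChangeRequest) -> None:
    """Derive the cases, generate the scripts, provision the data and
    run the lot from a single call."""
    cases = derive_test_cases(change)
    scripts = {c.name: f"generated script for {c.name}" for c in cases}
    data = {c.name: f"provisioned data for {c.name}" for c in cases}
    for case in cases:
        print(f"Running {scripts[case.name]!r} against {data[case.name]!r}")

one_click_regression(ChangeRequest("login", "add two-factor authentication"))
```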

However, we are not there yet and, in the meantime, at least some degree of manual intervention is going to be required. The question, therefore, is how to minimise this requirement. The first part of any answer to this conundrum must be that requests for change, and the details thereof, are captured in some sort of formal manner. There are actually two (perhaps three) considerations here. Firstly, the definition of the change requirements should be directly usable at the start of the automation process. Secondly, the process of capturing these requirements needs to be understandable not just to developers and testers, but also to the business users who are commissioning the changes. If this is not the case then there is too great a risk that what the developers create will differ from what the users want. Thirdly, and preferably, this whole process should be easy to use and should not require detailed training.
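To make that capture requirement concrete, here is one minimal way it might look: a given/when/then structure of the kind popularised by behaviour-driven development, which is readable by business users and directly consumable by tooling. The class and field names are assumptions for this sketch, not any particular product’s schema.

```python
from dataclasses import dataclass

# A minimal sketch of a formally captured change request, using the
# given/when/then style popularised by behaviour-driven development.
# The class and field names are assumptions, not a product's schema.

@dataclass(frozen=True)
class ChangeRequirement:
    feature: str
    given: str   # the starting state, in business language
    when: str    # the action the user takes
    then: str    # the outcome the business expects

req = ChangeRequirement(
    feature="Password reset",
    given="a registered user who has forgotten their password",
    when="they request a reset link by email",
    then="a single-use link is sent that expires after 24 hours",
)
```

Because the same record is legible to a business user and parseable by software, it can serve as the starting point of the automation process.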

The key is the first point: changes are formally captured. Software should then identify what test cases are required to validate a change made to an application, search the existing library of test cases to see whether suitable test cases already exist and, if not, generate new test cases to be stored in the library for future use. Notice that this implies some sort of test case management software. If suitable test cases already exist then they should have test scripts associated with them, along with profiles of the data required to run those tests and links to where that data resides. If they do not, then you want the software to generate the relevant test scripts and data profiles at the same time as it generates the test cases.
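That search-then-generate behaviour might be sketched as follows. All of the names are hypothetical, and the generation steps are stubs standing in for whatever the test case management software actually does.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Sketch of the search-then-generate behaviour described above. All
# names are hypothetical; in practice the test case management
# software would own this logic.

@dataclass
class TestCase:
    name: str
    script: str          # the executable test script
    data_profile: str    # what data the test needs, and where it resides

class TestCaseLibrary:
    def __init__(self) -> None:
        self._cases: Dict[str, TestCase] = {}

    def find_or_generate(self, name: str) -> TestCase:
        """Reuse a suitable existing case; otherwise generate the case,
        its script and its data profile together and store them for
        future use."""
        existing = self._cases.get(name)
        if existing is not None:
            return existing
        new_case = TestCase(
            name=name,
            script=f"generated script for {name}",
            data_profile=f"generated data profile for {name}",
        )
        self._cases[name] = new_case   # kept in the library for reuse
        return new_case

library = TestCaseLibrary()
case = library.find_or_generate("password reset: expired link")
```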

Put all that together and you genuinely have the ability to be responsive to change.