Testing is all too frequently treated as a series of unrelated processes: you have some code to test, so you design test cases, write test scripts, define the profile of the data needed for your tests, identify where that data is held, and describe the expected results. If the data is not readily available, you may have to use a service virtualisation tool to capture and/or simulate appropriate data.
In any case, these steps are typically treated as part of a single process that is isolated from other such processes. Needless to say, test cases and their associated test components are typically stored for potential reuse, but how much reuse really goes on? This has been a bugbear in development circles for decades: everyone recognises the theoretical benefits of reuse, but making it happen is another matter entirely. However, reuse is potentially easier to implement in testing than in development, because test cases can be generated directly from requirements, which is not generally true of application software.
The key to supporting reusability in a testing environment is software that identifies which test cases (along with their scripts, data and expected results) are relevant to the particular software being developed. Having identified them, the software should scan an existing library of test cases to determine whether a suitable test case already exists and, if not, create and store one for future use. In other words, reusability needs to be automated: simply creating a library of potentially reusable test components will not be sufficient, because human nature means that it will not be properly utilised. Worse, you end up with more and more test cases, which makes the identification of reusable components even more difficult, meaning less and less reuse. So, test case management needs to be automated.
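The find-or-create behaviour described above can be sketched in a few lines. This is a minimal illustration, not a real tool: the repository class, the fingerprinting-by-normalised-requirement approach and the `build_case` callback are all hypothetical, standing in for whatever matching logic a test case management product actually uses.

```python
import hashlib

class TestCaseRepository:
    """Hypothetical repository that reuses test cases keyed on a
    normalised fingerprint of the requirement they cover."""

    def __init__(self):
        self._cases = {}  # fingerprint -> stored test case record

    @staticmethod
    def fingerprint(requirement):
        # Normalise case and whitespace so trivially different wordings
        # of the same requirement map to the same key.
        normalised = " ".join(requirement.lower().split())
        return hashlib.sha256(normalised.encode()).hexdigest()

    def find_or_create(self, requirement, build_case):
        """Return (test_case, reused): only builds and stores a new
        case when no existing one matches the requirement."""
        key = self.fingerprint(requirement)
        if key in self._cases:
            return self._cases[key], True
        case = build_case(requirement)
        self._cases[key] = case
        return case, False

repo = TestCaseRepository()
build = lambda req: {"requirement": req, "script": "...", "expected": "..."}
_, reused_first = repo.find_or_create("Login must lock after 3 failures", build)
_, reused_second = repo.find_or_create("login must lock after 3  failures", build)
# reused_first is False (new case created); reused_second is True
# (reworded requirement matched the stored fingerprint).
```

The point of the sketch is the control flow: creation of a new test component only happens after the lookup fails, so reuse is the default rather than something the tester has to remember to do.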
However, it isn't simply a question of reusability for new test components; you also need to cater for the fact that there will typically be (tens of) thousands of existing assets. These will need to be scanned by the test case management software so that you can identify both duplicates and out-of-date test components that are no longer valid. Ideally, you would also identify where test cases are simply versions of an underlying, more fundamental test case. In any case, you need software to help you apply governance to your existing test assets. Running in stand-alone mode, you would then want the ability to compare any new test case with what already exists. In a truly automated environment, however, you would want the software that captures your requirements to look automatically for relevant test cases in your repository, generating new test components only if none are already available.
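One way to sketch the duplicate-and-version scan over existing assets is pairwise text similarity. The sample assets, the use of `difflib.SequenceMatcher` and the two thresholds below are illustrative assumptions only; a production tool would use its own, calibrated matching.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical existing assets: (id, test script text)
assets = [
    ("TC-001", "open login page; enter valid user; expect dashboard"),
    ("TC-002", "open login page; enter valid user; expect dashboard"),
    ("TC-003", "open login page; enter invalid user; expect error banner"),
    ("TC-004", "open settings page; toggle dark mode; expect theme change"),
]

def classify_pairs(assets, dup_threshold=0.999, version_threshold=0.75):
    """Flag exact/near duplicates, and pairs similar enough to be
    versions of a common underlying case. Thresholds are illustrative,
    not calibrated."""
    duplicates, versions = [], []
    for (id_a, text_a), (id_b, text_b) in combinations(assets, 2):
        ratio = SequenceMatcher(None, text_a, text_b).ratio()
        if ratio >= dup_threshold:
            duplicates.append((id_a, id_b))
        elif ratio >= version_threshold:
            versions.append((id_a, id_b))
    return duplicates, versions

dups, vers = classify_pairs(assets)
# TC-001/TC-002 are flagged as duplicates; the valid/invalid login
# cases are flagged as likely versions of one underlying test case.
```

Pairwise comparison is quadratic in the number of assets, which is exactly why the article argues this governance work needs software support: at tens of thousands of existing test cases, no manual review will find these overlaps.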
In practice, the total automation described is not available, but this is the direction in which the market is, and should be (in our opinion), moving.