Integrating test automation with a legacy environment - Even legacy testing can evolve with modern tools

Testing – or application assurance – what do I think about it? Well, that it would be a Good Thing, obviously, although I fear that it is often skimped, and that it should start as early as possible in the application lifecycle when fixing defects is cheaper (like, try testing the consistency and completeness of whatever form of “requirements” you are given).

This used to be difficult (there’s a perceived disjunction between early, thorough testing and Agile rapid delivery) and expensive (although not nearly as expensive as finding defects in production). At the same time, applications are getting more complex – no more simple transaction processing where nothing changes until a whole unit of work is successful and “commits” – and operate at greater scale, so if something dysfunctional only happens very rarely, it still probably happens every day (or every hour).

Testing resources have to be allocated to manage the risk of failure, not to eliminate it, and manual testing of any but the very smallest applications simply doesn’t cut it. In order to gain assurance, you must automate testing as much as possible and exploit AI (augmented intelligence – humans assisted by machine inference) and machine learning (to recognise the areas most likely to contain defects, say) wherever appropriate – see my “Accelerating Software Quality” book review.

The essence of one approach to the extreme automation of application testing is to build a model of the application and explore its behaviour at the user interface level. Using tools such as intelligent optical character recognition, test automation can recognise an interface screen and detect whether it has changed, which makes regression testing (for example) much easier. It also implies that any application can be tested – mainframe applications, embedded Windows applications, desktop applications, mobile apps and so on – because whatever the technology, it is accessed via a user interface. Even an ATM can be tested, if you have a robot to press its buttons as directed by the testing tools; and image recognition means that you can test more complex UIs that aren’t text-based.
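
To make the pixel-level idea concrete, here is a minimal sketch in Python, using pyautogui, OpenCV and pytesseract as deliberately generic stand-ins rather than Eggplant’s own scripting language or engine; the reference image file and the expected banner text are hypothetical.

```python
# Illustrative only: a generic pixel-and-OCR check, not Eggplant's mechanism.
# Assumes pyautogui, OpenCV and pytesseract are installed and a reference
# image "login_button.png" exists.
import cv2
import numpy as np
import pyautogui
import pytesseract

def find_on_screen(template_path, threshold=0.9):
    """Locate a reference image on the current screen by pixel matching."""
    screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2BGR)
    template = cv2.imread(template_path)
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    if score < threshold:
        return None  # the screen has changed, or the element has disappeared
    h, w = template.shape[:2]
    return (top_left[0] + w // 2, top_left[1] + h // 2)  # centre of the match

def read_text(region):
    """OCR a screen region (left, top, width, height) to verify displayed text."""
    snapshot = pyautogui.screenshot(region=region)
    return pytesseract.image_to_string(snapshot).strip()

# A regression step: click the login button wherever it now appears, then
# confirm the welcome banner, without touching the application's code.
button = find_on_screen("login_button.png")
assert button is not None, "Login button not found - screen layout has changed"
pyautogui.click(*button)
assert "Welcome" in read_text((0, 0, 800, 100))
```

Nothing in this sketch needs access to the application’s code or object model; it drives and checks the screen exactly as a user sees it, which is what makes the approach technology-agnostic.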

Eggplant’s robust OCR and Intelligent Image Recognition approach, which has been developed for over 20 years, looks at the user interface (reading pixels on the screen for images and using reliable OCR technology for text), not the underlying code. This eliminates the traditional issues that many tools have in capturing and recognising objects, and it makes maintenance easier: if the underlying code changes, the test still runs; and if the text or image moves, the test still proceeds as long as the pixels match and the text can be read.

There is a good overview of how this works in practice with Keysight’s Eggplant Test [here]. I have been talking with Elliott Veale (Senior Account Executive, EMEA) and Ethan Chung (leader for Solution Engineers across EMEA and APAC) from Keysight Technologies Inc. about the implications of this approach for organisations with a significant legacy application investment.

Around 95% of users will end up testing at the User Interface level, but other approaches are available. Chung says:

“If you can connect to it, we can test it. We can test on APIs by API calling, getting the data back, reading it intelligently. We have equivalent functionality within databases. Similarly, we give eggplant users an open API system. We can integrate with a lot of other testing functionalities. So there’s not really been a tool that I’ve seen yet that we’ve not been able to integrate with. So, with automation, you’re effectively converting manual testers into testing managers, right?”
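
Chung’s “API calling, getting the data back, reading it intelligently” maps onto something most teams will already recognise. As a minimal, hypothetical sketch in Python (using the requests library, with an invented endpoint and field names, not Eggplant’s own API):

```python
# A hypothetical API-level check: call an endpoint, read the data back and
# assert on it. The URL, payload and field names are invented for illustration.
import requests

def test_create_order():
    response = requests.post(
        "https://example.internal/api/orders",
        json={"customer_id": 42, "items": [{"sku": "ABC-1", "qty": 2}]},
        timeout=10,
    )
    assert response.status_code == 201       # the order was created
    order = response.json()
    assert order["status"] == "PENDING"      # and starts in the right state
    assert order["items"][0]["qty"] == 2     # with the quantity we asked for
```

The pattern is the same below the user interface as above it: drive the system, get the data back, and assert on what comes out.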

Note the implication that you have more than one assurance tool. Static code analysis, for example, can get rid of a whole class of defects very cheaply even before you start “testing” as such. Why would you not use static code analysis routinely, on all code?
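
As an illustration of how cheaply a whole class of defects can be caught before any test runs, here is a small Python sketch using the standard-library ast module; the two defect classes it flags (mutable default arguments and bare except clauses) are just examples, and in practice you would run an off-the-shelf analyser rather than write your own.

```python
import ast

# Deliberately defective sample code to analyse.
SOURCE = '''
def save(record, cache={}):    # mutable default argument
    try:
        cache[record.id] = record
    except:                    # bare except hides real errors
        pass
'''

class DefectFinder(ast.NodeVisitor):
    """Flags two cheap-to-find defect classes without executing the code."""
    def __init__(self):
        self.findings = []

    def visit_FunctionDef(self, node):
        for default in node.args.defaults:
            if isinstance(default, (ast.Dict, ast.List, ast.Set)):
                self.findings.append(
                    f"line {default.lineno}: mutable default argument in '{node.name}'")
        self.generic_visit(node)

    def visit_ExceptHandler(self, node):
        if node.type is None:
            self.findings.append(f"line {node.lineno}: bare 'except:' clause")
        self.generic_visit(node)

finder = DefectFinder()
finder.visit(ast.parse(SOURCE))
for finding in finder.findings:
    print(finding)
```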

Veale agrees with Chung and points out that “because we’re not looking at what’s under the hood, our coding language [test automation code is basically a specialised kind of software code, with similar code management issues] is very, very simple – English-like. We want to enable even junior or inexperienced QA people to not only build code but also maintain their own code, so it frees up senior QA people to do more exciting things like CI/CD integrations, non-functional testing, and starting automation where they couldn’t automate before”.

Veale emphasises that Eggplant is particularly good at supporting end-to-end testing, so the same tool can test against both a web app and a 20-year-old legacy system. He says: “No other tool can do this; the team usually needs to revert to manual testing or find another automation tool which can automate the legacy system (combining the two tools, with tests often written in different languages and frameworks, makes this very difficult and time-consuming).”

One small concern I always have is that, however good the toolset, the maturity of its users matters. Application assurance encompasses a huge range of issues, from resilience and security to the removal of coding bugs. It is important that those responsible consider business outcomes for all stakeholders, with a “continual improvement” mindset, and don’t just test what is easy and quick to automate. That said, I think that Eggplant has rich capabilities, is easy to use and integrates with a wide range of tools – but it can’t (despite the claims of AI) do your project management thinking for you. Eggplant may integrate well enough with legacy tools and even legacy approaches, but the mindset (culture) of a legacy tester, largely using manual methods, may be very different to that of a modern model-based tester. Even if model-based testing tools are easy to use, an organisation with a significant legacy application portfolio may need to put resources (possibly mentoring) into changing the culture of legacy testers and promulgating the new approach.

I think that the takeaways from my chat with Veale and Chung were that, potentially, anything and everything (in the application assurance domain, anyway) can be automated, and that you can future-proof application assurance with a model-based approach (you can extend the scope of assurance simply by adding to the model). With “software defined” assurance, you can offer your staff a secure career path between assurance and development; and even if development is largely replaced by the orchestration of pre-built components (entirely possible, in my opinion), there will always be a place for automated assurance of business outcomes.
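
A minimal sketch, in Python, of what “adding to the model” might look like, with entirely hypothetical screens and actions: the application is described as a graph of states and user actions, and extending the scope of assurance means adding nodes and edges rather than writing and maintaining more scripted test cases.

```python
# A toy application model: each state (screen) maps user actions to the state
# they lead to. Extending coverage means adding entries, not writing scripts.
MODEL = {
    "Login":     {"submit credentials": "Dashboard"},
    "Dashboard": {"open orders": "Orders", "log out": "Login"},
    "Orders":    {"create order": "OrderForm", "back": "Dashboard"},
    "OrderForm": {"save": "Orders", "cancel": "Orders"},
}

def journeys(state, depth, path=()):
    """Enumerate the user journeys of a given length implied by the model."""
    if depth == 0:
        yield path
        return
    for action, next_state in MODEL.get(state, {}).items():
        yield from journeys(next_state, depth - 1, path + (action,))

# Every three-step journey a user could take from the login screen: the raw
# material for generated tests, rather than hand-written cases.
for route in journeys("Login", 3):
    print(" -> ".join(route))
```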

At a more practical level, it’s all about “doing more with less”, as usual these days, but without compromising assurance quality. You can even automate exploratory testing, perhaps based on existing regression test packs, using AI and ML running 24×7. This is, Veale says, “because the model isn’t building a test case, just showing the possibilities that a user can take across an application or multiple systems”.
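
Building on the same hypothetical model, automated exploratory testing can be sketched as a long-running walk that prefers the actions it has exercised least; this is a deliberately crude stand-in for the AI and ML a real tool would apply.

```python
# Exploratory walk over the toy model from the previous sketch: keep taking
# the least-visited action from the current screen, around the clock.
import random
from collections import Counter

MODEL = {
    "Login":     {"submit credentials": "Dashboard"},
    "Dashboard": {"open orders": "Orders", "log out": "Login"},
    "Orders":    {"create order": "OrderForm", "back": "Dashboard"},
    "OrderForm": {"save": "Orders", "cancel": "Orders"},
}

def explore(start, steps):
    """Walk the model, always choosing one of the least-visited actions."""
    visits = Counter()
    state = start
    for _ in range(steps):
        actions = MODEL[state]
        least = min(visits[(state, a)] for a in actions)
        action = random.choice(
            [a for a in actions if visits[(state, a)] == least])
        visits[(state, action)] += 1
        # In a real run, a UI or API driver would perform the action here and
        # check that the application ends up in the expected state.
        state = actions[action]
    return visits

print(explore("Login", 50))
```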

I think the message is that there is still a place for people skills; just don’t waste them on anything that can easily be automated. And there is no need for “rip and replace” if your new tools have APIs that support integration with existing tools. One big issue with last-generation testing is test case maintenance, which is slow, expensive and error-prone. So, as and when the test cases for a particular application need serious maintenance, you should consider diverting those maintenance resources into migration to a modern black-box, model-driven assurance approach.