
Service Virtualisation


When you are developing or testing software that will exist within a service-oriented architecture (SOA), you may have issues with the stability or availability of the services that you need to consume. This may be because those services are themselves still under development or incomplete, because they are controlled by third parties, because they are live systems operating under performance constraints, or because they are only available at inconvenient times of day.

Service virtualisation is a technique for virtualising these services so that you can develop and test without regard to the stability or availability of the services you depend on. In effect, service virtualisation software allows you to “pretend” that these services are available and stable. Various techniques can be used for this purpose: record and playback, creating dummy services, creating stubbed (canned) responses and so on.
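To make the "dummy service" idea concrete, here is a minimal sketch of a stub that impersonates a real HTTP dependency by returning a canned response. The endpoint path and payload are illustrative assumptions, not taken from any particular product.

```python
import http.server
import json
import threading
import urllib.request

# Canned responses the stub will replay; the path and fields are assumptions
# standing in for whatever the application under test actually calls.
CANNED = {"/customers/42": {"id": 42, "name": "Test Customer", "status": "active"}}

class StubHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        known = self.path in CANNED
        body = json.dumps(CANNED.get(self.path, {"error": "unknown"})).encode()
        self.send_response(200 if known else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

# Bind to an ephemeral port and serve the stub in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The application under test would now point at this URL instead of the
# real, possibly unavailable, service.
url = f"http://127.0.0.1:{server.server_port}/customers/42"
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
```

Commercial tools do far more than this (protocol breadth, data-driven responses, management tooling), but the underlying mechanism is the same: something listens where the real service would and answers plausibly.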

The process of using a service virtualisation product involves creating and deploying virtual assets that simulate the behaviour of real components. You only need to create virtual assets for those things that are otherwise difficult or impossible to access and that are required in order to exercise the application under test or development. These assets may be created in a variety of ways: by recording live communications, from logs of historical communications, by analysing service interface specifications such as WSDL, or by manually defining the relevant behaviour. Once created, assets can be further configured according to requirements.
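One of the creation routes above, deriving an asset from recorded communications, can be sketched as turning a traffic log into a lookup table that a stub replays. The log format and order data here are hypothetical.

```python
import json

# Hypothetical recording of live traffic: each line pairs an observed
# request with the response the real service returned at the time.
recorded_log = """\
GET /orders/1\t{"id": 1, "total": 99.5}
GET /orders/2\t{"id": 2, "total": 12.0}
"""

def build_asset(log_text):
    """Turn recorded request/response pairs into a replayable lookup table."""
    asset = {}
    for line in log_text.strip().splitlines():
        request, response = line.split("\t", 1)
        asset[request] = json.loads(response)
    return asset

asset = build_asset(recorded_log)
# A stub consulting this table now "replays" the recorded behaviour
# without the real service being present.
replayed = asset["GET /orders/2"]
```

The "further configured" step in practice means editing such a table: adding cases the recording missed, parameterising values, or attaching error conditions.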

These virtual assets work by listening for requests and returning an appropriate response with, and this is important, realistic performance. The latter point matters because service virtualisation supports both performance testing (where the asset reflects typical real-world performance) and stress testing (simulating peak loads or error conditions), as well as other types of testing. For a relational database, for example, the asset would typically listen for relevant SQL statements and return the relevant rows (or columns, in some data warehousing environments), while for a web service it might listen for XML on a message queue such as MQ Series or JMS and return another XML message.
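The performance dimension can be illustrated with a toy database-style asset that delays each reply to match a configured latency profile. The profiles, the query and the canned row are all assumptions for the sake of the sketch.

```python
import time

# Illustrative latency profiles (seconds): one mimicking normal production
# response times, one mimicking the service under heavy load.
LATENCY_PROFILES = {
    "typical": 0.05,
    "stressed": 0.50,
}

def virtual_lookup(sql, profile="typical"):
    """Pretend to be a database: wait a realistic interval, then answer."""
    time.sleep(LATENCY_PROFILES[profile])
    # Canned row for the one query this sketch recognises (an assumption).
    if sql == "SELECT name FROM customers WHERE id = 42":
        return ("Test Customer",)
    return None

start = time.monotonic()
row = virtual_lookup("SELECT name FROM customers WHERE id = 42")
elapsed = time.monotonic() - start
```

Switching the profile from "typical" to "stressed" is, in miniature, how an asset supports both performance testing and stress testing against the same functional behaviour.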

Typical environments in which service virtualisation is recommended include development of distributed and composite applications, complex and/or large scale enterprise integration projects, developments with very distributed dependencies and constraints, performance and functional testing, virtualisation between agile or parallel development groups, and environments (for example, mainframes) where testing resources are constrained.

In other words, service virtualisation is an enabling technology that supports development and testing. It will therefore be of interest primarily to CIOs, developers, testers and quality assurance staff. However, the use of service virtualisation should mean that better-quality applications are produced more quickly, which is of importance to the whole organisation.

Service virtualisation has been around for approximately a decade and there is a significant number of enterprise users (several hundred across the various suppliers). However, it is only now starting to get the wider recognition that it deserves. While the number of vendors in this market is currently small (five that we know of), we can expect more companies to enter this sector as it becomes better known and more popular.

However, we would not say that service virtualisation is mature. While all the suppliers in this market offer, or are striving to offer, the same sorts of functional capabilities, there are distinct differences in the platforms and environments that the various vendors support. For example, one product only runs on Windows while another does not support Windows at all; similarly, different approaches apply to mainframes, legacy databases (for example, Adabas) and Mac OS support. As the technology becomes more popular we would expect vendors to provide a growing breadth of support.

One trend that is already starting to appear is integration with test data management and we also expect to see greater integration with other relevant virtualisation technologies such as defect virtualisation.

The most notable changes of the last couple of years have been the acquisition of ITKO by CA; the entry into the market of Grid-Tools, which has extended its test data management capabilities by introducing a service virtualisation offering; and the acquisition of Greenhat by IBM. We expect the facilities provided by Greenhat to be integrated with IBM’s existing test data management capabilities. This raises the possibility that other test data management vendors, such as Informatica, might acquire service virtualisation tools to augment their own test data management environments.

Solutions

  • Broadcom
  • IBM
  • Parasoft
  • SmartBear
  • Tricentis

These organisations are also known to offer solutions:

  • HP

Research


Application Quality Assurance at Broadcom

In this Bloor InContext report, we discuss and evaluate Broadcom’s solution for application quality assurance.