Building by simulating

Also posted on: The Norfolk Punt

One interesting announcement at Innovate 2013 was a partnership with National Instruments (NI), integrating Rational tools such as DOORS, RTC, RQM and Rhapsody with NI testing tools such as LabVIEW and TestStand into a joint “Vision for Agile Product Delivery”.

On its web site, NI states its vision as being “To do for test and measurement what the spreadsheet did for financial analysis – and, to do for embedded systems what the PC did for the desktop” (Dr. James Truchard, President, CEO and Cofounder of NI).

NI provides engineers and scientists with an integrated software and hardware platform that accelerates the design and implementation of any system that needs measurement and control. Part of that offering is simulation-based testing of embedded automotive systems.

However, after talking with Chris Washington (Application Segment Manager at NI) on the NI stand, I think this joint vision could deliver more than just what Dr. Truchard promises for embedded systems. Potentially, the complexity of conventional, IT-oriented business systems composed of thousands of interconnected intelligent ‘things’ could be beyond the capabilities of conventional development methods – and beyond those of conventional testing approaches, which already can’t test every single path through anything much more complicated than a couple of dozen lines of code and a few conditional loops.

Suppose you take the model behind the IBM and NI joint vision and replace the simulation of an embedded automotive system with a simulation of a business service delivered from the ‘Internet of Things’. We already have service virtualisation and application virtualisation tools to build on; we could use Monte Carlo simulation to sample the domain of all possible inputs, and standard statistical analysis to gain meaningful insight into the probable behaviour of a complex system while it is still being developed, before it reaches production.
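To make that a little more concrete, here is a rough sketch in Python of what I have in mind – driving a stand-in for a virtualised business service with Monte Carlo-sampled inputs and summarising its probable behaviour statistically. The service, the input ranges and the numbers are entirely hypothetical illustrations, not anything taken from the IBM/NI tooling:

```python
# A minimal sketch (hypothetical names and numbers throughout): sample the
# input domain of a simulated/virtualised service at random and summarise
# its probable behaviour before it ever reaches production.
import random
import statistics

def simulated_service(order_size, device_latency_ms):
    """Stand-in for a virtualised business service composed of many 'things'.
    Returns an end-to-end response time in milliseconds."""
    per_item_cost = 2.0 + random.gauss(0, 0.3)        # per-item processing jitter
    return order_size * per_item_cost + device_latency_ms

def monte_carlo(runs=10_000):
    """Sample the input domain at random and record each simulated outcome."""
    results = []
    for _ in range(runs):
        order_size = random.randint(1, 500)            # plausible order sizes
        device_latency_ms = random.expovariate(1 / 40) # long-tailed network latency
        results.append(simulated_service(order_size, device_latency_ms))
    return results

if __name__ == "__main__":
    outcomes = sorted(monte_carlo())
    print(f"mean response   : {statistics.mean(outcomes):8.1f} ms")
    print(f"95th percentile : {outcomes[int(0.95 * len(outcomes))]:8.1f} ms")
    print(f"worst observed  : {outcomes[-1]:8.1f} ms")
```

The point is not the toy arithmetic but the shape of the exercise: sample, simulate, then reason statistically about a system that is too complex to test path-by-path.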

This is ‘risk-based testing’ with a vengeance, but it could cope with complexity incrementally greater than that of anything we build today – and it embodies reuse, at the vision/model level, of an idea from one sphere in a quite different one, which is intellectually satisfying.

Coming at this approach from a different, more conventional systems-engineering point of view, you could see it as development at the business-requirements level. High-level requirements could be refined in detail and embedded in simulations (using OMG SysML and a constraint language), using the approach described above, until the simulations satisfy all the requirements; the simulations could then be transformed into executable systems.
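As a purely illustrative sketch – again in Python, with hypothetical requirements and a toy ‘simulation’, not how SysML tooling actually works internally – the idea is that requirements travel with executable constraints, and the design is refined until every constraint is satisfied by the simulation:

```python
# A minimal, purely illustrative sketch (hypothetical names throughout):
# high-level requirements carried as executable constraints, in the spirit
# of SysML parametric constraints, checked against simulation output.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Requirement:
    text: str                                   # the requirement as the business states it
    check: Callable[[Dict[str, float]], bool]   # its formalisation as a constraint

# Hypothetical requirements for an imagined order-handling service.
requirements = [
    Requirement("Orders are acknowledged within 2 seconds",
                lambda run: run["ack_time_ms"] <= 2000),
    Requirement("No more than 0.1% of messages are lost",
                lambda run: run["loss_rate"] <= 0.001),
]

def simulate(design):
    """Stand-in for the real simulation of the proposed system."""
    return {"ack_time_ms": 3000 / design["workers"], "loss_rate": 0.0005}

def refine_until_satisfied(design):
    """Crude refinement loop: adjust the design until every requirement holds."""
    while True:
        run = simulate(design)
        if all(r.check(run) for r in requirements):
            return design
        design["workers"] += 1                  # naive refinement step, for illustration only

print(refine_until_satisfied({"workers": 1}))   # -> {'workers': 2}
```

Only once the simulated design satisfies every stated requirement would it be worth transforming it into an executable system.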

This probably seems a bit frightening to conventional developers but, as I say here, “it is not dissimilar to the way in which modern cars, built around embedded microprocessor-controlled behaviours, are designed” – and there appears to be nothing too frightening about the way all the safety-critical embedded systems in modern cars are built and tested today…

In reality, I think I’m just thinking out of the IT systems-development box a bit, rather than suggesting anything too radical.