A trusted AI platform

Content Copyright © 2023 Bloor. All Rights Reserved.
Also posted on: Bloor blogs


AI is the coming thing, apparently, but there is still argument over how intelligent and trustworthy “Artificial Intelligence” actually is. I prefer the idea of Augmented Intelligence: a directing human intelligence assisted by machine reasoning. This is evolving towards real intelligence, but who knows when it will get there? I guess the breakthrough point might be when an AI switches itself on and does business for itself, as an employee. Or, more worryingly, when it lies to its humans in pursuit of its own interests. But that is a long way off.

For now, we need AI to be explainable and transparent, so we are not blindly trusting what a “black box” tells us; reliable (robust), so it gives repeatable results; and fair, so it is free from any biases from its human sponsors.

In practice, however, most AI falls far short of this ideal. Typically, for example, machine learning can’t distinguish between correlation and causation, so humans may find it hard to trust its suggestions for future actions when they are trying to change predicted outcomes.
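To make that distinction concrete, here is a minimal sketch in plain NumPy (not any vendor’s product) of how correlation can mislead: a hidden confounder z drives both x and y, so x and y correlate strongly even though changing x has no effect on y. All variable names and coefficients here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden common cause: z drives both x and y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)

# Observed correlation between x and y is very strong...
print(np.corrcoef(x, y)[0, 1])        # close to 1

# ...but "intervening" on x (setting it ourselves, which breaks
# its link to z) leaves y completely unchanged, because y never
# depended on x in the first place.
x_do = rng.normal(size=n)             # do(x): x no longer follows z
y_after = z + 0.1 * rng.normal(size=n)
print(np.corrcoef(x_do, y_after)[0, 1])   # close to 0
```

A purely correlational model trained on (x, y) would confidently recommend changing x to move y; a causal model, which represents z as the common driver, would not.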

Addressing these AI issues implies that we are using something a bit more than just an AI app: we need a supportive AI platform. And I have been talking to the vendors of such a platform: causaLens. A lot of AI insights are still based on discovering correlations, verified by examining past behaviours; but some companies are beginning to look at cause and effect. Aviva, for example:

“Causal AI plays an important role in our investment analysis. It empowers our strategists and portfolio managers to generate alpha by identifying new causal relationships in economic, financial and alternative data, with sophisticated, adaptive and explainable models that don’t suffer from overfitting”

Michael Grady, Head of Investment Strategy & Chief Economist, Aviva Investors.

The analysis of cause and effect in Causal AI helps people to understand the underlying drivers of a system. This makes Causal AI models both more robust as the environment changes and better able to cope with situations they haven’t seen before. Moreover, Causal AI models are inherently more transparent and trustworthy, as they can be used to explain the reasons behind a decision or prediction.
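Why are causal models easier to explain? Because each variable has explicitly named parent causes, a prediction can be decomposed into readable contributions. The hand-rolled toy below (nothing to do with causaLens’s actual product; the variable names and coefficients are invented for illustration) shows the idea:

```python
def predict_sales(price, ad_spend, season_index):
    """Toy structural model: each input is an explicit causal driver,
    so the prediction decomposes into named contributions."""
    contributions = {
        "price":   -2.0 * price,         # assumed causal coefficients,
        "ad_spend": 0.5 * ad_spend,      # chosen purely for illustration
        "season":   3.0 * season_index,
    }
    baseline = 100.0
    return baseline + sum(contributions.values()), contributions

sales, why = predict_sales(price=10.0, ad_spend=40.0, season_index=1.0)
print(sales)  # 103.0
print(why)    # each driver's contribution is individually readable
```

A black-box model gives you only the number 103.0; the structural form also tells you that price pushed sales down by 20 while advertising pushed them up by 20, which is exactly the kind of explanation that builds trust.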

Cause and effect in AI is a bigger topic than I have space to deal with in this “heads up”. In essence, however, it is all about trust – Trustworthy AI – and Causal AI is something all mutable businesses (that will need AI to help manage their constant evolution) should be keeping a sharp eye on.