Amazon’s Alexa magic – are we taking consumer-oriented magic too much for granted?

Also posted on: The Norfolk Punt

The Register has just highlighted a nasty little Alexa problem – perhaps the horror stories aren’t as far-fetched as we’d hoped: see here. Amazon seems to think that multiple apologies – “He apologized like 15 times in a matter of 30 minutes” – are sufficient “remediation”. Of course, it is going to fix the bug – but it isn’t clear how hard that might be in practice, and even less clear how it will convince anyone that this bug, and related bugs, really are fixed.

Standards may be lower in consumer electronics – but why? These days, work life and private life are intertwined, and Alexa could easily overhear a confidential work-related conversation.

Increasingly, consumers seem to treat technology as “magic”. It isn’t magic; the points worth making are, I think, these:

1/ Even sexy “magic” technology needs to be built right, not thrown together with the underlying thought that “if it goes wrong, we’ll just apologise”.

2/ And it needs to be validated, which is increasingly difficult:

  • With sufficient volumes, the unlikely happens regularly (the first sketch after this list illustrates the arithmetic);
  • With AI involved, cause and effect are often non-obvious;
  • Systems are usually asynchronous (there is no discrete transaction to commit and back out – erroneous information may be used before it can be corrected; the second sketch after this list illustrates the consequence);
  • The only feasible approach is “risk-based” validation – which has consequences. Things will go wrong, and remediation processes to address the consequences of failure are therefore essential. These must be designed, validated – tested – and assessed for scope of impact just as carefully as the base system. And the validation of the remediation processes will itself be risk-based…
  • How often do developers think about remediation? And, if there are remediation processes designed up front, how much time is “wasted” on validating them? Remember that remediation extends beyond the technology domain into the human domain…
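
To make the “sufficient volumes” point concrete, here is a minimal back-of-the-envelope sketch in Python. The per-interaction failure rate and the daily volume are illustrative assumptions of mine, not Amazon’s figures; the arithmetic is the point.

```python
# Back-of-the-envelope sketch (illustrative figures only, not Amazon's numbers):
# even a one-in-a-million per-interaction bug becomes a near-certainty at
# consumer scale.

def p_at_least_one_failure(per_interaction_rate: float, interactions: int) -> float:
    """Probability of at least one failure across independent interactions."""
    return 1.0 - (1.0 - per_interaction_rate) ** interactions

if __name__ == "__main__":
    rate = 1e-6                # assumed chance of the bug firing on any one request
    daily_volume = 50_000_000  # assumed daily voice interactions, for illustration
    p = p_at_least_one_failure(rate, daily_volume)
    print(f"P(at least one failure per day) ~ {p:.6f}")        # ~1.0 at this scale
    print(f"Expected failures per day ~ {rate * daily_volume:.0f}")  # ~50
```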

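And a hypothetical sketch of the asynchronous point. The event names and remediation step are invented for illustration, but the shape of the problem is general: once an erroneous event has been published and consumed, there is nothing to roll back – only a designed, validated compensating action can limit the damage.

```python
# Hypothetical sketch of why asynchronous pipelines need designed remediation:
# there is no transaction to roll back once an event has been acted on; the
# best available option is a compensating ("remediation") action afterwards.

import queue

events = queue.Queue()

def publish(event: dict) -> None:
    events.put(event)  # fire-and-forget: no commit/rollback boundary exists

def remediate(event: dict) -> None:
    # The compensating step has to be designed and validated up front;
    # it cannot "un-send" the original, only limit and repair the damage.
    print(f"remediation: notifying affected user about event {event['id']}, "
          f"purging stored audio, logging for audit")

publish({"id": 42, "type": "voice_clip_sent", "recipient": "contact@example.com"})

# The error is detected only after the event has already been consumed...
bad_event = events.get()
remediate(bad_event)   # ...so remediation, not rollback, is the only option.
```
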
The bottom line? Technology isn’t “magic”; it needs real systems engineering underlying it, even if that engineering relies on statistical risk analyses. And when it does go wrong, there should be designed, validated remediation processes to deal with the failure. The trouble with treating technology as magic is that confidence in magic can evaporate as soon as something breaks.