Generative AI – is it really intelligent?


“Artificial Intelligence” has been around for years, since the term was coined in 1956 (according to Dr James Sumner of Manchester University, speaking on the BBC’s Tech and AI podcast). And I think (for reasons that should become clear) that “Augmented Intelligence” – machine-assisted human intelligence – would have been a better term.

It has suddenly become mainstream with the development of Generative AI – AI that can generate the sort of art or conversation that humans recognise as such. A prime example is OpenAI’s ChatGPT, but every social media or search platform is rapidly implementing something similar, and I even hear of people expecting Generative AIs to write all their computer code for them, any day now.

Nevertheless, critical thinking still rules, or should do. I have heard ChatGPT described as an example of Generative AM – Generative Artificial Mansplaining:

  • It is verbose and plausible but can be content-free;
  • It can contain truths and errors and you can’t tell which is which;
  • It can be driven by all sorts of misogynistic, sexist, racist and other biases without those involved realising it;
  • It takes no account of who it is talking to and what knowledge/experience they might have already;
  • References and citations may just be made up;
  • It can waste a lot of time for all concerned – “all sound and fury, signifying nothing”.

Remember that training the AI is largely a human activity – if the human training the AI to recognise, say, cats has a blind spot, if (perhaps) they regard a cat without a tail as not a real cat, then the AI will not recognise Manx cats. The AI can reflect the misconceptions and (possibly unconscious) biases of the people training it. This can be very hard to correct, as people have found out when training AIs to take part in social media interactions – however well-intentioned the initial training, the AI tends to learn the sexist and misogynistic tropes (if any) of the real people in the interactions it meets as it gains experience of the real world.
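To make that concrete, here is a deliberately tiny sketch of my own (the features, the labels and the “model” are hypothetical illustrations, not anything from the podcast or a real training pipeline): the labeller’s blind spot about tail-less cats goes straight into the training labels, and the model – which only ever sees those labels – dutifully reproduces it.

```python
# Toy illustration of a labelling blind spot propagating into a model.
# The labeller believes a cat without a tail is "not a cat", so that is what
# the training labels say -- and the model, which only ever sees the labels,
# agrees, rejecting Manx cats.

def biased_labeller(animal):
    """Human-provided training label, with a blind spot about tail-less cats."""
    return "cat" if animal["miaows"] and animal["has_tail"] else "not cat"

training_set = [
    {"miaows": True,  "has_tail": True},    # an ordinary cat
    {"miaows": True,  "has_tail": False},   # a Manx cat, mislabelled "not cat"
    {"miaows": False, "has_tail": True},    # a dog
]
labelled = [(animal, biased_labeller(animal)) for animal in training_set]

def predict(animal):
    """A trivial 'model': recall the label given to the matching training example."""
    for features, label in labelled:
        if features == animal:
            return label
    return "unknown"

print(predict({"miaows": True, "has_tail": False}))   # "not cat" -- the Manx cat is rejected
```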

According to Dr Michael Pound, Associate Professor in Computer Vision at the University of Nottingham, speaking on the BBC’s Tech and AI podcast (BBC Radio 4 – Understand, Tech and AI, Tech and AI: What is AI?), Generative AI is simply, as I’d put it, predictive text on steroids. A Generative AI, such as ChatGPT, generates new conversations, one word at a time (so, if the first word is “the”, there is a 90% likelihood that the next word is “story”, based on having read all the conversations on the Internet, and so on). “I suppose that what is spectacular about this,” he says, “is that just by predicting one word at a time, you can get all this incredible text out; you’d think that by just predicting one word at a time it would soon degenerate into nonsense but it doesn’t seem to”. Which, as the podcast points out, is why ChatGPT can sometimes generate fairly plausible nonsense, because the AI has no concept of truth or usefulness or meaning.
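Dr Pound’s “predictive text on steroids” picture can be sketched in a few lines of code. The toy bigram model below is my own illustration – nothing like the neural networks behind ChatGPT, and trained on a dozen words rather than the Internet – but the generation loop has the same shape: pick the next word according to how often it followed the previous one in the training text, append it, and repeat.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, how often each possible next word follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def generate(counts: dict, first_word: str, length: int = 15) -> str:
    """Generate text one word at a time, sampling continuations by frequency."""
    word = first_word
    output = [word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:                      # dead end: never seen a continuation
            break
        candidates, frequencies = zip(*followers.items())
        word = random.choices(candidates, weights=frequencies)[0]
        output.append(word)
    return " ".join(output)

corpus = "the story begins where the story ends and the story never really ends"
model = train(corpus)
print(generate(model, "the"))   # e.g. "the story never really ends and the story begins ..."
```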

This immediately prompts two thoughts for me. First, there is probably a good PhD thesis for someone in nailing down just why the output of a Generative AI doesn’t rapidly degenerate into meaningless noise. Second, this is probably, in part, because the AI has access to vast amounts of real-world data and can still generate its responses in what a human perceives as “real time”, which probably implies that it is using vast resources and computing power, and thus heating up the planet – which raises some issues in itself.

Dr Pound goes on to say that what the Generative AI gives us tends to be what we expect, or want, to read, and that that is not necessarily true. As the podcast says, it’s a really well-versed sycophant – which means, I think, that we must be particularly alert for confirmation bias. Dr Pound also points out that what a Generative AI generates has elements of truth in it, even when it gets the semantics all awry, and that this can make it even harder to recognise when it is actually wrong or misleading.

Given all this, could a Generative AI generate code, given suitable training? I see no reason why not – it can (usually) generate plausible text, and since the rules of natural language are more complex, and more tolerant of ambiguity, than those of code, generating compilable code should be easy by comparison. Although it might help if the AI were trained on coding manuals and examples, not on the entire Internet.

Of course, the “quality” of the code you get (a concept of which the AI has no intrinsic idea) will depend on the codebase it has been trained on. In a previous life, I was involved in IT quality assurance in a large international bank, and so this possibility makes me a bit nervous – in most codebases, “common practice” is not necessarily “good practice”.

I once had a boss in a research institute who was proud of his ability to dictate COBOL programs to his secretary that compiled first time. I asked the IT director in the institute about this and he said yes, they compiled first time but it sometimes took ages to get them to actually work. I imagine that the output of a Generative AI “programmer” is a bit like that, especially as the business outcomes it is coding for become more complex.

All the commissioning human programmer has to do, then, is validate the Generative AI’s code (and, sometimes or usually, validating code might take longer than writing it). But the danger lies in that old comedy routine, “the computer says NO”. People tend to believe, at least after an initial period of validation, everything the computer says. The Generative AI can presumably check that its code compiles, but it has no idea whether the outcomes that result are what you want, or should want – or whether they are even vaguely ethical. We are not, I hope, just building code, but building business outcomes, and we mustn’t just trust what a Generative AI tells us. And even if we build in an extra feedback loop and tell the AI that we like what it is doing, as a human programmer I can recognise empty praise (i.e., that you aren’t well-informed enough for your positive feedback to mean anything much), but I very much doubt that a Generative AI can – yet. An Artificial Intelligence is not, today, intelligent as we are. There is intelligence there, but it is ant-like intelligence, Dr Pound suggests, not human intelligence as we understand it.
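To make the gap between “it compiles” and “it does what the business wants” concrete, here is a minimal sketch of my own (the requirement, the function and the “generated” snippet are all hypothetical illustrations, not output from any real tool): syntactically valid Python that sails through compilation, yet gets the business rule wrong in a way that only a human check against the actual requirement would catch.

```python
# Hypothetical "generated" code. The requirement was a 10% discount on orders
# over 100. The snippet below is perfectly valid Python -- it "compiles first
# time" -- but it implements the wrong rule.
generated_source = """
def apply_discount(total):
    # Intended rule: 10% off orders over 100
    if total > 100:
        return total - 10      # a flat 10 off, not 10 percent -- wrong outcome
    return total
"""

# The AI (or a build pipeline) can easily confirm the code is syntactically valid...
compile(generated_source, "<generated>", "exec")   # raises SyntaxError only if malformed

# ...but only a check against the real business requirement reveals the problem.
namespace = {}
exec(generated_source, namespace)
print(namespace["apply_discount"](200))   # prints 190, not the 180 the business rule requires
```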

Could a Generative AI “evolve” true intelligence, then? Possibly – I see no reason why not – but that is a huge leap and we are nowhere near that stage yet, in my opinion. In the meantime, I am certainly not saying that AI is useless; it will be very useful. It is just that it has to be designed as part of an augmented intelligence system, which has different risks from a purely human system and which needs different – human-oriented – controls.