Conversational AI with Druid and ChatGPT

Content Copyright © 2023 Bloor. All Rights Reserved.
Also posted on: Bloor blogs

I have been looking at a Conversational AI platform called Druid, at why it is more than just another ChatBot, and at how it might be impacted by ChatGPT from OpenAI, the latest “silver bullet” (in some interpretations) for machine-to-human communication.

Well, a software platform is something that hosts a set of related services and provides a foundation for those services to operate effectively. One of these services might indeed be a basic ChatBot, a software component designed to simulate conversation with human users; other services might provide statistics for the interaction through visual dashboards, using near-real-time derived metrics for quality, performance, and ROI. Automation can go as far as you are comfortable with, so first-level support, say, might be fully automated with a chatbot; more difficult issues might be escalated to a human being, with access to the conversation so far.
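
To make the escalation idea concrete, here is a minimal sketch, purely illustrative, in Python, and not Druid’s actual API, of a first-level bot that answers what it can and hands anything else, together with the transcript so far, to a human agent:

```python
# Hypothetical sketch of first-level automation with human escalation.
# None of these names come from Druid; they are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Conversation:
    user_id: str
    messages: list = field(default_factory=list)  # (speaker, text) pairs


# Canned first-level answers the bot can handle on its own.
FIRST_LEVEL_ANSWERS = {
    "reset password": "You can reset your password from the account page.",
    "opening hours": "Support is available 09:00-17:00, Monday to Friday.",
}


def escalate_to_human(conversation: Conversation) -> str:
    # A human agent gets the full transcript, so no context is lost.
    transcript = "\n".join(f"{who}: {text}" for who, text in conversation.messages)
    print("Escalating to a human agent with transcript:\n" + transcript)
    return "Let me hand you over to a colleague who can help."


def handle(conversation: Conversation, question: str) -> str:
    conversation.messages.append(("user", question))
    for keyword, answer in FIRST_LEVEL_ANSWERS.items():
        if keyword in question.lower():
            conversation.messages.append(("bot", answer))
            return answer
    # Anything the bot cannot match is escalated, conversation and all.
    return escalate_to_human(conversation)
```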

The latest release of DRUID Oxygen, for example, comes as a platform with a connector for UiPath robotic process automation (RPA) robots (there is an example of how this might work here), conversational flow and entity designers, and a collection of pre-configured solutions that use conversational AI for a range of different industries, use cases, and business roles.

Real-life bot conversations can be used as a source to improve the ChatBot’s understanding of natural language, and the platform can run continuous testing to help validate the bot’s evolving behaviour. Building a ChatBot can be as simple as dragging and dropping actions into a flow diagram, but its behaviour in the field can be monitored and improved continuously.
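
As an illustration of what that continuous testing might look like (a hypothetical sketch, not Druid’s testing framework), the idea is simply to replay utterances logged from real conversations through the bot’s intent classifier and flag any case where the answer has drifted from what was expected:

```python
# Hypothetical regression suite over logged conversations; the cases and the
# toy classify_intent() stand in for the bot's real NLU model and test data.
LOGGED_CASES = [
    ("I can't log into my account", "login_problem"),
    ("When do you open on Saturdays?", "opening_hours"),
    ("I want to cancel my subscription", "cancellation"),  # deliberately unhandled
]


def classify_intent(utterance: str) -> str:
    # Stand-in for the bot's real natural-language-understanding model.
    text = utterance.lower()
    if "log in" in text or "log into" in text:
        return "login_problem"
    if "open" in text:
        return "opening_hours"
    return "unknown"


def run_regression_suite() -> None:
    failures = []
    for utterance, expected in LOGGED_CASES:
        got = classify_intent(utterance)
        if got != expected:
            failures.append((utterance, expected, got))
            print(f"REGRESSION: {utterance!r} -> {got} (expected {expected})")
    print(f"{len(LOGGED_CASES) - len(failures)}/{len(LOGGED_CASES)} cases passed")


if __name__ == "__main__":
    run_regression_suite()
```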

And the platform should help to assure security, performance, privacy, resilience and so on. A platform supplies all you need to deliver a business solution, not just a simple app.

So, now ChatGPT comes along, using advanced AI to understand what is being said to it and then to make sensible replies, and it is, by all accounts, very good at this. Surely, this will make something like Druid obsolete? Well, no, ChatGPT may be extremely plausible, but it can’t actually think. Its AI is very advanced; it is an example of generative AI, the family of techniques that can create new art, music – and conversations. Simplistically, ChatGPT is built on a large language model: a neural net trained on a huge amount of text to predict what comes next, then fine-tuned with human feedback so that it learns which responses people will accept. The result is something that looks like an artwork or a piece of music or a conversation. But how “meaningful” is this result really? It can mimic human creativity, but might some of its answers, even if logically correct, be unethical? I don’t think I’d like to have blind faith in it for anything really important, and I also wonder how much resource this sort of AI uses and what its carbon footprint is. Nevertheless, ChatGPT could be a very useful adjunct to a platform like Druid.
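
For instance, a conversational platform could fall back to a generative model whenever its scripted flows run out of answers. The sketch below uses the OpenAI Python client as it looked in 2023 (openai.ChatCompletion.create); it illustrates the idea rather than describing Druid’s integration, and the system prompt and parameters are my own assumptions:

```python
# Hypothetical fallback: hand an unanswered question to ChatGPT via the OpenAI
# API (2023-era client). Not Druid's integration; names and prompt are assumed.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def fallback_answer(question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a support assistant. If you are unsure, say so "
                        "and advise the user to verify the answer."},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep replies conservative in a support setting
    )
    return response["choices"][0]["message"]["content"]


# Only called when the scripted conversational flow has no matching answer:
# print(fallback_answer("How do I export my chat history?"))
```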

Which is why there is a very interesting article in Druid’s Monthly Digest about generative AI and ChatGPT, going into more detail than I have space for here. Perhaps a good finish for my blog is to quote from this article, where ChatGPT itself is asked about its limitations:

“As a language model, my main limitation is that I am based on patterns and correlations that I have seen in my training data,” ChatGPT says. “While this allows me to understand and generate text that is similar to human text, it also means that I can make errors or produce text that is not appropriate or accurate in certain situations. My responses may be biased based on the training data that I have seen and I might not understand certain idiomatic expressions or cultural references.

“Additionally, I don’t have physical access to the world,” it continues, “I don’t have consciousness or feelings, so I don’t have the ability to sense or experience the world, so I can’t provide personal opinions or experiences”.

It writes well, doesn’t it? And it makes sense. ChatGPT goes on to advise readers to verify any information it provides. Good advice – partly because how can I be sure that my quote from ChatGPT wasn’t simply hard-coded somewhere as a facile response to an obvious question?