Zoho has just announced its own large language model, Zia LLM, built with NVIDIA’s AI accelerated computing platform. Zoho already supports a range of third-party AI tools, but the advantages of Zia LLM are, first and foremost, that all data remains within Zoho services (thus addressing the privacy and security risks associated with AI); that it has been developed with Zoho product use cases in mind; and that it comes with three separately trained and optimised models, with 1.3 billion, 2.6 billion and 7 billion parameters, which allow users to optimise AI costs for particular applications. Zoho says that this focus on right-sizing models is an ongoing development strategy, and its CTO talks about why it built its own LLMs here.
Critically, in our opinion, Zoho is also providing an effective and comprehensive ecosystem around its AI products, with:
- Two proprietary Automatic Speech Recognition (ASR) models for speech-to-text conversion, for both English and Hindi. Zoho claims that these models benchmark up to 75% better than comparable models across standard tests (and support for additional languages is planned). This is a “high impact, low risk” application for AI.
- A Zoho Model Context Protocol (MCP) server with a rich action library across several applications, allowing any MCP client to tap into data and actions from various Zoho apps while respecting the customer’s defined permission structures (see the client sketch after this list).
- AI Agent Studio, which was first announced earlier in 2025 but has now been given a simplified prompt-based user experience (although it still offers the option of a low-code experience) and includes ready-made access to over 700 actions across Zoho’s products. Agents built by users can be deployed autonomously (which I see as rather higher risk), triggered through a button click or rule-based automation, or summoned within customer conversations.
- The option to provision an agent as a digital employee at the point of deployment. This is a particularly interesting part of the ecosystem, although I’d see it as adding extra risk again. Nevertheless, Zoho Digital Employees respect the user access permissions and permission structures that the organization already defines. In addition, admins can perform behavioral audits, as well as performance and impact analyses, on Digital Employees, which should help to enforce guardrails and manage risk.
- Zoho Marketplace, which I see as critical to providing a fully effective ecosystem and which supplies over 2,500 reliable extensions and integrations for Zoho users, now includes an Agent Marketplace. This is aimed at ecosystem partners, ISVs, and individual developers creating Zoho agents, and it will facilitate the adoption of agentic technology.
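On the MCP point above: as a concrete illustration, here is a minimal sketch of a generic MCP client discovering and invoking actions over the protocol, using the public Python MCP SDK (pip install mcp). The endpoint URL, transport choice, and action name are hypothetical placeholders of mine, not Zoho’s published interface, so check Zoho’s MCP documentation for the real details.

```python
# A minimal, generic MCP client sketch (pip install mcp). The URL and the
# action name are hypothetical placeholders, NOT Zoho's published interface;
# the real Zoho MCP server may use a different transport and will require
# authentication.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

ZOHO_MCP_URL = "https://zoho-mcp.example.com/sse"  # placeholder endpoint

async def main() -> None:
    # Open an SSE transport to the server and start an MCP session over it.
    async with sse_client(ZOHO_MCP_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover the action library the server exposes. Zoho says the
            # server respects the customer's defined permission structures,
            # so this list would already be filtered per user.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(f"{tool.name}: {tool.description}")

            # Invoke one action (name and arguments are invented examples).
            result = await session.call_tool(
                "crm_search_contacts",
                arguments={"query": "renewals due this month"},
            )
            print(result.content)

asyncio.run(main())
```

The point of the protocol is exactly this uniformity: any client that speaks MCP can enumerate whatever action library the server chooses to expose, with the server enforcing the customer’s permission structures on its side.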
This all sounds very interesting, but I don’t have space to go into details here. What I do want to look at is the concept of “explainable AI”, which means, in effect, that you can determine the plausibility of a generative AI outcome or decision in some rational way. Such an explanation should be “out of band” – i.e. not using generative AI – since any explanation offered by the AI will probably suffer from the same biases as those that might compromise the original result. This matters because generative AI can generate (or “hallucinate”) plausible but entirely spurious results (always check out any citations a generative AI provides) and is critically susceptible to biased training data.
However, Bloor analyst Andy Hayler pointed out to me that this “explainable AI” issue is probably worse than I thought, and he provides some useful (human-generated) citations:
“No LLM can genuinely explain its reasoning,” he says. “The best that can be done is to quote the sources used. What can be done is to ask the LLM about its ‘chain of thought’, and it will then come up with a plausible-sounding post-hoc explanation. However, these explanations do not and cannot reflect the underlying computations that led to the answer, which depend on hundreds of neural network layers and billions of parameters. More on this is explained in this discussion thread here. So an LLM can pretend to explain its reasoning, but such explanations are fiction. The underlying neural computations do not preserve, or even construct, a transparent reasoning pathway that could be reported afterwards”. A more in-depth explanation, with an example, can be found here.
Nevertheless, it seems to me that once an AI has given its answers and stated its sources, non-generative AI (machine learning?) or conventional analytics ought to be able to justify (possibly only approximately) the conclusions the AI came to. If it can’t, the user of the generative AI ought to be told that the AI result is just “magic” – use with caution, and be prepared to look silly if anyone questions you. I feel sure that non-generative validation could be built into generative AI products. But then I question why one would not just run the validation without bothering with the AI at all. Validation probably won’t be built into all AI tools anyway, because many people will just want to run a generative AI until it hallucinates the answer they wanted in the first place.
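By way of illustration, here is a minimal sketch of what such out-of-band validation might look like for one narrow case: a numeric claim in an LLM’s output is recomputed deterministically from the underlying data. The claim-extraction step, the tolerance, and the example figures are all my own assumptions; the point is only that no generative AI is involved in the check.

```python
# Sketch of an "out of band" check: recompute a figure the LLM has quoted
# using plain deterministic code, and flag disagreement. The claim is assumed
# to have been extracted already; nothing generative runs here.
import statistics

def validate_mean_claim(claimed_mean: float, raw_values: list[float],
                        rel_tolerance: float = 0.01) -> str:
    actual = statistics.fmean(raw_values)
    if abs(actual - claimed_mean) <= rel_tolerance * max(abs(actual), 1.0):
        return f"supported: recomputed mean = {actual:.2f}"
    return (f"unsupported: model claimed {claimed_mean:.2f}, deterministic "
            f"recomputation gives {actual:.2f}; treat the answer as 'magic'")

# e.g. an LLM summarised a deals table as "the average deal size was 10,400"
deal_sizes = [9_800, 10_150, 10_600, 11_900, 9_700]  # invented data
print(validate_mean_claim(10_400.0, deal_sizes))
```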
I asked Sujatha Iyer of Zoho about the explainability issue, and she confirmed Andy’s point of view: “So I think this is a classic case of the confirmation bias that these LLMs have, right? If you say ‘you know, the answer is right’, the LLM would say ‘absolutely, you are right, the answer is right because so-and-so’, and it will even come up with a citation; and then if you say ‘no, you’re wrong because so-and-so’, it would again agree with you. So what I would say is that using an LLM is one thing, but there is no specific way to say that [its answers are accurate]; or, at least, this confirmation bias is something that is inherent in the nature of LLMs. This is the classical difference between deterministic and probabilistic systems, right?”
Sujatha works for the central R&D team focusing on long-term AI research at Zoho Corporation. She heads the division that focuses on AI in security, working on use cases like anomaly detection, malware, user and entity behaviour analysis, and so on. So she is well placed to emphasize that not every query is an LLM opportunity, and that Zoho gives its users a complete app environment with rich analytics opportunities that deliver deterministic answers without incurring what she calls “a GPU tax” – the overheads of LLM processing. She also points out that, while hallucinations and the like can be a major problem in social media and business-to-consumer environments (although often no one cares if you fake a funny cat picture and give it five legs), Zoho operates in the more disciplined business-to-business arena. All of the Zoho apps (including conventional analytics tools as well as LLM tools) interoperate, with Zia Search serving as the RAG (Retrieval Augmented Generation) database, and reside on top of the Zoho Directory, which controls what data they can access. “If you want to do fact checking”, she says, “I would suggest using deterministic cost algorithms that can pinpoint the exact root cause, instead of LLM tools”.
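As a small illustration of the deterministic alternative Sujatha describes, here is a sketch of a classic z-score anomaly detector of the kind used in user and entity behaviour analysis. The data and threshold are invented for the example; the point is that the answer is repeatable and auditable, with no GPU tax.

```python
# A deterministic z-score anomaly detector of the kind used in user and
# entity behaviour analysis. Same input, same answer, every run; the
# threshold is explicit and auditable, and no LLM (or GPU) is involved.
# Data and threshold are invented for illustration.
import statistics

def find_anomalies(samples: list[float], z_threshold: float = 2.5) -> list[int]:
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    return [i for i, x in enumerate(samples)
            if stdev > 0 and abs(x - mean) / stdev > z_threshold]

# Hourly login counts for one user; hour 7 is the obvious outlier.
logins = [4, 5, 3, 6, 5, 4, 5, 240, 6, 5]
print(find_anomalies(logins))  # -> [7]
```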
The LLM use cases Sujatha identifies as most useful include natural language processing, augmentation of Zoho Search when dealing with large amounts of low-risk data, and summarization of meeting transcripts and call records.
So, I am left with the view that, for Zoho at least, generative AI with LLMs is a useful tool to have in the box, but one that should only be used where its use is appropriate, not as a ubiquitous Swiss army knife. In the business environment, very often, conventional analytics will give you better quality, more repeatable results at lower resource cost. This rather reflects my own views, although I suppose that you could use these deterministic approaches to confirm a result from an LLM tool. Even so (unless the LLM tool is helping you to make an initial assessment of a chaotic dataset), why not just run conventional analytics in the first place? And then perhaps use an LLM tool to précis the report for managers and draw some pretty illustrations for an e-book report.