It was good to meet Per Kroll (Senior Director, Engineering at Broadcom) again at the Broadcom Mainframe Division’s 2025 Analyst Relations Forum (June 2025, in Cambridge, Massachusetts). I have known him for years, and if he now tells me that AI (Augmented – or Artificial – Intelligence), and even “Agentic AI” – a more proactive AI that operates autonomously and needs less human input – will have an important place in systems development, then I believe him.

Of course, he is fully aware of the potential issues with autonomous AI and its capability for hallucinations. He suggests that the value from leveraging MCP (Model Context Protocol) architectures that exploit AI will come from identifying high impact, low risk scenarios for its deployment and from building a basis for enabling possible Agentic AI in the future.

A typical example is someone who has to make updates to a COBOL codebase they haven’t worked on before. A tool enabled with Agentic AI can understand the structure of the code and automatically summarize what it actually does, without relying on (very possibly out-of-date) programmers’ comments. This should result in marked improvements in productivity at low risk (as the Agentic AI isn’t actually writing new code).
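To make the “understand the structure of the code” step concrete, here is a toy sketch (my own illustration, not a description of any Broadcom product) of the sort of first pass such a tool might make: extracting the paragraph names from the PROCEDURE DIVISION of a COBOL source file before asking an LLM to summarize each one. The sample program and its names are invented.

```python
import re

def cobol_paragraphs(source: str) -> list[str]:
    """Toy structural pass: list paragraph names in the PROCEDURE DIVISION."""
    in_proc = False
    names = []
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("PROCEDURE DIVISION"):
            in_proc = True
            continue
        if in_proc:
            # A paragraph header is a bare name ending with a period.
            m = re.match(r"^([A-Z0-9][A-Z0-9-]*)\.\s*$", stripped, re.IGNORECASE)
            if m:
                names.append(m.group(1))
    return names

sample = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYROLL.
       PROCEDURE DIVISION.
       MAIN-LOGIC.
           PERFORM CALC-PAY.
           STOP RUN.
       CALC-PAY.
           ADD 1 TO WS-TOTAL.
"""
print(cobol_paragraphs(sample))  # → ['MAIN-LOGIC', 'CALC-PAY']
```

A real tool would of course go much further (data flow, PERFORM chains, copybook usage), but even this skeleton is the kind of structural summary that doesn’t depend on stale comments.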

He also envisages Generative AI-enabled general tools rather than separate AI tools and projects – AI is only a means to an end, which is (in the current context) to make life easier and more productive for developers. These days, part of this (and something essential to Agentic AI) is a set of new standard protocols that allow different AI-enabled tools to communicate:

  • The Agent2Agent (A2A) Protocol is an open standard designed to enable seamless communication and collaboration between AI agents. This should help to limit the growth of dysfunctional vendor- or framework-based AI silos.
  • The Model Context Protocol is complementary to A2A: it is an open protocol that standardizes how applications provide context to LLMs (Large Language Models). Essentially, it provides a uniform way to connect AI models to different resources (such as data sources and tools).

If you link Endevor, Broadcom’s software asset management solution, to an AI via the two protocols just mentioned, for example, the AI might summarize what the various assets do, identify dependencies between assets, and document the likely impact of any changes, without the developer having to ask for it. Of course, deciding what is “low risk, high impact” implies a judgment call by a human being. If the alternative is no effective impact analysis at all, then an Agentic AI highlighting an impact you might have overlooked, for you to evaluate, reduces risk significantly; but relying blindly on an AI to tell you what the impact of a change is might actually increase risk. An AI might be biased – blind to a certain class of risk, say – or simply hallucinate a dependency that doesn’t exist – which could delay implementation of something important to the business.
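The core of the impact analysis described above is a simple graph walk, which can be sketched in a few lines. This is my own illustration, with invented asset names; a real integration would pull the dependency graph from Endevor over MCP rather than hard-code it.

```python
from collections import deque

# Hypothetical map: which assets depend on which (copybook -> programs -> jobs).
dependents = {
    "COPYBOOK-A": ["PROG-1", "PROG-2"],
    "PROG-1": ["JOB-NIGHTLY"],
    "PROG-2": [],
    "JOB-NIGHTLY": [],
}

def impacted(changed: str) -> set[str]:
    """Everything transitively affected by a change to one asset (BFS)."""
    seen, queue = set(), deque([changed])
    while queue:
        for dep in dependents.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(impacted("COPYBOOK-A")))  # → ['JOB-NIGHTLY', 'PROG-1', 'PROG-2']
```

The mechanical walk is trivial; the value an AI adds – and the risk it introduces – lies in how the graph itself is inferred, which is exactly where a hallucinated dependency would creep in.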

We come back to the fundamental issue with AI – it is not really “Artificial Intelligence” but “Augmented Intelligence”: it helps a human make more intelligent decisions. Of course, if the impact of any errors or hallucinations is small and the AI simply helps you cope with more variety than an unassisted human can manage, then relying on the AI is probably OK; but if the potential impact is high, then less so. Imagine an Agentic AI deciding that your best customer is a deep-fake bad actor attempting a fraud and preventing her from making a multi-million-dollar order, for example.

It will, I think, be important to design systems using Agentic AI with “guard rails” and to make sure that they feed back on what they are doing, at an aggregated business level, to somewhere (a dashboard, perhaps) where a manager can notice abnormalities and act on them. I can see Agentic AI being very useful, nevertheless, especially if implemented effectively in organizations that have adopted automated DevOps Continuous Delivery approaches.

It does also occur to me, however, that the use of AI, and especially Agentic AI, built into your delivery pipeline must increase the “attack surface” for bad actors. Mary Ann Furno (Director of Engineering, Cybersecurity and Compliance at Broadcom), for example, points out that: “Agent identities will need continuous monitoring of access scope with special attention and control over privileged access”. Maintaining a well-planned, well-tested security environment (with human oversight), as discussed by Andy Hayler of Bloor, will be essential – increasingly so as you rely more on automation.
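The “guard rails with feedback” idea can be sketched very simply: the agent acts autonomously only below some materiality threshold, everything larger is escalated to a human, and every decision is logged for a dashboard. The threshold, action names, and class below are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class GuardRail:
    autonomy_limit: float               # max order value the agent may block alone
    audit_log: list = field(default_factory=list)  # feed for a management dashboard

    def decide(self, action: str, order_value: float) -> str:
        """Act autonomously below the limit; escalate to a human above it."""
        outcome = "auto" if order_value <= self.autonomy_limit else "escalate"
        self.audit_log.append((action, order_value, outcome))
        return outcome

rail = GuardRail(autonomy_limit=10_000.0)
print(rail.decide("block-suspected-fraud", 500.0))        # → auto
print(rail.decide("block-suspected-fraud", 2_000_000.0))  # → escalate
```

The interesting design question is not the threshold check itself but the audit log: aggregated over time, it is what lets a manager spot the abnormality (say, a sudden spike in escalations) rather than discovering it from an angry customer.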