There has been a flood of interest and investment in AI since the release of OpenAI’s ChatGPT in November 2022, which gained a remarkable hundred million users in two months. By comparison, Instagram took two and a half years to reach that milestone. Enterprises have scrambled to assess the technology’s impact and to explore the wide range of applications that generative AI makes possible: writing marketing copy, holding a conversation as a customer service chatbot, generating high-quality images and videos, producing software code and looking for patterns in medical images, amongst others. A torrent of venture capital has flooded in, with AI-related investments reaching $110 billion in 2024, a 62% increase over 2023. The pace has continued in 2025, with almost $60 billion invested in AI in Q1 alone. Around half of all venture investment is now in AI.
With all that investment behind it, it is not surprising that enterprises have been following the money: 78% of them were using AI in 2024, according to McKinsey. Yet many early deployments have hit issues, partly because the large language models (LLMs) behind generative AI “hallucinate” in roughly one in five of their answers, generating plausible-sounding but made-up, and sometimes nonsensical, responses. For some applications this does not matter much: if you don’t like a generated logo for your start-up, just run the AI a few more times until it produces one you do like. For many enterprise uses, though, consistency and accuracy are essential: you don’t keep a calculator alongside your Excel spreadsheet to double-check its answers. Some classes of use cases are simply more suitable for generative AI than others.
The latest trend is not just to ask an AI a question, but to set it free. Agentic AI is the idea of autonomous AI agents carrying out tasks, often by chaining multiple models together and calling other programs, as the sketch below illustrates. An agentic AI might not just research a possible holiday for you, but actually book the flights and hotels if you give it your credit card. This is an emerging area, and one that brings many risks with it. It is so new that studies into whether it really works, and how it compares to human performance, have only recently begun.
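The loop at the heart of agentic AI can be sketched in a few lines. The following is a minimal illustration, not any vendor’s API: `call_llm` is a hypothetical stand-in that returns a canned sequence of actions, where a real agent would send the running context to a hosted model and parse a structured action from its reply.

```python
# A minimal sketch of an agentic loop: the model proposes an action, a program
# executes it, and the result is chained back into the next model call.
# call_llm, search_flights and book_flight are hypothetical stand-ins.
def call_llm(context: str) -> dict:
    """Stand-in for an LLM call; returns a canned action sequence for the demo."""
    if "search_flights ->" not in context:
        return {"tool": "search_flights", "args": {"destination": "Lisbon"}}
    if "book_flight ->" not in context:
        return {"tool": "book_flight", "args": {"flight_id": "LX-204"}}
    return {"tool": "finish", "args": {}, "answer": "Trip booked."}

TOOLS = {
    "search_flights": lambda destination: f"3 flights found to {destination}",
    "book_flight": lambda flight_id: f"Booked {flight_id}",  # spends real money
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    context = goal
    for _ in range(max_steps):
        action = call_llm(context)
        if action["tool"] == "finish":
            return action["answer"]
        result = TOOLS[action["tool"]](**action["args"])
        context += f"\n{action['tool']} -> {result}"  # chain the result back in
    return "Gave up after max_steps"

print(run_agent("Book me a weekend trip to Lisbon"))
```

Even at this toy scale the risk is visible: the `book_flight` step spends real money purely on the strength of model output.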
Quite apart from questions about its effectiveness, there are major concerns about the security implications of letting AI agents off the leash in your enterprise. To operate effectively, an AI agent needs access to corporate IT systems and the ability to execute programs. The tech world has only recently started to build protocols for such activities, and the emerging Model Context Protocol (MCP) has some worrying aspects. Beyond this particular protocol, there are other troubling aspects to the headlong rush to implement AI. A 2025 SoSafe study found that 87% of enterprises had suffered an AI-related cyberattack in the previous year. A 2024 IBM study found an average cost of $4.9 million per breach, yet just 24% of generative AI projects are secured. In the SoSafe study, a striking 98% of security professionals felt that the “resilience gap” is widening, meaning that enterprises are more vulnerable than they were in the past. Voice phishing attacks quadrupled in 2024 according to a CrowdStrike report, while a technique called steganography allows malware to be hidden inside innocent-looking images: if an employee clicks on an email carrying such an image, the malware is delivered and can do its work.
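To make the access concern concrete, here is a minimal sketch, assuming the official `mcp` Python SDK and its FastMCP helper, of an MCP server that hands an agent an unrestricted shell. The `run_command` tool is a deliberately bad design, shown to illustrate the risk rather than to recommend it.

```python
# A minimal sketch, assuming the official `mcp` Python SDK (FastMCP).
# It exposes a single tool that runs arbitrary shell commands -- exactly
# the kind of broad access that makes agentic AI a target.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("corporate-tools")

@mcp.tool()
def run_command(command: str) -> str:
    """Run a shell command on the host and return its output."""
    # Dangerous: the agent, and anyone who can inject a prompt into it,
    # can now execute arbitrary programs with this server's privileges.
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=30)
    return result.stdout + result.stderr

if __name__ == "__main__":
    mcp.run()  # serves the tool to any connected agent over stdio
```

Nothing in the protocol itself stops a server from offering tools this broad; the controls have to come from the enterprise deploying it.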
Some attacks using AI are quite sophisticated, and some don’t even require access to corporate IT systems. Engineering firm Arup fell victim to an elaborate deepfake scam that cost it around $25 million; a similar attempt at WPP in 2024 failed. AI enables more realistic and customised phishing emails as well as deepfakes. AI malware has already appeared and been used, including polymorphic malware that changes its code structure each time it infects a new system, making it hard for signature-based defences to detect. Agentic AI represents a greater security risk than LLMs used as chatbots, since agents must be granted extensive access to corporate systems to do their job, and may require privileged (administrator) rights to operate across multiple systems. That makes them a juicy target for hackers.
Enterprises are not defenceless against this AI onslaught. Continuous monitoring of networks and systems is vital, as is threat detection that identifies unusual patterns of behaviour. Filters can be implemented to check email and chatbot prompts for suspicious content, and software is available that reduces the attack surface exposed to hackers. Preventative measures should cover third-party partners that send data to your enterprise, as well as internal systems. Establishing robust data governance processes will help, as will incorporating security assessments into each stage of AI model development and deployment. Model access should be restricted as far as possible to those who need it, and data should be encrypted both at rest and in transit. The ISO/IEC 27001 standard for information security management systems can provide a broad framework for these efforts.

As enterprises implement AI models directly and install third-party software that uses AI, it is vital that they proactively consider the security implications of these applications. This is especially true of agentic AI, which needs to be tightly controlled and monitored, as sketched below, if it is not to become a Trojan horse for malware, willingly brought inside the corporate firewall by companies eager to explore the latest in AI.
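What “tightly controlled and monitored” can mean in practice is sketched below: an allowlist plus an audit log wrapped around an agent’s tool calls. The `GuardedToolRunner` class and the tool names are illustrative assumptions, not a product API.

```python
# A minimal sketch of least-privilege control for agent tool calls:
# only pre-approved tools run, and every attempt is logged for audit.
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent-audit")

class GuardedToolRunner:
    """Wrap an agent's tool calls with an allowlist and an audit trail."""

    def __init__(self, allowed_tools: dict[str, Callable[..., str]]):
        # Only pre-approved, narrowly-scoped tools ever reach the agent.
        self.allowed_tools = allowed_tools

    def call(self, tool_name: str, **kwargs) -> str:
        if tool_name not in self.allowed_tools:
            log.warning("BLOCKED tool call %s args=%s", tool_name, kwargs)
            raise PermissionError(f"Tool '{tool_name}' is not on the allowlist")
        log.info("ALLOWED tool call %s args=%s", tool_name, kwargs)
        return self.allowed_tools[tool_name](**kwargs)

# Example: the agent may look up an order, but gets no shell or database access.
def lookup_order(order_id: str) -> str:
    return f"Order {order_id}: shipped"  # stand-in for a narrowly-scoped API

runner = GuardedToolRunner({"lookup_order": lookup_order})
print(runner.call("lookup_order", order_id="A-1001"))    # allowed and logged
try:
    runner.call("run_shell", command="cat /etc/passwd")  # blocked and logged
except PermissionError as err:
    print(err)
```

Pairing a wrapper like this with least-privilege credentials for each tool helps keep a compromised or manipulated agent from becoming the Trojan horse described above.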