
Software engineers are at the frontier of AI development and adoption. Indeed, software development dominates all other AI activity within the enterprise, according to Anthropic’s Economic Index 2025, which examines the way AI is being used in both the consumer and enterprise spaces. Among the top 15 use clusters—representing about half of all API traffic—the study found that the majority related to coding and development tasks.
But what does an engineer do when they don’t have existing agentic AI tools for a particular business use case? Many, it seems, are inclined to download and use their own private agentic AI platforms, often without the permission of their employer’s IT department. But while the use of this so-called ‘shadow AI’ might lead to new efficiencies for individual software engineers, it nonetheless engenders significant cybersecurity risks for the wider company.
Shadow AI has long been an issue, but with new autonomous agentic AI capabilities now in play, the problem will likely get worse, according to GlobalData senior technology analyst Beatriz Valle. “Shadow agentic AI presents challenges beyond traditional shadow AI,” says Valle. “Employees handling sensitive data may be leaking this data through prompts, for instance.”
Dr Mark Hoffman leads Asana’s Work Innovation Lab, a research unit within the work management company that focuses on enterprise processes. Hoffman says organisations should assume that shadow experimentation is happening.
“Right now, there is a lot of empty space between the data and context that engineers need for AI to code effectively and what they can actually access with the sanctioned tools in their organisations,” says Hoffman. “Engineers are problem solvers, and if they see a way to make their work easier, they’ll take it.”
That problem is exacerbated, he argues, by the lack of guidance provided by companies on safe avenues for AI exploration. “A smarter approach is to align policy with where engineers are finding real value and to provide official avenues for experimentation in controlled environments,” advises Hoffman. “Engineers are very likely to be experimenting in their personal time with the latest AI tools, so set up a centre of AI excellence for developers and make sure active devs are part of it, not just leaders.”
This may help mitigate the security risks associated with unsanctioned AI usage, which could range from inadvertent IP sharing to prompt injection attacks. Even so, Hoffman is quick to point out that, as with the adoption of any new technology, “the full set of risks are still emerging — particularly with agentic AI.”
Risk is not limited to the enterprise: developers themselves are likely to bear the fallout from unsanctioned use of agentic AI. Not that offenders particularly care. “Many engineers adopt unapproved tools because they worry that asking for permission will only draw attention and likely result in their approach being shut down,” explains Hoffman. “So, they default to asking forgiveness, rather than permission.”
In the long run, says Hoffman, engineers would be better placed to pitch what they are experimenting with and try to get approval for a limited internal proof of concept. “Keep it low risk, test in non-critical areas, build a tiger team, and document the value in time savings, cost savings, or accepted commits,” he advises. “It’s slower than just hacking, but it builds the evidence needed to win leadership support.”
How to detect shadow agentic AI
If accepting shadow agentic AI’s prevalence is the first step, then detection becomes the next challenge. It’s not one that, as yet, has been surmounted. Ray Canzanese, director of Netskope Threat Labs, says shadow agentic AI is already spreading in a noticeable way, with the firm estimating that 5.5% of all organisations have employees running AI agents created with frameworks such as open-source application builder LangChain or the OpenAI Agent Framework.
“On-premises deployments are often much harder for security teams to detect,” says Canzanese. “An engineer running an agent on their laptop with a framework like LangChain or Ollama can create a blind spot. That is why visibility, real-time coaching, and clear policy are essential to manage this emerging practice.”
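To give a sense of how little it takes to create the kind of on-laptop agent Canzanese describes, the sketch below shows a minimal local agent built with the LangChain/LangGraph and Ollama Python packages. It is an illustration rather than a reconstruction of any tool mentioned in this piece; the model name and the file-reading tool are hypothetical choices, and the packages must already be installed with a model pulled locally.

```python
# Illustrative sketch only: a self-contained "shadow" agent running entirely on a laptop.
# Assumes: `pip install langchain-ollama langgraph` and `ollama pull llama3.1` have been run.
from langchain_ollama import ChatOllama
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent


@tool
def read_repo_file(path: str) -> str:
    """Return the contents of a file from the local source checkout."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()


# The model is served locally by Ollama, so no traffic reaches a managed AI gateway.
llm = ChatOllama(model="llama3.1")

# A ReAct-style agent that can call the file-reading tool on its own initiative.
agent = create_react_agent(llm, tools=[read_repo_file])

result = agent.invoke(
    {"messages": [("user", "Summarise the TODO comments in src/main.py")]}
)
print(result["messages"][-1].content)
```

Because both the model and the agent loop run on the engineer’s own machine, nothing in this flow passes through a corporate proxy or sanctioned AI gateway, which is one reason such deployments are harder for security teams to see.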
According to Canzanese, AI platforms are the fastest-growing category of shadow AI precisely because they make it so easy for individuals to create and customise their own tools. Does Canzanese imagine there will ever be a point at which every specific use case will be served by agentic AI? “With time, yes,” he says. “The growth of platforms like Azure OpenAI, Amazon Bedrock, and Google Vertex AI makes it much easier for individuals to spin up custom agents that fit their own workflow. In time, though, we can expect vendors to cover more of these use cases, but at the moment it is very accessible for engineers to build their own.
“We’ve seen that the average organisation is already uploading slightly more than 8GB of data a month into AI tools, including source code, regulated data, and other commercially sensitive information. If that flow of data is happening through unauthorised agents as well as unmanaged GenAI, the risks multiply quickly.”
A version of this piece originally appeared on Verdict.