Software engineers are at the frontier of AI development and adoption. Indeed, software development dominates all other AI activity within the enterprise, according to Anthropic’s Economic Index 2025, which examines the way AI is being used in both the consumer and enterprise spaces. Among the top 15 use clusters—representing about half of all API traffic—the study found that the majority related to coding and development tasks.
Debugging web applications and resolving technical issues each account for roughly 6% of usage, while building professional business software represents another significant chunk.
But what does an engineer do when they don’t have existing agentic AI tools for a particular business use case? It’s exactly the kind of problem solving that is second nature to engineers. After all, creating automated solutions to human problems is the cornerstone of the role. But this shadow AI—the unsanctioned use of AI tools and applications by employees within an organisation, without the knowledge or oversight of the IT or security departments—carries significant risk.
Shadow AI has long been an issue, but with new autonomous agentic AI capabilities the problem will likely get worse, according to GlobalData senior technology analyst Beatriz Valle. “Shadow agentic AI presents challenges beyond traditional shadow AI, because employees handling sensitive data may be leaking this data through prompts, for instance.”
Dr Mark Hoffman leads Asana’s Work Innovation Lab, a research unit within the work management company that focuses on enterprise processes. Hoffman says organisations should assume that shadow experimentation is happening.
“Right now, there is a lot of empty space between the data and context that engineers need for AI to code effectively and what they can actually access with the sanctioned tools in their organisations. Engineers are problem solvers, and if they see a way to make their work easier, they’ll take it,” says Hoffman.
“Too many companies have offered little guidance or safe spaces for AI exploration, which only drives more unsanctioned use. A smarter approach is to align policy with where engineers are finding real value and to provide official avenues for experimentation in controlled environments,” he advises.
“Engineers are very likely to be experimenting in their personal time with the latest AI tools, so set up a centre of AI excellence for developers and make sure active devs are part of it, not just leaders.”
All of which will go some way to mitigating the security risks, which range from inadvertent IP sharing to prompt injection attacks. And as with the adoption of any new technology, “the full set of risks are still emerging, particularly with agentic AI,” adds Hoffman.
Risk is not limited to the enterprise: engineers may shoulder the burden of any agentic AI fallout. “Many engineers adopt unapproved tools because they worry that asking for permission will only draw attention and likely result in their approach being shut down. So, they default to asking forgiveness, rather than permission,” Hoffman explains.
But a better way is for engineers to pitch what they are experimenting with and try to get approval for a limited internal proof of concept. “Keep it low risk, test in non-critical areas, build a tiger team, and document the value in time savings, cost savings, or accepted commits. It’s slower than just hacking, but it builds the evidence needed to win leadership support,” suggests Hoffman.
Shadow agentic AI needs detecting first
If accepting shadow agentic AI’s prevalence is the first step, then detection becomes the first challenge, because by its very nature the practice is intended to fly under the radar of internal processes. Ray Canzanese, director of Netskope Threat Labs, says shadow agentic AI is “already happening in a noticeable way”. Netskope’s own research found that 5.5% of organisations have employees running AI agents created with frameworks such as the open-source application builder LangChain or the OpenAI Agent Framework.
“That might sound small, but it’s significant given how new these tools are. It mirrors the broader trend we see across AI, where employees first bring the technology in as shadow AI, and then continue to rely on personal or unmanaged apps, even as companies roll out enterprise-approved solutions,” explains Canzanese.
While the need to serve specific use cases is driving shadow agentic AI, as Hoffman suggests, the fact that custom agent-building tools are so widely available, easy to use, and often free to experiment with is compounding the problem.
According to Canzanese, AI platforms are the fastest-growing category of shadow AI precisely because they make it so easy for individuals to create and customise their own tools. Does Canzanese imagine there will ever be a point at which every specific use case will be served by agentic AI? “With time, yes,” he says.
“The growth of platforms like Azure OpenAI, Amazon Bedrock, and Google Vertex AI makes it much easier for individuals to spin up custom agents that fit their own workflow. In time, though, we can expect vendors to cover more of these use cases, but at the moment it is very accessible for engineers to build their own,” he says.
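To illustrate just how accessible this has become, the sketch below shows roughly how few lines an engineer might need to wire a hosted model to a local tool. It is illustrative only, assuming the official OpenAI Python SDK and an API key in the environment; the model name, prompt, and the run_query helper are hypothetical, not drawn from the article or any vendor’s sanctioned pattern.

```python
# Illustrative sketch only: a self-built "agent" in a few lines.
# Assumes the official OpenAI Python SDK and an API key in the environment;
# model name, prompt, and run_query are hypothetical.
import subprocess
from openai import OpenAI

client = OpenAI()

def run_query(sql: str) -> str:
    # The "tool": runs whatever SQL the model produces via the psql CLI.
    # Note there is no review step between model output and execution.
    return subprocess.run(["psql", "-c", sql], capture_output=True, text=True).stdout

question = "Which customers churned last quarter?"  # this prompt leaves the company via the API
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Reply with a single SQL query and nothing else."},
        {"role": "user", "content": question},
    ],
)
print(run_query(resp.choices[0].message.content))
```

Even in this toy form, the two risks the analysts describe are visible: the prompt (and anything pasted into it) flows to an external service, and the model’s output is acted on autonomously with no human check.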
In the meantime, the fact that agents often have direct access to company data and the ability to act autonomously creates significant data security risks, as well as a loss of visibility.
“On-premises deployments are often much harder for security teams to detect. An engineer running an agent on their laptop with a framework like LangChain or Ollama can create a blind spot. That is why visibility, real-time coaching, and clear policy are essential to manage this emerging practice,” advises Canzanese.
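As a rough sense of what “visibility” can mean in practice, the sketch below shows the kind of basic check a security team might script for a developer machine: is a local Ollama server listening on its documented default port (11434), and are common agent frameworks installed? The package list and the check itself are assumptions for illustration, not Netskope’s method or any product feature.

```python
# Illustrative visibility check, not a product feature: flag indicators that a
# machine is running local agent tooling. Package list is an assumed example.
import socket
import importlib.util

def local_port_open(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

AGENT_PACKAGES = ["langchain", "openai", "ollama"]  # illustrative list

findings = []
if local_port_open(11434):  # Ollama's documented default port
    findings.append("local Ollama server listening on 11434")
findings += [f"package installed: {p}" for p in AGENT_PACKAGES
             if importlib.util.find_spec(p) is not None]

for f in findings:
    print("shadow-agent indicator:", f)
```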
With the proliferation of data and data-intensive enterprise tools, the cyber security risks increase exponentially. Every IT leader’s nightmare data breach scenario is made more likely by the use of shadow IT. IBM’s 2025 Cost of a Data Breach report found that almost half of all cyberattacks are linked to shadow IT, resulting in an average cost of over $4.2m.
According to Canzanese, the average organisation is already uploading slightly more than 8 GB of data a month into AI tools, including source code, regulated data, and other commercially sensitive information. “If that flow of data is happening through unauthorised agents as well as unmanaged genAI, the risks multiply quickly,” he says.
With agentic AI’s “blast radius” so much greater than that of existing AI, the potential for cyber security incidents, malicious or otherwise, grows sharply. All of which makes shadow agentic AI an enterprise security risk no business can ignore.
