For all the AI agents now flooding workplaces, and all the optimism about their potential for businesses, a new Harvard Business Review report has sent us crashing back down to earth. According to its findings, only 6% of companies fully trust AI agents to run their core business processes autonomously.
That number may seem startlingly low, but it arguably shouldn’t be surprising. Most enterprises don’t lack AI capabilities. Instead, they lack the guardrails and shared context that make AI agents trustworthy.
The flaw in single-player AI
The problem is not that AI agents are incapable. It is that they are often deployed as black boxes, acting on prompts written by a single individual, in a private chat, disconnected from the wider business. This is why autonomy is the wrong goal. The real unlock is human-AI collaboration, where agents behave like teammates, inherit the right permissions, and stay on rails.
Most current AI tools are optimised for ‘single-player mode’, with one person interacting with one agent. The outputs can be impressive, but this way of working can also produce AI ‘slop’ that lacks the accuracy, relevance or shared understanding needed to move a team forward. When agents operate independently, each generating their own outputs without reference to one another, there’s a risk of duplication or, worse, contradiction. When this happens, AI agents are not streamlining processes – they’re slowing them down.
In larger organisations, the reality is that work is not advanced by isolated activity. It moves through coordination, with shared plans, clear ownership, agreed priorities and visible progress. That is why the next evolution of these tools should be towards ‘multiplayer’ formats.
When agents work inside shared projects and workflows, multiple stakeholders can see their plans, coach them and adjust guardrails in real time. This gets us closer to a future where the organisation is ‘self-driving’, but humans always stay at the wheel for strategy and trade-offs.
The need for stronger guardrails
That doesn’t mean we shouldn’t trust AI agents. We just need to be smarter about how we use them, choosing context, checkpoints and controls over autonomy.
Trustworthy AI agents should behave like teammates, and that starts with permissions. Human employees operate within defined access boundaries and AI agents should inherit the same role-based controls. They should only see what their human counterparts can see, and only act where they are authorised to act.
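To make that concrete, here is a minimal sketch of role inheritance, assuming a simple permission-string model; the `Role` and `Agent` classes and the `crm:*` permission names are hypothetical, not any particular platform’s API.

```python
from dataclasses import dataclass

# Illustrative role-based access control: the agent holds no permissions
# of its own; it inherits whatever the human role it serves is allowed.
@dataclass(frozen=True)
class Role:
    name: str
    permissions: frozenset  # e.g. frozenset({"crm:read", "crm:update_own_leads"})

@dataclass
class Agent:
    name: str
    role: Role  # the agent's reach is defined by the role, not by itself

    def can(self, permission: str) -> bool:
        return permission in self.role.permissions

    def act(self, permission: str, action) -> None:
        # Refuse anything outside the inherited role's boundary.
        if not self.can(permission):
            raise PermissionError(f"{self.name} lacks '{permission}'")
        action()

sales_rep = Role("sales_rep", frozenset({"crm:read", "crm:update_own_leads"}))
agent = Agent("lead-enrichment-agent", sales_rep)

agent.act("crm:read", lambda: print("reading lead record"))  # allowed
# agent.act("crm:delete", lambda: None)  # raises PermissionError
```

The shape of the control is the point: the agent’s reach is defined entirely by the role it serves, never by its own configuration.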
Transparency is equally important. Reliable teammates do not work in the dark. They write down their plans, show their progress and invite feedback. AI agents should do the same, operating inside shared task and project structures where their actions are visible to – and reviewable by – multiple stakeholders, not hidden behind a single prompt history.
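One way to picture this is a shared task log that an agent must write to before acting. The `SharedTask` structure below is an illustrative sketch, not a reference to any real product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative shared task log: the agent writes each intended step down
# before acting, so any stakeholder on the project can review or veto it.
@dataclass
class PlanStep:
    description: str
    status: str = "proposed"  # proposed -> approved -> done
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class SharedTask:
    def __init__(self, title: str):
        self.title = title
        self.steps: list[PlanStep] = []

    def propose(self, description: str) -> PlanStep:
        step = PlanStep(description)
        self.steps.append(step)  # visible to the whole project, not one chat
        return step

    def review(self) -> None:
        for step in self.steps:
            print(f"[{step.status}] {step.description}")

task = SharedTask("Q3 churn analysis")
task.propose("Pull churned accounts from the last 90 days")
task.propose("Draft a summary for the revenue team")
task.review()  # stakeholders see the plan before anything runs
```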
Perhaps most importantly, however, agents need context. In human teams, context is what tells people what matters. Who is involved? What does success look like? How does this piece of work connect to broader company goals? Without that shared understanding, even highly capable individuals struggle to make good decisions. AI agents are no different.
Providing context requires organisations to invest in structure. Clear tasks, accountable owners, defined projects and explicit goals are the rails that enable agents to operate safely and effectively at scale. When agents run on rails within structured workflows and against clear checkpoints, leaders can see where they are adding value and where they need adjustment.
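As a rough sketch of what ‘running on rails’ might look like, the loop below lets low-risk steps proceed automatically while pausing at declared checkpoints for human sign-off; the step names and `approve` callback are invented for illustration.

```python
from typing import Callable

# Illustrative checkpointed run: low-risk steps proceed automatically,
# while high-risk steps pause for an explicit human approval.
Step = tuple[str, Callable[[], None], bool]  # (name, action, needs_signoff)

def run_with_checkpoints(steps: list[Step], approve: Callable[[str], bool]) -> None:
    for name, action, needs_signoff in steps:
        if needs_signoff and not approve(name):
            print(f"Halted at checkpoint: {name}")
            return
        action()
        print(f"Completed: {name}")

steps: list[Step] = [
    ("draft outreach emails", lambda: None, False),  # low risk: no gate
    ("send to 500 customers", lambda: None, True),   # checkpoint: needs sign-off
]
run_with_checkpoints(steps, approve=lambda name: input(f"Approve '{name}'? [y/N] ") == "y")
```

The design choice worth noting is that the checkpoint is declared in the workflow itself, so leaders can tighten or loosen the gates without rewriting the agent.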
The most valuable agents don’t just execute tasks; they learn from every interaction. Over time, they build up a shared memory of how your organisation actually works, while still respecting the same access controls and policies as any employee.
That combination of context, checkpoints and controls is what meaningfully improves quality of execution—shifting trust from a mere 6% to something much higher.
Creating a more collaborative environment
Moving forward, the real opportunity is an enterprise where agents can discover one another, collaborate across systems and coordinate work alongside humans. Over time, agents will increasingly operate across teams and functions—following work from goal‑setting through execution, surfacing cross‑project risks, and coordinating the routine hand‑offs that slow organisations down today.
The AI landscape is advancing at a breakneck pace. Rather than betting the business on a single, proprietary model that may fall behind quickly, enterprises will need an open, interoperable layer that can plug into best‑in‑class reasoning capabilities as they emerge, while keeping their own data, policies and governance consistent across providers. That flexibility is central to trust: it lets organisations upgrade their intelligence without rewriting their guardrails every time the underlying technology shifts.
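A loose sketch of such a layer, assuming a minimal provider interface: redaction and audit policy live in one wrapper, while the underlying model is swappable. `ReasoningProvider` and `StubProvider` are hypothetical names, not a real SDK.

```python
from typing import Callable, Protocol

class ReasoningProvider(Protocol):
    """Anything that can answer a prompt; the provider is swappable."""
    def complete(self, prompt: str) -> str: ...

def governed_call(provider: ReasoningProvider, prompt: str,
                  redact: Callable[[str], str],
                  audit: Callable[[str], None]) -> str:
    safe_prompt = redact(prompt)  # the same data policy, whatever the model
    audit(safe_prompt)            # the same audit trail, whatever the model
    return provider.complete(safe_prompt)

class StubProvider:
    def complete(self, prompt: str) -> str:
        return f"stub answer to: {prompt}"

print(governed_call(
    StubProvider(),
    "Summarise contract 123-ACME",
    redact=lambda p: p.replace("ACME", "[client]"),
    audit=lambda p: print(f"audit: {p}"),
))
```

Swapping in a better model then means changing one provider object, not re-auditing the guardrails.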
For enterprise leaders, the question is clear: are we building the conditions that make trust possible? Get governance right, and agents stop feeling like opaque bots and start acting like reliable collaborators that can give organisations a decisive advantage.
