Executive Trust in AI Agents Just Collapsed: From 43% to 27% in a Year

Executive confidence in AI agents dropped from 43% to 27% in a year. This isn't skepticism about AI capability. It's a rational response to deployments that revealed how little infrastructure exists to make autonomous AI trustworthy.

For most of 2024, the enterprise AI narrative was a sprint toward autonomy. Vendor demos featured agents that could handle customer service inquiries end-to-end, execute financial transactions independently, and orchestrate multi-step workflows without human intervention. The promise was irresistible: deploy AI agents, reduce headcount, increase throughput.

Then the trust numbers came in.

The Capgemini Research Institute’s 2025 study on agentic AI documented a collapse that should have reverberated through every boardroom investing in autonomous systems: only 27% of organizations said they trusted fully autonomous AI agents, down from 43% a year earlier. That’s not a gradual decline. That’s a crisis of confidence compressed into twelve months.

And here’s the part that makes the data genuinely strange: despite this trust collapse, adoption was actually accelerating. Nearly four in ten organizations were piloting or deploying AI agents, and a large majority believed early movers would gain a competitive advantage. Enterprises were simultaneously losing faith in autonomous agents and pouring more money into them.

The trust-investment disconnect

The Capgemini data wasn’t an outlier. Multiple research efforts published in late 2024 and early 2025 documented the same pattern from different angles.

A Harvard Business Review Analytic Services report, based on a survey of 603 business and technology leaders, found that only 6% of companies fully trusted AI agents to autonomously run their core business processes. Six percent. Meanwhile, 43% said they trusted agents only with limited or routine operational tasks, and 39% restricted agents to supervised use cases or noncore processes.

The barriers weren’t abstract. Thirty-one percent cited cybersecurity and privacy concerns as the main challenge, followed by anxiety over data output quality at 23%, unready business processes at 22%, and technology infrastructure limitations at 22%.

PwC’s AI Agent Survey of 308 US business executives found a revealing distribution of trust. Respondents expressed the highest trust in agents performing data analysis (38%), performance improvement (35%), and daily collaboration with humans (31%). But trust dropped sharply for financial transactions (20%) and autonomous employee interactions (22%). Twenty-eight percent ranked lack of trust as a top-three challenge for their AI initiatives.

The pattern is clear: executives trust agents to analyze. They don’t trust agents to act. The gap between those two things is where billions of dollars in enterprise AI investment is currently stuck.

Why trust collapsed so fast

The speed of the decline matters. Going from 43% trust to 27% in a year doesn’t happen because of gradual disillusionment. It happens because specific events shattered assumptions.

Several factors converged in 2024 to undermine executive confidence. High-profile failures in AI agent deployments accumulated into a pattern that couldn't be dismissed as edge cases: chatbots making unauthorized promises, agents executing incorrect transactions, automated systems producing confidently wrong outputs.

The transparency problem amplified every failure. As CIO.com reported, the fundamental irony facing AI agents is that they need decision-making autonomy to provide full value, but the reasoning behind their actions remains invisible to deploying organizations. When an agent makes a good decision, nobody asks how. When an agent makes a bad decision, nobody can explain why.

Rivka Deasey-Weinstein, an AI risk and ethics specialist, captured the dilemma in her comments to CIO.com: “Whilst agents are amazing because they take the human out of the loop, this also makes them hugely dangerous. We’re selling the prospects of autonomous agents when what we actually have are disasters waiting to happen without stringent guardrails.”

Her recommendation was counterintuitive for an industry focused on expanding agent capabilities: “The most trustworthy agents are boringly narrow in their ability. The broader and freer rein the agent has, the more that can go wrong with the output.”

Boring and narrow. Not exactly the pitch that sells board-level AI investment.

The autonomy-without-accountability gap

What the trust data really exposed was a governance deficit. Organizations deployed agents with broad autonomous capabilities but without the corresponding accountability frameworks to manage them.

The IBM Institute for Business Value survey of 800 C-suite executives found that 24% said AI agents were already taking independent action in their organizations, with 67% expecting that to be the case by 2027. But 78% agreed that achieving maximum benefit from agentic AI requires a new operating model, an acknowledgment that the current organizational structures weren’t designed to manage autonomous decision-makers.

The gap between deployment velocity and governance readiness explains the trust collapse better than any single failure incident. Organizations raced to deploy agents in customer-facing and operational roles without building the infrastructure to monitor what those agents were doing, audit their decisions, or intervene when things went wrong.

I’ve spent years building AI-powered support systems that serve over 170,000 users, and the governance question is one I wrestle with daily. The technical challenge of building an agent that can make autonomous decisions is significant but solvable. The organizational challenge of defining who is accountable when that agent makes a wrong decision, and building the monitoring infrastructure to detect wrong decisions in real time, is harder. And it’s the part that most organizations skip because it doesn’t have a demo.
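To make that concrete, here is a minimal sketch of the decision record such monitoring starts from. The schema, field names, and review threshold are illustrative assumptions, not a description of any particular system, including mine.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecision:
    agent_id: str
    action: str        # what the agent did
    inputs: dict       # the data it acted on
    rationale: str     # the agent's own stated reasoning
    reversible: bool   # can this be undone automatically?
    confidence: float  # model-reported confidence, 0.0 to 1.0
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

REVIEW_THRESHOLD = 0.8  # illustrative; any real deployment would tune this

def record(decision: AgentDecision, review_queue: list) -> None:
    """Persist every decision; flag risky ones for a human in real time."""
    if decision.confidence < REVIEW_THRESHOLD or not decision.reversible:
        review_queue.append(decision)  # a human sees it before it compounds
```

None of this is sophisticated. That's the point: the hard part is making it mandatory, not making it work.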

The organizations that rebuilt trust

Not all the data pointed downward. Buried in the surveys were signals about what separates organizations rebuilding trust from those watching it erode.

The IBM research identified a clear pattern: transformation-driven organizations, those treating agentic AI as a catalyst for new operating models rather than an efficiency layer on existing processes, were pulling ahead. The critical difference wasn’t technology sophistication. It was organizational design.

The HBR report found that organizations were turning to “enterprise orchestration”: connecting systems, data, and applications into a governed layer that could safely power agents at scale. Eight percent had already implemented enterprise orchestration for agentic AI, and 74% were either working on it or planning to do so.

Meanwhile, a G2 survey from October 2024 showed the granularity of where trust was forming. Organizations were comfortable giving agents autonomy in narrow, well-defined operational tasks: auto-blocking suspicious IPs, rolling back failed software deployments, flagging anomalies in data pipelines. These are tasks with clear success criteria, limited blast radius, and easy reversibility.
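What does limited blast radius and easy reversibility look like in code? A minimal sketch for the auto-blocking example, assuming a hypothetical firewall API: block_ip and unblock_ip are stand-ins for whatever an organization actually runs, and the 15-minute TTL is an illustrative choice, not a recommendation.

```python
import threading

BLOCK_TTL_SECONDS = 15 * 60  # a wrong block undoes itself in 15 minutes

def block_ip(ip: str) -> None:
    print(f"BLOCK   {ip}")   # placeholder for a real firewall call

def unblock_ip(ip: str) -> None:
    print(f"UNBLOCK {ip}")   # placeholder for a real firewall call

def auto_block(ip: str, reason: str) -> None:
    """Block an IP, record why, and schedule an automatic rollback."""
    block_ip(ip)
    print(f"audit: blocked {ip} because {reason}")
    timer = threading.Timer(BLOCK_TTL_SECONDS, unblock_ip, args=[ip])
    timer.daemon = True   # don't keep the process alive just for the rollback
    timer.start()         # the block expires unless a human renews it
```

The agent can act instantly, but its worst-case mistake is fifteen minutes of blocked traffic, not a standing change nobody remembers making.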

The common denominator in successful agent deployments wasn’t more sophisticated AI. It was more constrained AI. Narrow scope. Clear accountability. Human oversight on anything that couldn’t be trivially reversed.

The less-autonomous path forward

The trust collapse of 2024 is producing a counterintuitive realignment. The enterprises that are rebuilding confidence in AI agents are doing it by reducing autonomy, not increasing it.

This might look like retreat. It’s actually strategic discipline. An agent constrained to a narrow task domain with clear monitoring, defined rollback procedures, and human approval for high-stakes actions is an agent that earns trust through demonstrated reliability. An agent given broad autonomy across multiple systems without those controls is an agent waiting to produce the next trust-shattering failure.
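A sketch of that last control, human approval for high-stakes actions, assuming a simple risk classification: request_approval is a stand-in for a real ticketing or chat integration, and the action categories are hypothetical.

```python
HIGH_STAKES = {"refund", "account_change", "external_payment"}

def request_approval(action: str, details: dict) -> bool:
    """Stand-in: in production this would page a human via chat or a queue."""
    answer = input(f"Approve {action} {details}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, details: dict, run) -> None:
    """Run low-stakes actions directly; gate high-stakes ones on a human."""
    if action in HIGH_STAKES and not request_approval(action, details):
        print(f"audit: {action} held pending human approval")
        return
    run(details)
    print(f"audit: executed {action} with {details}")
```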

As Deasey-Weinstein put it: “This is neither saleable nor attractive to the ever-demanding consumer that wants more work done for less time and skill.” She’s right. Narrowly scoped agents with extensive guardrails don’t make exciting vendor presentations. But they’re the agents that survive contact with production.

The practical path forward for any organization grappling with the trust deficit starts with an honest audit. Review your AI systems’ autonomy levels and reduce where trust hasn’t been earned. Implement explainability dashboards that show what the agent did and why, not for regulators, but for the operational teams that need to understand and trust the system. Create human-in-the-loop checkpoints for high-stakes decisions. Report AI system performance honestly to leadership, including failure rates and near-miss incidents, not just success metrics. And build trust incrementally: start with AI-assisted workflows, graduate to AI-augmented workflows, and only advance to AI-automated workflows after the assisted and augmented phases have demonstrated reliability.
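One way to keep that progression honest is to encode it as an explicit, auditable setting rather than an implicit deployment choice. The levels below mirror the assisted, augmented, automated phases; the promotion thresholds are illustrative assumptions any real deployment would tune.

```python
from enum import Enum

class AutonomyLevel(Enum):
    ASSISTED = 1    # agent drafts, human executes
    AUGMENTED = 2   # agent executes, human reviews every action
    AUTOMATED = 3   # agent executes, humans audit samples and exceptions

def may_promote(level: AutonomyLevel, actions: int, error_rate: float) -> bool:
    """Promote only after demonstrated reliability at the current level."""
    return (level != AutonomyLevel.AUTOMATED
            and actions >= 1000        # enough volume to mean something
            and error_rate < 0.01)     # measured, not assumed
```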

The race for autonomy is over. The race for trust has begun.

The numbers are unambiguous. Executive confidence in autonomous AI agents dropped by more than a third in 2024. Adoption continued despite, or perhaps because of, a competitive dynamic that penalized patience. The organizations that rushed to deploy broad autonomy are now dealing with the consequences of that speed.

The next chapter of enterprise AI won’t be defined by which organization has the most autonomous agents. It will be defined by which organization has the most trusted ones. And trust, unlike autonomy, can’t be deployed by updating a configuration file. It has to be earned through consistent, transparent, accountable performance over time.

The 43% to 27% decline wasn’t a market failure. It was a correction. The market briefly mistook capability for readiness. Now it’s learning the difference.