42% of Enterprises Abandoned Most AI Initiatives. Here’s What the Survivors Did Differently

Forty-two percent of enterprises abandoned the majority of their AI initiatives. The survivors share a pattern: they treated AI as an infrastructure investment, not a project. The difference is organizational, not technical.

The most important number in enterprise AI in 2024 isn’t the size of the latest funding round or the parameter count of the newest model. It’s 42%.

That’s the share of companies that abandoned the majority of their AI initiatives before they reached production, according to S&P Global Market Intelligence’s 2025 survey of more than 1,000 enterprises across North America and Europe. The previous year, that number was 17%.

Let that sink in. The abandonment rate didn’t increase by 50%. It didn’t just double. It jumped 147%, during a period when AI investment grew by roughly 78% according to McKinsey’s annual survey. More money in. More failures out. The enterprise AI market in 2024 had the curious distinction of setting records in investment and waste at the same time.

The average organization scrapped 46% of its AI proof-of-concepts before deployment. Nearly half of everything enterprises tried didn’t make it to production. Companies cited cost, data privacy, and security risks as the top obstacles. But the real failure pattern runs deeper than any of those surface explanations, and the organizations that succeeded share specific structural differences that their failed counterparts lacked.

The investment-abandonment paradox

The simultaneous acceleration of AI investment and AI abandonment is not a contradiction. It’s a predictable consequence of how enterprises typically respond to transformative technology: they fund broadly and evaluate late.

Consider the pattern. From late 2022 through 2023, generative AI captured executive attention in a way few technologies have. Board decks featured AI. Strategy documents referenced AI. Budget cycles allocated money to AI. The funding was driven by a combination of genuine excitement, competitive anxiety (“our competitors are investing”), and FOMO actively stoked by a vendor ecosystem selling AI as an existential imperative.

The funding created pilots. Lots of pilots. S&P Global’s data showed that 60% of organizations investing in AI had implemented generative AI technologies, a 13-percentage-point increase from prior forecasts. The technology was being adopted faster than any previous enterprise technology platform.

But pilots aren’t products. A proof-of-concept that works on curated data, in a sandbox environment, with dedicated engineering attention, demonstrates capability. It does not demonstrate scalability, governance readiness, integration viability, or business value. The gap between “this demo is impressive” and “this is generating measurable P&L impact” is where 46% of AI initiatives went to die.

MIT’s Project NANDA provided the most rigorous quantification of this gap. Their research found that only 5% of enterprise AI pilots demonstrated measurable profit-and-loss impact. Not 50%. Not 25%. Five percent. The other 95% either showed no measurable business impact or were abandoned before impact could be assessed.

That finding should reframe every board conversation about AI investment. The question isn’t whether AI works (it does). The question is whether your organization has the structural capability to turn working AI into business value. For 95% of pilot programs, the answer was no.

Three patterns that separate survivors from casualties

The organizations that successfully moved AI from pilot to production and measured genuine business impact weren’t using different models or different cloud providers. They made different structural decisions at the project level that determined outcomes before the first line of code was written.

Pattern one: buy instead of build. The most counterintuitive finding from the combined research is that enterprises building custom AI solutions failed at dramatically higher rates than those deploying pre-built, vertically specialized solutions. The ratio was stark: approximately 67% success rate for “buy” approaches versus 22% for “build” approaches.

This contradicts the prevailing narrative in enterprise AI, which emphasizes differentiation through custom models and proprietary training data. The reality is that most enterprise AI use cases (customer service optimization, demand forecasting, document processing, code generation) are not unique enough to justify custom development. The workflows may be specific. The underlying AI capability is commodity.

The organizations that succeeded with a build approach were those with genuinely unique data assets or domain-specific requirements that no vendor could address. They were the exception, not the rule. For the majority, the most effective strategy was to identify specialized vertical solutions that addressed their specific workflow and integrate them, rather than attempting to build a horizontal AI platform from scratch.

Forrester’s analysis reached a similar conclusion through different methodology, predicting that 75% of firms attempting to build their own agentic AI architectures would fail. The systems were, in Forrester’s assessment, “too convoluted,” requiring diverse model orchestration, sophisticated retrieval-augmented generation stacks, advanced data architectures, and niche domain expertise that most enterprises simply don’t have in-house.

Pattern two: data readiness before models. The successful 5% invested disproportionately in data infrastructure before deploying AI models. The consistent pattern was spending 50-70% of the project timeline and budget on data readiness (extraction, normalization, governance, quality monitoring, and retention controls) before touching the model itself.

This is the opposite of what most enterprise AI projects do. The typical project starts with the model (usually GPT-4, because it’s the default), builds a demo on clean sample data, gets executive approval based on the demo, then encounters production data (messy, inconsistent, siloed, poorly documented) and stalls.

Informatica’s CDO Insights 2025 survey identified the top obstacles to AI success: data quality and readiness (43%), lack of technical maturity (43%), and shortage of skills (35%). Notice what’s not on the list: model capability. The models work. The data doesn’t.

The organizations that succeeded treated data readiness as the primary deliverable of the first half of the project, with model deployment as the deliverable of the second half. They asked “is our data ready for AI?” before they asked “which AI model should we use?” This inversion of the typical sequence was the single largest structural differentiator.
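
What does treating data readiness as the first-half deliverable look like in practice? Here is one hedged illustration: a readiness gate that has to pass before any model work is approved. The specific checks and thresholds below are assumptions for the sketch, not a standard.

```python
# Illustrative data-readiness gate, run before any model work is approved.
# The checks and thresholds are assumptions for this sketch, not a standard.
from dataclasses import dataclass
import pandas as pd


@dataclass
class ReadinessReport:
    null_rate: float          # share of missing values across required columns
    duplicate_rate: float     # share of rows duplicated on the business key
    missing_columns: list     # required columns absent from the extract
    stale: bool               # newest record older than the freshness window
    passed: bool              # overall go / no-go for model work


def readiness_gate(df: pd.DataFrame,
                   required_columns: list,
                   key_columns: list,
                   timestamp_column: str,
                   max_null_rate: float = 0.05,
                   max_duplicate_rate: float = 0.01,
                   max_staleness_days: int = 7) -> ReadinessReport:
    missing = [c for c in required_columns if c not in df.columns]
    present = [c for c in required_columns if c in df.columns]

    # Missing-value rate averaged over the required columns that exist.
    null_rate = float(df[present].isna().mean().mean()) if present else 1.0

    # Duplicate rate on the business key, if the key columns are present.
    if len(df) and all(k in df.columns for k in key_columns):
        duplicate_rate = float(df.duplicated(subset=key_columns).mean())
    else:
        duplicate_rate = 1.0

    # Freshness: is the newest record inside the acceptable window?
    if timestamp_column in df.columns and len(df):
        newest = pd.to_datetime(df[timestamp_column]).max()
        stale = (pd.Timestamp.now() - newest).days > max_staleness_days
    else:
        stale = True

    passed = (not missing
              and null_rate <= max_null_rate
              and duplicate_rate <= max_duplicate_rate
              and not stale)
    return ReadinessReport(null_rate, duplicate_rate, missing, stale, passed)
```

If the gate fails, the budget stays in the data phase and the model conversation waits, which is exactly the inversion the successful organizations made.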

Pattern three: one workflow, proved, then expanded. The surviving AI initiatives focused on a single, well-defined business workflow and proved measurable value before expanding. The failed initiatives attempted to build horizontal AI platforms that could serve multiple use cases simultaneously.

The logic of the platform approach is seductive: invest once, reuse everywhere. The reality is that AI implementations are fiercely context-dependent. The data pipelines, integration points, governance requirements, and success metrics for an AI-powered customer service workflow are fundamentally different from those for an AI-powered supply chain optimization workflow. Attempting to serve both from a shared platform means optimizing for neither.

The successful organizations picked one workflow where they could measure impact directly (cost reduction, revenue increase, cycle time improvement, error rate reduction) and focused relentlessly on that single use case until it was in production, governed, and demonstrably valuable. Only then did they extract generalizable lessons and apply them to the next workflow.

This approach is slower. It produces fewer impressive demo days. It generates less internal excitement. But it produces AI systems that actually work in production and generate measurable business value, which is what the 42% failed to achieve.

The “AI fatigue” factor

There’s a human dimension to the abandonment rate that the statistics don’t fully capture, but that Fortune’s reporting on the S&P data highlighted: AI fatigue.

The pressure to demonstrate AI progress created a cycle of pilot launches, demo days, executive presentations, and incremental improvements that consumed engineering bandwidth without producing business outcomes. Quantum Workplace research found that employees who considered themselves frequent AI users reported higher levels of burnout (45%) compared to infrequent users (38%) or non-users (35%).

The burnout wasn’t from the AI itself. It was from the organizational churn: the constant re-scoping, re-prioritization, and context-switching that characterize pilot-heavy, production-light AI programs. Teams were launching new experiments faster than they could evaluate old ones. The organizational metabolism was optimized for initiation, not completion.

Eoin Hinchy, co-founder and CEO of workflow automation company Tines, described the dynamic from his own experience: “There were certainly moments when we felt like we’d cracked it and, yes, this is it. This is the feature that we need. This is going to be the big-step change, only for us to realize, actually, no, we need to go back to the drawing board.” Tines had 70 failures with an AI initiative over the course of a year before landing on a successful iteration.

Seventy failures isn’t unusual. What’s unusual is that Tines persisted through them with focused discipline rather than pivoting to the next shiny use case. Most enterprise AI programs don’t have that patience.

The practitioner’s honest accounting

I’ve spent 17 years building enterprise systems, and I’ve watched this pattern play out before. Not with AI specifically, but with every major technology wave: cloud, mobile, big data, IoT. The curve is always the same. Excitement creates funding. Funding creates pilots. Pilots create complexity. Complexity creates failure. Failure creates disillusionment. And then, eventually, the organizations that survived the disillusionment emerge with genuine, production-grade capabilities that actually deliver value.

What’s different about AI is the speed and the stakes. The funding curve was steeper, the pilot proliferation was faster, and the gap between demo capability and production capability is wider than anything I’ve seen in previous technology cycles.

Through my work on the ACM AISec Program Committee and in evaluating hundreds of industry award submissions annually, I see both the successes and the failures up close. The failures share a common DNA: they started with the technology, asked “what can AI do?”, and went looking for problems. The successes started with a business problem, asked “does AI solve this better than the alternative?”, and only proceeded when the answer was clearly yes.

That distinction, technology-first versus problem-first, sounds simplistic. In practice, it determines everything about how the project is scoped, staffed, funded, and measured. And it’s the distinction that the 42% got wrong.

What your AI portfolio review should look like

If your organization is among the 58% that haven’t abandoned most of their AI initiatives, congratulations. But the work isn’t done. The S&P data suggests that many organizations haven’t abandoned their initiatives yet; they’ve simply not evaluated them honestly.

Here’s the portfolio review that every CIO and CTO should conduct quarterly.

For every active AI initiative, answer five questions. What specific business metric does this initiative improve? Who is measuring that metric, and how often? What is the 6-month kill criterion, the threshold below which this initiative will be terminated? What percentage of the budget has been spent on data readiness versus model development? Is this initiative building something custom, or deploying something that exists?

Calculate your own abandonment rate. Divide the number of AI projects currently in production and generating measurable value by the number started in the last 24 months, then subtract the result from one. If your number is worse than the S&P average (46% abandoned before production), you have a portfolio problem, not an AI problem.
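
To make the review mechanical rather than aspirational, here is a minimal sketch: the five questions captured as a record per initiative, a filter that flags initiatives missing an owner, a kill criterion, or data-readiness spend, and the abandonment-rate arithmetic from the paragraph above. The field names, the 50% data-readiness threshold, and the example numbers are illustrative assumptions, not a prescribed tool.

```python
# Illustrative quarterly portfolio review: the five questions as a record,
# plus the abandonment-rate arithmetic. Names and thresholds are assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AIInitiative:
    name: str
    business_metric: str                 # which metric this initiative improves
    metric_owner: Optional[str]          # who measures it, and how often
    kill_criterion: Optional[str]        # the 6-month termination threshold
    data_readiness_budget_share: float   # 0.0-1.0 of spend on data readiness
    custom_build: bool                   # building custom vs. deploying existing
    in_production_with_value: bool       # measurable value in production today


def flag_at_risk(portfolio: List[AIInitiative]) -> List[AIInitiative]:
    """Initiatives missing an owner, a kill criterion, or data-readiness spend."""
    return [i for i in portfolio
            if i.metric_owner is None
            or i.kill_criterion is None
            or i.data_readiness_budget_share < 0.5]


def abandonment_rate(started_last_24_months: int,
                     in_production_with_value: int) -> float:
    """Share of started projects not (yet) delivering measurable value."""
    if started_last_24_months == 0:
        return 0.0
    return 1.0 - in_production_with_value / started_last_24_months


# Worked example: 40 projects started, 12 in production and measured.
print(round(abandonment_rate(40, 12), 2))  # 0.7
```

The value isn’t the code; it’s that every initiative has to produce an answer for every field before the next budget cycle.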

Reverse your build-versus-buy ratio. If more than 30% of your AI initiatives involve custom model development, audit each one to determine whether a commercially available vertical solution could achieve 80% of the outcome at 20% of the cost. For most use cases, it can.

Front-load data readiness investment. For every new AI initiative, require that 50% of the first phase budget be spent on data quality, pipeline construction, and governance before any model training begins. This will slow down your pilot launch cadence. That’s the point.

Kill horizontally, invest vertically. If you have a “platform” AI team building infrastructure that multiple business units will someday use, evaluate whether that platform is delivering value to any business unit today. If it isn’t, redirect the investment to vertical initiatives with near-term, measurable business outcomes.

The coming bifurcation

The 42% abandonment rate is not the end of the story. It’s the beginning of a market bifurcation that will define enterprise AI for the next several years.

On one side: organizations that learned from the failure wave, restructured their AI investments around the survivor patterns (buy over build, data readiness first, focused workflow execution) and are building genuine, production-grade AI capabilities that generate measurable returns.

On the other side: organizations that interpreted the failure wave as evidence that “AI doesn’t work” and pulled back investment, ceding competitive advantage to those who persisted with discipline.

The technology works. It demonstrably, measurably works, for the 5% of pilots that were structured correctly. The 95% failure rate is not a verdict on AI capability. It’s a verdict on enterprise execution. The organizations that understand that distinction will survive the trough and emerge stronger. The ones that don’t will wonder, in three years, how their competitors got so far ahead.

The S&P data says 42% have already walked away. The question for the remaining 58% is whether they’re in the 5% that’s building something real, or the 53% that just hasn’t given up yet.