95% Failure or 95% Untapped Potential?
The MIT study that freaked everyone out and misled the market about AI's progress
Last week, Fortune lit up the feeds with a headline straight out of an AI bear’s dream: “95% of generative AI pilots are failing, says MIT study.” Cue the LinkedIn thinkfluencers, Twitter short sellers, and CNBC chyrons declaring that the AI bubble had popped. Womp womp.
The media response was almost comical. The “study” itself was a handful of interviews, a small survey, and a scan of public press releases — more vibes than science. If a company didn’t issue a glowing AI productivity press release, it was counted as a failure. By that logic, most of corporate America is failing at everything all the time. Yet in this jittery market, the headline was enough to lop billions off AI stocks.
The Market Overreacts (Again)
The media response echoed the reaction to DeepSeek’s launch of its open-source models, when NVIDIA stock instantly cratered. Commentators declared “the end of NVIDIA,” as if improved training efficiency in Beijing erased the global demand for compute. What actually happened was the opposite: DeepSeek proved that the AI revolution was spreading faster and wider, and that demand for compute would be more global, more intense, and more durable than anyone had priced in.
This MIT report is a rerun of the same movie. If you take it at face value, it doesn’t mean AI is useless — it means the Fortune 2000 has barely scratched the surface of adoption. If anything, that suggests future revenue for AI infrastructure and tooling companies is underestimated, not inflated.
Why Pilots Actually Fail
I’ve written before about the gap between AI hype and enterprise reality. GenAI is fantastically easy to demo and painfully hard to make core to a business. A five-minute “copilot” demo feels magical. Embedding that magic into workflows, data systems, and compliance environments? That’s where most organizations hit the wall.
From what I’ve seen across government and commercial deployments, the failure modes aren’t exotic:
Orchestration/data friction — messy, siloed, permissioned data that no LLM wrapper can magically fix (see the sketch after this list).
Leadership buy-in and change gaps — pilots die without executive sponsorship, or employees reject tools they see as “training their replacement.”
Problem misfit — shiny demos without clear KPIs or baselines.
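To make the first failure mode concrete, here's a minimal sketch of what data friction looks like in code. Everything in it is hypothetical (the silos, roles, and documents are invented), but the shape is real: before any model sees enterprise data, someone has to reconcile silos and enforce per-user permissions, and that plumbing, not the model, is where pilots stall.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str     # which silo it lives in (CRM, wiki, HR system...)
    acl: set[str]   # roles allowed to read it
    text: str

# Hypothetical silos. In a real enterprise these are separate systems
# with separate auth, schemas, and freshness guarantees.
SILOS = {
    "crm":  [Document("crm", {"sales"}, "Q3 pipeline: $4.2M weighted...")],
    "wiki": [Document("wiki", {"all"}, "Expense policy: meals under $75...")],
    "hr":   [Document("hr", {"hr"}, "Compensation bands for senior engineers...")],
}

def retrieve(user_roles: set[str]) -> list[Document]:
    """Gather context across silos, dropping anything the caller
    is not entitled to see. The LLM never gets a vote here."""
    return [
        doc
        for docs in SILOS.values()
        for doc in docs
        if "all" in doc.acl or doc.acl & user_roles
    ]

def build_prompt(question: str, user_roles: set[str]) -> str:
    context = retrieve(user_roles)
    if not context:
        # The common enterprise outcome: the model is fine, the data
        # path is empty, and the pilot gets blamed.
        return f"(no accessible context) {question}"
    joined = "\n".join(f"[{d.source}] {d.text}" for d in context)
    return f"Context:\n{joined}\n\nQuestion: {question}"

# The same question yields different context for different roles,
# so answers differ by construction, not by model quality.
print(build_prompt("What are the comp bands?", {"sales"}))
print(build_prompt("What are the comp bands?", {"hr"}))
```

The point of the sketch: the permission and silo logic sits entirely outside the model. A wrapper that skips it either leaks data or returns nothing useful, which is why "just add a copilot" pilots underestimate the integration work.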
The MIT study actually hints at this — employees are adopting AI en masse on their own, while formal company pilots stall. The “shadow AI economy” is real: 90% of employees report using AI daily, even though only 40% of companies have formal subscriptions. The tech is working for individuals, but organizations haven’t yet reorganized to capture that value at scale.
An Adoption Maturity Curve for Organizations
One way to think about this is in stages:
AI-Aware: dabbling in chat interfaces, content generation, and developer tools. Cool for writing emails or brainstorming, but no connection to enterprise data or systems.
AI-Native: moving beyond awareness — embedding coding models, reasoning models, and specialized tools, with security and governance in place.
AI-Core: the endgame — mapping real business processes into agentic workflows. Not just copilots bolted onto old systems, but AI agents orchestrating tasks across systems of record, automating easy functions first, and expanding into more complex missions over time (a minimal sketch follows this list).
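As a rough illustration of the AI-Core stage, here's a hypothetical orchestration loop (the tools, steps, and workflow are all invented for this sketch; a real deployment would sit on an agent framework and live systems of record). The shape to notice: the agent routes each step of a mapped business process to a system-of-record adapter, automates the well-bounded steps, and escalates the rest to a human, with the escalation set shrinking over time.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    complexity: str   # "easy" | "complex"
    tool: str         # which system of record it touches

# Hypothetical adapters standing in for real integrations (ERP, CRM,
# ticketing). Each one performs the step and returns a receipt.
TOOLS: dict[str, Callable[[Step], str]] = {
    "erp":     lambda s: f"ERP record written for '{s.name}'",
    "crm":     lambda s: f"CRM updated for '{s.name}'",
    "tickets": lambda s: f"ticket filed for '{s.name}'",
}

@dataclass
class Orchestrator:
    """Routes each step of a mapped process: automate what is easy
    and well-bounded today, escalate what is not (yet)."""
    audit_log: list[str] = field(default_factory=list)

    def run(self, workflow: list[Step]) -> None:
        for step in workflow:
            if step.complexity == "easy":
                receipt = TOOLS[step.tool](step)  # agent acts directly
                self.audit_log.append(f"AUTO: {receipt}")
            else:
                # Expansion path: a human handles this today; over time
                # more step types migrate into the automated branch.
                self.audit_log.append(f"ESCALATED: '{step.name}' to a human")

invoice_flow = [
    Step("match invoice to PO", "easy", "erp"),
    Step("update customer record", "easy", "crm"),
    Step("resolve pricing dispute", "complex", "tickets"),
]

orch = Orchestrator()
orch.run(invoice_flow)
print("\n".join(orch.audit_log))
```

The design choice that matters: the agent is wired into systems of record and leaves an audit trail rather than being bolted on as a chat window, and "automate easy functions first" shows up as an explicit routing decision, not a slogan.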
Most of corporate America is still in the “awareness” stage. That’s fine — every technological revolution starts with curiosity. But the leap to AI-native and AI-core is where the productivity gains compound and the stock multiples follow.
The Real Headline
So, is 95% of AI “failing”? No. What’s failing is enterprise adoption — a predictable growing pain of a technology that moves faster than organizations can reorganize. The Fortune headline should have read:
“95% of enterprises haven’t even started capturing the value of AI.”
That’s not a bearish signal. It’s the opposite.

Meanwhile, founders need to understand that the deeper their AI is integrated into the workflow, the more durable their ARR. If the AI just provides insights, it may not matter how much efficiency it generates when things go south and the customer needs to conserve cash. If your product is embedded in the workflow, you're much harder to drop.