6 minute read

The “95% of generative AI pilots are failing” headline showed up everywhere this week. I wanted to understand what was behind the claim, so I listened to the AI Daily Brief episode that broke down the underlying MIT study. If you’re going to listen to anything about AI this week, I’d recommend this episode. It gives you the details on the study, and it also does an excellent job highlighting real ways that generative AI implementations fail. And despite the study’s shortcomings, there are lots of things you need to get right as an organization to truly get value from AI.

The Study’s Methodological Issues

The AI Daily Brief episode breaks down several issues with the study:

  • Small, opaque sample: The research is based on 52 interviews and ~150 survey responses, but we don’t know who these executives were or what companies they represented.

  • Success defined by public announcements: The study defines “success” as companies publicly announcing P&L impact. If there’s no press release claiming productivity gains, it counts as failure. This misses organizations that see value but don’t publicize it.

  • Skewed sample: They claim 50-70% of AI spending goes to sales and marketing, which doesn’t match what I’ve seen in this space. Key use cases like coding assistants, agentic workflows, report generation, and research are largely missing from the analysis.

There are other issues the episode identified, but these are the ones that resonated with me the most. Based on what I know now, it’s clear the headline took the study completely out of context. It’s a fairly surface-level look at what’s actually occurring, without much rigor around the methodology.

What the Study Actually Shows

However, the study does capture some patterns worth understanding.

The most telling finding: while only 40% of companies bought LLM subscriptions, 90% of employees regularly use AI tools through personal accounts. This gap shows that GenAI adoption isn’t as organized as it should be. I’m 100% aligned with organic experimentation driving real use cases - people figuring out what works by trying things. But you need follow-through on that experimentation to align the entire organization around adoption.

A key insight is that individual productivity gains don’t automatically translate to organizational P&L impact. When employees become more efficient but organizations don’t reallocate resources or increase output, those gains stay invisible to finance teams and are hard to truly measure. So getting the operational side of adoption ‘right’ is the critical part of GenAI adoption - which is why I really loved this podcast episode.

The Organizational Barriers Are Real (And Not Specific to AI)

The podcast goes through real barriers teams experience that drive implementation failure - and it’s coming from a company that works with thousands of clients to help them implement AI. These failure modes aren’t reasons to avoid prototypes or stop trying this technology; they’re a warning list of what not to do if you want to get value out of these tools.

I’m super bullish on generative AI solving major problems that big companies have - these tools can help you do things that were - quite simply - impossible before. I’ve already seen them drive major transformations on teams. But like any change, you have to manage it well. Many of these barriers aren’t specific to AI projects - they apply to organizational change in general. The episode covered far more reasons than I’m highlighting here - I picked the ones that resonated most around organizational and operational challenges:

Leadership Says They Want It But Won’t Fund It

Executives get excited about AI and tell someone to run a pilot, then don’t give them the resources to succeed. Without real executive ownership - someone who can clear blockers and commit funding - these efforts fizzle out. The organizations where this works have CEO-level involvement, not just verbal support.

People Think They’re Training Their Replacement

You can see the tension when leadership talks about efficiency gains while employees wonder what happens to their jobs. Adoption improves dramatically when you explain how humans and AI will work together. People need to understand what their role looks like after implementation.

People Default to the Old Way of Working

People stick with familiar workflows even when better tools are available. You can mandate new tools all you want, but if people don’t see the personal benefit, they’ll revert to what they know. There’s a paradox where individuals adopt better tools for personal use while resisting organizational changes.

No Real Change Management

Pilots assume people will just figure it out, but you need training, internal champions, and updated procedures. Most pilot budgets ignore the human infrastructure needed to support new workflows. Without dedicated change management, organizations snap back to the old way of doing things.

No Plan for What Happens After the Pilot

Teams start pilots without defining what success looks like or how to scale it. Even when results are positive, there’s no owner, platform, or budget to take it to production. Good pilots die in the gap between “this works” and “this is how we work now.”

“It Feels Faster” Isn’t a Good Metric

When you don’t design concrete KPIs, you end up with anecdotes and screenshots instead of defensible impact. Teams say things “feel faster” without any before/after measurement. Without credible measurement, you can’t justify budget or overcome skepticism.
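To show what a concrete before/after measurement can look like, here’s a minimal sketch. The KPI (cycle time per ticket) and all the numbers are hypothetical - the point is simply that a baseline plus pilot measurement gives you a defensible percentage instead of an anecdote:

```python
from statistics import mean

# Hypothetical cycle times (hours per ticket), measured before and during the pilot.
baseline_hours = [12.0, 9.5, 14.0, 11.0, 10.5]  # pre-pilot baseline
pilot_hours = [7.0, 8.0, 6.5, 9.0, 7.5]         # same workflow with the AI tool

# Relative improvement against the baseline - a number finance can actually use.
improvement = (mean(baseline_hours) - mean(pilot_hours)) / mean(baseline_hours)
print(f"Cycle time reduced by {improvement:.0%}")  # → Cycle time reduced by 33%
```

Even a rough baseline like this beats “it feels faster” - it gives skeptics something specific to argue with, and it gives you something specific to defend.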

Individual Gains Don’t Become Organizational Gains

If people do their work 40% faster but you don’t reallocate resources or increase output, you won’t see the impact. That productivity gain disappears unless leadership makes operational changes. Proving individual value is different from proving organizational value.
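The arithmetic behind this is worth making explicit. In this hypothetical example (team size, cost, and the 40% speedup are all illustrative), the productivity gain frees real capacity, but if nothing operational changes, the P&L is identical before and after:

```python
# Hypothetical team: 10 people producing 1,000 units of work per year.
headcount = 10
cost_per_person = 100_000
annual_output = 1_000

speedup = 0.40  # each person now completes their work 40% faster

# Capacity freed by the productivity gain, in full-time equivalents.
freed_capacity = headcount * speedup  # 4.0 FTEs of "invisible" capacity

# If leadership neither reallocates people nor raises output targets:
cost_delta = (headcount * cost_per_person) - (headcount * cost_per_person)
output_delta = annual_output - annual_output  # same targets, same output

pnl_impact = output_delta - cost_delta
print(freed_capacity)  # → 4.0
print(pnl_impact)      # → 0
```

Four FTEs of capacity exist, but until leadership redeploys that time or raises output, finance sees zero - which is exactly why individual wins don’t show up as organizational wins.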

Risk Departments That Don’t Want Any Risk

Different stakeholders have different views on what the organization should be doing with AI. Since it’s such a new technology, it’s sometimes hard to align all stakeholders during adoption. Over-broad policies end up blocking value creation, making pilots look weak because you’ve limited the capability.

Enterprise Tools Are Worse Than Consumer Tools

The leading edge is always going to be in the consumer space, where people can try new features as soon as they roll out. There’s both a delay and an implementation gap on the enterprise side - enterprise tools are often a few cycles behind the cutting edge. It’s really hard to deal with that difference when what you can use at home is so far ahead of the generic copilot at work.

Nobody Actually Owns the Outcome

You have to align the incentives and be direct about who’s responsible for what stages of the project. This is true of any project - you need ownership and someone to move it forward. Without that, pilots just drift even when they show promise.

What Leaders Should Actually Do

These barriers aren’t reasons to avoid AI - they’re problems you can solve. The technology works. I’ve seen it transform teams. When 90% of employees use AI tools while official pilots stall, that’s an organizational problem, not a technology problem.

The companies that succeed aren’t using different models - they’re organizing differently. Individual employees are already getting value from these tools daily. Your job is building organizational capability to capture that value.

Audit yourself against these failure modes:

  • Do you have real executive sponsorship with budget authority, or just someone saying AI is important?
  • Have you explained to your team how their roles change, or just that change is coming?
  • Are you measuring concrete KPIs with baselines, or relying on "feels faster"?
  • Do you have a plan for what happens after the pilot succeeds, or are you just hoping it works?
  • Are your enterprise AI tools competitive with what people use at home, or obviously inferior?

I thought the AI Daily Brief episode was really great. For people trying to understand what makes these technologies actually work in an organization, it’s a valuable listen. The organizational dynamics they cover will help you avoid common pitfalls.

These tools offer incredible opportunities to change how your business makes an impact. But it takes thoughtful organizational structure and leadership to make it happen. This is true of any major change, but especially true with AI given how rapidly the technology is evolving. You need to focus on both the technology and the process around adopting that technology to make things work.
