Author:
Massachusetts Institute of Technology

The GenAI Divide: State of AI in Business 2025

July 2025
Digital

Produced by MIT NANDA (Networked Agents and Decentralized Architecture) and published via MLQ, this research examines why enterprise GenAI is failing to deliver value at scale despite substantial investment. Based on a survey of 153 leaders, 52 in-depth interviews and analysis of over 300 public AI implementations, the findings are stark: 95% of organisations are getting zero measurable return on their GenAI pilots.

The authors define this as the GenAI Divide. Over 80% of firms have explored or piloted GenAI tools, and nearly 40% report deployment. But these tools primarily enhance individual productivity rather than business performance. Meanwhile, enterprise-grade custom implementations are quietly failing at scale. The core barrier, the research argues, is not model quality, infrastructure or regulation. It is learning: most GenAI systems do not retain feedback, adapt to context or improve over time.

The report identifies four patterns that define which side of the divide an organisation falls on: the degree of structural disruption by sector, whether large firms are converting pilot volume into scale, where AI budgets are focused, and whether implementation is handled internally or through external partnerships. External partnerships see twice the success rate of internal builds.

For destination organisations and DMOs evaluating their AI programmes, the report is a useful corrective to hype, grounding the conversation in what distinguishes the 5% of organisations that are succeeding from the 95% that are not.

Contents:

  • Executive summary and the GenAI Divide defined
  • Research methodology (300+ implementations, 52 interviews, 153 surveys)
  • Why GenAI pilots fail: top barriers to scaling
  • Industry-level disruption analysis and AI Market Disruption Index
  • Buyer and builder patterns: what the winners do differently
  • Workforce impacts and hiring implications
  • Vendor landscape: adaptive systems vs generic tools
  • Conclusions and recommendations

