Produced by MIT NANDA (Networked Agents and Decentralized Architecture) and published via MLQ, this research examines why enterprise GenAI is failing to deliver value at scale despite substantial investment. Based on a survey of 153 leaders, 52 in-depth interviews and analysis of over 300 public AI implementations, the findings are stark: 95% of organisations are getting zero measurable return on their GenAI pilots.
The authors define this as the GenAI Divide. Over 80% of firms have explored or piloted GenAI tools, and nearly 40% report deployment. But these tools primarily enhance individual productivity rather than business performance. Meanwhile, enterprise-grade custom implementations are quietly failing at scale. The core barrier, the research argues, is not model quality, infrastructure or regulation. It is learning: most GenAI systems do not retain feedback, adapt to context or improve over time.
The report identifies four patterns that determine which side of the divide an organisation falls on: the degree of structural disruption by sector, whether large firms convert their high pilot volumes into scaled deployments, where AI budgets are concentrated, and whether implementation is built internally or bought through external partnerships. External partnerships succeed at twice the rate of internal builds.
For destination organisations and DMOs evaluating their AI programmes, the report is a useful corrective to the hype, grounding the conversation in what separates the 5% of implementations that succeed from the 95% that do not.