The Gap Between AI-Readiness and Using AI Effectively

Business AI adoption crossed 50% for the first time in March 2026, up from 35% a year earlier. Alongside that figure sits another finding worth reflecting on: AI capabilities themselves are advancing sharply. The Epoch Capabilities Index found that AI progress over the past two years was twice as fast as in the two years prior, largely because reasoning models and reinforcement learning have become standard practice in AI development.

Source: Epoch AI

However, taking a broad adoption statistic in isolation tends to flatten important distinctions. It doesn't separate a team that has fully restructured its content workflow around AI from one that has simply added an editorial tool to an existing process. Nor does it distinguish between a DMO with a clear governance framework and one where staff are using personal AI accounts under the radar to complete their tasks. Those differences take time to surface, showing up eventually in the consistency of outputs, in the ability to scale and in whether AI ends up embedded in how an organisation operates or quietly set aside when initial enthusiasm fades.

For organisations still at the early stages of their AI integration journey, the gap between AI capabilities and what teams are equipped to work with will continue to widen while the important groundwork is being laid. The challenge this presents for DMOs is how to carefully rethink the way people and processes are positioned to leverage AI, while also having the strategic foresight to establish a broader vision of what shape the next phase of AI development will take.

The Pressure to Act

Leadership teams are now under real pressure to show that AI is driving clear organisational benefits, with 80% of marketers feeling pressured to integrate AI into workflows. This pressure tends to produce a familiar pattern: championing more pilots and measuring activity by output speed rather than by whether the organisation itself is structured to gain lasting value. That perspective carries significant organisational risk.

Gartner’s 2026 Future of Work Trends identified 'workslop' as the top productivity drain for organisations. The findings put a name to something many teams are already experiencing. When AI tools are deployed at pace without shared standards or organised knowledge to draw on, the outputs require more human effort to fix than they saved in the first place. For DMO teams balancing multiple simultaneous responsibilities, that dynamic compounds quickly.

A more productive focus is the human side of AI adoption: ensuring employees have the AI literacy to work with these tools in a genuinely collaborative way. Without this, destinations risk their unique brand voice being diluted into something generic. Knowing when to question AI's strategic rationale and when to double-check facts becomes just as important as the prompt itself. Employees therefore have a dual role when working with AI: providing clear, direct input, and verifying that the output is logical and coherent.

The viral browser game "Your AI Slop Bores Me" is a perfect demonstration of this human-centric component of AI-readiness. When people roleplay as an AI chatbot, what becomes most apparent is how the personal touch, humour and unpredictability of their responses are far more engaging, specific and interesting than a typical AI response. Step back and this is simply a reflection of the training data the LLM has been exposed to: when a model has nothing specific to draw on, it fills the gap with generalities.

Source: Your AI Slop Bores Me

Preparing to integrate AI is not a quick win. The real value doesn't come from simply subscribing to an 'off-the-shelf' tool, but from painstakingly teaching an LLM exactly what it needs to deliver. This means moving beyond one-off basic prompts towards building a deep 'memory bank' of project knowledge and context. Before you can deploy a custom GPT or a suite of AI agents to handle specific tasks, you must first plan how to feed the system the right information to get it into a truly capable state. In other words, treat AI as a 'digital colleague' that needs a clear onboarding process.
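As a rough illustration of what a 'memory bank' can mean in practice, the sketch below assembles curated project documents into the context sent alongside each task. Everything here is hypothetical: the folder layout, file names and prompt framing are placeholders for wherever your own brand and project knowledge actually lives.

```python
# Hypothetical sketch: assemble a curated 'memory bank' of project
# knowledge into the context sent alongside each task prompt.
from pathlib import Path

def build_context(task: str, knowledge_dir: Path) -> str:
    """Prepend organisational knowledge (Markdown files) to a task prompt."""
    sections = []
    for doc in sorted(knowledge_dir.glob("*.md")):
        # Each file becomes a labelled section the model can draw on,
        # e.g. brand-voice.md, fact-sheet.md, style-guide.md.
        sections.append(f"## {doc.stem}\n{doc.read_text(encoding='utf-8')}")
    knowledge = "\n\n".join(sections)
    return (
        "Follow the organisational knowledge below when responding.\n\n"
        f"{knowledge}\n\n# Task\n{task}"
    )
```

In practice, an assembled context like this would become the system prompt of a custom GPT or agent; the point is that the knowledge is organised, curated and versioned before any tool is deployed.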

This journey to AI-readiness is a cyclical process, requiring both patience and a solid understanding of what is happening behind the scenes. Unfortunately, the deafening hype surrounding AI often gets the better of management teams, creating a misplaced expectation of instant wins and overnight transformation. To navigate this, clear communication and transparency are vital; without them, expensive frustrations accumulate, disrupting organisational culture and fuelling employee backlash.

Championing Employee Wellbeing

The data on how employees are responding to AI integration adds a psychological layer that strategy discussions often skip. An alarming 29% of employees admit to actively working against their company’s AI strategy, rising to 44% among Gen Z workers. A major driver of this backlash is the fear of becoming obsolete, made substantially worse by 'AI washing', where companies cite AI-enabled efficiencies as the reason for laying off staff. Yet, while the headline figure draws attention, the more telling finding is that 75% of executives admitted their company’s AI strategy was more for show than a meaningful guide to outcomes.

The lack of a substantive AI policy and employee resistance to its integration point to a hidden cost. AI, and the performance pressure that surrounds its use, is increasingly becoming a major disruptor to the mental fitness of the workforce. With the psychological pressure on employees evolving rapidly as AI becomes pervasive across daily work, the effects must be clearly and routinely evaluated. At the DTTT, we've developed an AI Organisational Wellbeing Instrument to measure and monitor five different focus areas related to the impact of AI use on employee health and mental wellbeing.

Source: AI Transparency Framework

On the other hand, it's worth highlighting that AI super-users are five times more productive and three times more likely to have received a promotion or pay rise in the past year. This divide is only set to widen. The World Economic Forum’s (WEF) Future of Jobs Report 2025 found that AI and big data top the list of the fastest-growing skills, and that 63% of employers cite the skills gap as their primary barrier to AI adoption.

More worryingly, the WEF found that, on average, workers can expect two-fifths of their existing skillsets to be transformed or become outdated between 2025 and 2030. Destinations that approach AI capability as a change management challenge are the ones that will make the most progress towards an engaged and motivated workforce. Ultimately, this depends on having a strong learning and development pathway, supported by AI champions advocating for 'why' and 'how' AI is beneficial.

That scale of change requires organisations to examine how work is structured and where the knowledge that underpins it actually lives. Understanding what a working knowledge base requires, what belongs in it and what it then makes possible changes the scope of what AI can do for a destination. It is also the point at which tools designed to connect AI systems directly to organised internal knowledge, such as Model Context Protocol servers, begin to make operational sense.
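To make that pattern concrete, the sketch below mimics its shape in plain Python. It is not the actual Model Context Protocol SDK, just a conceptual stand-in: internal knowledge is registered as named resources, and an AI client reads them by URI instead of improvising from training data. All names, URIs and content are invented for illustration.

```python
# Conceptual stand-in for an MCP-style knowledge server: governed
# internal resources, served to an AI client on request by URI.
from dataclasses import dataclass

@dataclass
class Resource:
    uri: str          # stable identifier the AI client requests
    description: str  # what this knowledge is for
    content: str      # the governed content itself

class KnowledgeServer:
    """Registry of organised internal knowledge, read on demand."""
    def __init__(self) -> None:
        self._resources: dict[str, Resource] = {}

    def register(self, resource: Resource) -> None:
        self._resources[resource.uri] = resource

    def list_resources(self) -> list[str]:
        return sorted(self._resources)

    def read(self, uri: str) -> str:
        if uri not in self._resources:
            raise KeyError(f"Unknown resource: {uri}")
        return self._resources[uri].content

server = KnowledgeServer()
server.register(Resource(
    uri="kb://brand/voice",
    description="Approved brand voice guidelines",
    content="Warm, specific, locally grounded; avoid generic superlatives.",
))
```

The design point is that the knowledge base, not the prompt, is the unit of governance: once resources are organised and named, any AI tool connected to the server draws on the same approved content.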

What Progress Looks Like

The destinations making the most progress on AI-readiness rarely started with a strategy document. Many began by asking a more practical question: what does AI actually have access to when we use it? X. Design Week 2026, taking place in Brussels on 2-4 June, opens with this challenge. The first day focuses solely on using AI to drive internal transformation, working through capability, governance and knowledge systems with expert input and peer exchange.

Getting governance in place and organising internal knowledge are rarely the most visible investments a DMO can make. Yet, what they create is a base from which all subsequent work functions better. This tends to become apparent very quickly once the work is underway. Those conversations are also harder to have in isolation. Questions about sequencing and building highly specialised knowledge systems rarely get answered through individual effort alone. They require exchange with destinations that have faced the same decisions and can speak honestly about what followed.
