Beyond the Algorithm: AI, Trust and Human Connection

The promise of leveraging AI in tourism is well documented. Less examined is whether that promise is being fulfilled in a way that guests, employees and destinations can actually trust. At ITB Berlin, a panel gathered to explore precisely this question through the lens of real operational decisions, emerging research and the concrete trade-offs that businesses are navigating right now.

The session, moderated by Dr Jessika Weber-Sabil, Professor of Digital Transformation for Cultural Tourism at Breda University of Applied Sciences, brought together Laurenz Schwarzhappel, Co-Founder and CEO of Global Living Apartments; Claire Robinson, Co-Founder and Co-CEO of Region Lovers; Katerina Volchek, Professor at Deggendorf Institute of Technology; Klaas Koerten, Senior Researcher in Robotics at Hotelschool The Hague; and Nick Hall, Co-Founder & CEO of the Digital Tourism Think Tank.

Their starting point was trust itself. Who holds it? What damages it? What does the tourism sector need to do differently to earn and keep it as AI becomes embedded in everyday operations? The tensions that surfaced, between efficiency and authenticity, transparency and convenience, and human judgment and automated scale, are exactly the tensions that destination leaders will need to navigate in the years ahead. This piece draws on that conversation to identify where the most important decisions lie.

Shifting Expectations

One of the clearest signals from the discussion is that guest expectations around communication have already shifted faster than many operators realise. Laurenz described an operation where 90% of guest interactions are handled without any human involvement, because guests no longer request it for straightforward queries. Questions are resolved instantly and guests move on. The 10% who do seek human contact can reach someone at any hour. The threshold for when human contact is necessary has shifted considerably, and with it the flexibility of the operation.

At the same time, however, this creates a new kind of expectation management. When AI fails to provide a seamless experience, or when its presence is not made clear to guests, trust erodes quickly. Klaas' example of Airbnb's automatic translation illustrates just how fine this line can be. He had received messages from his Airbnb host in his native Dutch and was genuinely moved by the effort; on arrival, he discovered that the host spoke no Dutch at all and had no idea the platform was translating every exchange. A warm, apparently authentic connection flipped in an instant into something unsettling. The attribution of human intent to an automated process, combined with the lack of transparency about that process, changed the nature of the interaction entirely.

While AI presents a strong opportunity to enhance visitor experiences, transparency is the mechanism that moderates this. If guests do not know they are speaking to a machine, they cannot make an informed choice about whether they want to. This is precisely the gap that the DTTT's newly launched AI Transparency Framework addresses. Unveiled at ITB Berlin, the framework gives a practical structure for disclosing how AI has been used in creating content, delivering services and generating outputs.

Source: DTTT AI Transparency Framework

At a moment when the industry is accelerating its AI adoption faster than its governance frameworks, having a clear model for communicating AI use is an operational prerequisite. As Nick put it, responsible AI use is something to talk about openly, not to obscure, because whoever is on the receiving end of that capability has the right to know where it came from and what it means for them. Understanding what transparency requires, though, first demands a clearer sense of what authenticity actually means in an AI-assisted operation.

The Meaning of Authenticity

The word "authenticity" circulates constantly in tourism, but it is often used loosely. Claire offered a sharper definition. "Authenticity doesn't mean human", she argued. "It means it's genuine and not copied". When the data that feeds a large language model (the hotel's own knowledge, the area's character, the organisation's values) is genuine, the output is authentic to that organisation's voice regardless of the mechanism that delivers it.

This reframes the debate considerably. An AI-generated response that draws on a destination's own knowledge base, values and local insight can be more authentic to that destination's voice than a generic response. The delivery method changes, yet the integrity of the content does not have to.

The critical variable is data quality. Claire was emphatic: if you do not give the AI good data, it will not be able to give trustworthy answers. This is an argument for destinations and operators to invest seriously in the quality of their data rather than the novelty of their tools. A compelling illustration of what good data makes possible is accessibility trip planning. If a visitor planning a trip with a child on the autistic spectrum asks a generic AI about a museum, it might surface a mention of a designated entrance for people with reduced mobility and, if they are lucky, a comment buried in a review about a quiet room. But an AI trained on purpose-built, accurate data about that museum's lighting levels, acoustic properties, specific rest facilities and sensory environment can answer in genuinely useful and trustworthy detail.

Source: Science Museum
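The difference good data makes can be sketched in a few lines. Everything below is invented for illustration (the museum fields, the values, the matching rule); the point is simply that a purpose-built accessibility record lets an assistant answer in specifics rather than surfacing stray review comments.

```python
# Hypothetical, purpose-built accessibility record for a single museum.
# Field names and values are invented for this sketch.
MUSEUM_ACCESSIBILITY = {
    "quiet room": "Level 1, next to the cloakroom; bookable at the front desk",
    "lighting": "Dimmable LED in all galleries; no strobe effects",
    "acoustics": "Main hall is echoey at peak times; quietest before 11:00",
    "rest facilities": "Benches in every gallery; sensory break area on Level 2",
}

def answer(query: str) -> str:
    """Return the matching structured detail, or admit the gap honestly."""
    for field, detail in MUSEUM_ACCESSIBILITY.items():
        if field in query.lower():
            return detail
    return "No verified data for that question."
```

A generic model guesses; a structured record either answers precisely or declines, which is the behaviour a family planning around sensory needs can actually trust.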

Yet data is only part of the equation. How that data shapes the guest experience at each touchpoint is where the real test of AI's value begins.

Where AI Can Solve Real Problems

There is a tendency in tourism to frame AI as a risk to authentic experience, but this assumes the existing human experience is uniformly good. Nick highlighted how very bad AI experiences exist, but so do very bad human experiences. He used the analogy of a traveller trying to resolve a refund or rebook a trip through a customer service channel, reaching a human who lacks both the authority and the flexibility to solve the problem. As the traveller hits a wall, frustration builds. Sometimes the situation escalates. AI, handled well, can resolve exactly this kind of friction.

However, poorly designed AI customer service, which currently represents the majority of AI customer interactions, sends users to the same dead ends, just faster. Nick cited Amazon's approach as a telling example. For items under a certain value threshold, the company has calculated that it is cheaper to issue an automatic refund than to route the complaint through any resolution process.

The determining factor is knowing who your guest is and what they want. Some visitors want efficiency and frictionless logistics, while others seek completely unique experiences. Klaas made this point directly when outlining the application of robots in hotel receptions. While the robot was expected to perform noticeably worse than a human receptionist on guest satisfaction measures, when guests were offered a genuine choice between the two, the results were strikingly similar. Guests were equally happy and satisfied and the request was resolved just as often as with the human receptionist. The lesson is that satisfaction follows the quality of the interaction rather than the nature of the agent delivering it.

Source: Hotelschool The Hague on TikTok (@hotelschoolthehaguenl), where Hospitality Research Lab students test the potential of the Temi robot in hotel lobbies.

Claire described a further layer, which she termed cognitive engineering. This involves monitoring the rhythm and word choices of a guest's communication to detect frustration and trigger a handover to a human at the right moment. When it works, the customer converses with a human and feels genuinely heard. Such an approach uses AI as a support structure for human interaction rather than as a replacement.
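As a rough illustration of the handover idea, and not Claire's actual system, a sketch might score a few crude frustration signals and escalate once a threshold is crossed. The markers, weights and threshold below are all invented:

```python
# Invented frustration markers; a real system would use a trained sentiment
# or emotion model rather than a keyword list.
FRUSTRATION_MARKERS = {"still", "again", "nobody", "useless", "waiting"}

def frustration_score(message: str) -> float:
    """Score a guest message on a few crude frustration signals."""
    tokens = message.split()
    lowered = [t.strip("!?.,").lower() for t in tokens]
    marker_hits = sum(1 for t in lowered if t in FRUSTRATION_MARKERS)
    exclamations = message.count("!")
    shouting = sum(1 for t in tokens if t.isupper() and len(t) > 2)
    # Weights are arbitrary for the sketch.
    return marker_hits * 0.3 + exclamations * 0.2 + shouting * 0.25

def route(message: str, threshold: float = 0.5) -> str:
    """Hand the conversation to a human once the score crosses the threshold."""
    return "human" if frustration_score(message) >= threshold else "ai"
```

The design point is the routing decision, not the scoring: the AI's job is to notice when it should stop being the one talking.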

Nick articulated the underlying principle clearly, emphasising that good outcomes will not simply emerge because AI is present. They require deliberate choices about when AI should act, when it should defer and how the transition between the two feels to the person on the other end. Getting the experience right depends, in turn, on making intentional decisions about which tools to use and why.

Choosing Tools for the Right Reasons

Katerina argued that the main question is which problem AI is being applied to, and whether that problem actually requires a human to solve it in the first place. This problem-first approach opens up a much wider range of options. Five years ago, Katerina noted, launching a basic website cost around €25,000. Today, with the right combination of tools and a clear brief, the same can be done in days on a limited budget. The transformative opportunity for smaller tourism businesses and destinations is precisely this reduction in the cost of functions that previously required specialist expertise and significant budget.

Katerina's students demonstrated this vividly. A group of bachelor-level tourism students, studying marketing, used an AI tool to create a VR game as a marketing project for a castle. They lacked specific technical skills, but brought a clear idea of what they needed to produce. From this perspective, Nick added that everyone needs to ask how to adapt the way they work, and to accept that the nature of jobs is changing as AI continues to develop.

The same logic applies to the selection of data and tools. Nick drew attention to the often-overlooked richness of existing datasets in communities such as Hugging Face, where specialist collections of labelled data are available for very specific purposes. A destination wanting to monitor changes in bird species as part of a sustainability initiative, for instance, could draw on existing birdsong recognition datasets to track change over time. These specialist, pre-trained datasets make targeted capabilities accessible to organisations that could not otherwise develop them, and the sector would benefit from a broader conversation about where those datasets exist and how they can be applied.

Source: Hugging Face

Yet the panel agreed that building a large language model (LLM) from scratch is not a realistic or sensible option for most tourism businesses. The costs of development and ongoing maintenance are prohibitive, and the engineering talent is scarce. Nevertheless, the conversation about AI tools is too often narrowed to a false choice between building your own and buying an off-the-shelf product.

Claire proposed that the real breakthrough lies in anchoring LLMs to your own data. By combining the language capability of an LLM with a private, quality-controlled dataset, you keep it from drifting into bias or 'hallucination'. To double-lock that accuracy, a second LLM can act as an independent auditor, cross-checking every response against your data before it reaches the guest. Because it runs on data the organisation already owns, this kind of configuration gives smaller operators real capability without requiring them to surrender control of their knowledge base or commit to prohibitive development costs.
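The pattern Claire described can be sketched as follows. This is a minimal, hypothetical structure: `ask_llm` and `ask_auditor` stand in for calls to any real model API, and the retrieval step is naive keyword overlap rather than production-grade search.

```python
def retrieve(question, knowledge_base, top_k=2):
    """Rank the organisation's own documents by crude keyword overlap."""
    q_words = set(question.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: -len(q_words & set(doc.lower().split())),
    )
    return scored[:top_k]

def answer_with_audit(question, knowledge_base, ask_llm, ask_auditor):
    """Ground a draft answer in owned data, then have a second model audit it."""
    context = retrieve(question, knowledge_base)
    draft = ask_llm(question, context)
    # The auditor cross-checks the draft strictly against the retrieved data.
    verdict = ask_auditor(draft, context)
    return draft if verdict == "supported" else "Let me connect you with a colleague."
```

The key design choice is the fallback: when the auditor cannot verify a response against the owned dataset, the system defers to a human rather than improvising.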

The choice of which model or platform to align with is also not purely a technical decision. Nick raised the recent example of Anthropic and OpenAI being approached by the US government for unrestricted access to their AI systems for military use. Anthropic declined, while OpenAI agreed. In the days that followed, Anthropic saw a 60% surge in new users, enough to strain its infrastructure. That shift in user behaviour suggests that trust and consistent values win out over decisions that read as opportunistic or lacking in integrity.

What has become abundantly clear is that the values embedded in the AI tools an organisation adopts are not a separate consideration from how it presents itself to its guests and clients. Anthropic's decision to publish a clear charter of values and hold to them under external pressure earned visible public trust. The same principle applies to businesses and DMOs.

Communicating Responsibility Clearly

The panel kept returning to responsibility as the thread connecting everything else. Katerina framed it directly by emphasising that "when we're talking about transparency and trust, in reality, we are talking about who is responsible if something bad happens." Drawing a parallel to self-driving vehicles, Katerina explained how a clear liability model has been established in the US, with the car manufacturer responsible for the actions of AI in self-driving mode. However, the tourism sector does not yet have equivalent clarity. Who is responsible when an AI-generated image misrepresents a hotel? Who is accountable when a chatbot gives a visitor inaccurate information? As AI-generated and AI-edited content becomes increasingly difficult to distinguish from reality, the accountability question grows more pressing.

The comparison with image editing is also useful here. Modifying a photograph in Photoshop has always produced adjusted content and the practice has existed for decades. What is new is the scale, the speed and the ease with which AI enables content to be generated or fundamentally altered. Katerina noted that many people can no longer reliably distinguish between AI-generated and human-generated content.

While these examples point to the essential need for AI content labelling, the comparison with cookie consent notices paints a stark picture. Although the notices were designed to transfer legal responsibility for data use to the user, most people click through without reading. In effect, the assumption that explaining what cookies are and making the process transparent would make people happier proved wrong. Learning from this, Katerina emphasised that "people want the easiest, quickest way to do things." AI disclosure risks going the same way unless the industry designs something more meaningful and more honest.

The DTTT AI Transparency Framework offers one response, incorporating an easy-to-read graded model for communicating AI contribution that goes beyond a generic disclaimer and reflects the actual degree of human versus AI involvement in a given output. Setting guardrails now, and deciding how openly to disclose AI use, will shape the future direction of AI's development.
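To make the idea of a graded model concrete, here is a hypothetical sketch. The framework's actual bands and labels are not detailed in this piece, so the thresholds below are invented purely to show grading by degree of AI involvement rather than a flat disclaimer:

```python
# Invented grade bands: (minimum AI involvement %, grade, label).
# These are illustrative only, not the DTTT framework's real thresholds.
GRADE_BANDS = [
    (90, "E", "Fully AI-generated"),
    (60, "D", "AI-generated"),
    (30, "C", "AI-assisted"),
    (10, "B", "AI-supported"),
    (0,  "A", "Human-created"),
]

def disclosure_label(ai_involvement_pct: int) -> str:
    """Map a self-reported AI involvement percentage to a graded label."""
    for minimum, grade, label in GRADE_BANDS:
        if ai_involvement_pct >= minimum:
            return f"{grade}: {label} ({ai_involvement_pct}% AI involvement)"
    return "A: Human-created (0% AI involvement)"
```

A graded label of this kind tells the reader not just that AI was used, but how much, which is the distinction a generic disclaimer erases.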

Five Futures for AI in Hospitality

Where responsibility ultimately lands will depend on which version of an AI-enabled future the sector ends up building. Offering a useful corrective to single-scenario thinking about where AI is taking the sector, Klaas presented findings from Hotelschool The Hague's annual outlook paper on AI in hospitality. The research produced five distinct scenarios for how AI might reshape the near and medium-term future of hospitality:

  1. AI Future Baseline: Guests and hotels both deploy agentic AI to manage bookings, preferences and logistics in real time. Service becomes seamless and highly personalised. The risk is that when every hotel can deliver this level of service, it ceases to differentiate. Guests become more demanding rather than more satisfied as the baseline rises.
  2. Platform Power Shift: Guests begin using AI to protect themselves from the data collection that hotels rely on for personalisation. Travellers arrive with AI agents that filter what companies can learn about them. Hotels sell rooms to guests whose profiles are effectively invisible, meaning that the commercial logic of knowing your customer becomes structurally harder to sustain.
  3. Customer AI Counterforce: Hotels use AI to enhance their presentation, with properties shown at their photogenic best and guest-facing content optimised. Guests, aware of this, use AI in turn to look through the surface and assess what a property is actually like. Transparency increases not by design but through competitive pressure between AI systems.
  4. Nightmare of Modern Times: AI is used to script and optimise employee behaviour, managing interactions with guests in real time to maximise satisfaction metrics and operational efficiency. The technology works, but the human experience of doing the job deteriorates. A recent example in this direction is Burger King's "BK Assistant" and the AI employee coach chatbot "Patty", integrated into employee headsets to answer operational questions. Because the system also compiles "friendliness scores" from staff interactions with customers, it has opened an immediate debate about the role AI will play in workplace surveillance and where the limit of operational support should sit.
  5. Worker Empowering Path: AI takes on what employees find repetitive, dull or physically taxing, freeing human capacity for the interactions that require genuine presence, judgment and care. Guests get a higher quality of attention, while employees do more meaningful work. This is the scenario that points most clearly toward a sustainable model for the sector.

Source: Hotelschool The Hague

The eventual reality will sit somewhere between and across several of these scenarios, and the industry's trajectory is not fixed. Yet to be prepared for the future, it's important to ask which direction an organisation is currently heading and whether that direction was a conscious choice. The organisations that approach AI with that clarity, and the honesty to communicate it openly to their guests and partners, are the ones best placed to build trust with their customers. This is why setting the baseline for AI transparency now is so vital to ensuring that the tourism sector is best positioned to implement AI responsibly, ethically and effectively.

ITB Session Recap · Digital Tourism Think Tank

AI Transparency Disclosure
Grade D: AI-Generated (72% AI involvement)
AI Transparency Model v1.1 · DTTT Framework
AI tools used: Claude
Specific models: Claude Sonnet 4.6
AI-supported tasks: Drafting, Editing
AI contribution: Claude was responsible for converting a detailed brief into a structured, publication-ready article across two drafting rounds, synthesising panel discussion content, editorial rules and DTTT tone requirements into flowing prose. It also handled secondary tasks, including the addition of connecting sentences. The final article's structure, narrative arc and the majority of its sentences originated with AI.
Human contribution: The DTTT Knowledge Team authored the brief, which carried significant intellectual load: identifying the thematic priorities, selecting which panellist contributions to highlight, specifying examples and setting precise editorial rules. The Knowledge Team then directed two meaningful revision rounds and completed a substantive final edit, which refined phrasing, restructured several passages and made final judgements on tone throughout.
Productivity: 4/5 (High gain)
Delivery Extension: 1/5 (Standard delivery)
Combined AI Value: 2/5 (Moderate)
Environmental Grade: A (Negligible impact)
Scored using the DTTT AI Transparency Framework · Assessments are self-reported

The promise of leveraging AI in tourism is well documented. Less examined is whether that promise is being fulfilled in a way that guests, employees and destinations can actually trust. At ITB Berlin, a panel gathered to explore precisely this question through the lens of real operational decisions, emerging research and the concrete trade-offs that businesses are navigating right now.

The session, moderated by Dr Jessika Weber-Sabil, Professor of Digital Transformation for Cultural Tourism at Breda University of Applied Sciences, brought together Laurenz Schwarzhappel, Co-Founder and CEO of Global Living Apartments; Claire Robinson, Co-Founder and Co-CEO of Region Lovers; Katerina Volchek, Professor at Deggendorf Institute of Technology; Klaas Koerten, Senior Researcher in Robotics at Hotelschool The Hague; and Nick Hall, Co-Founder & CEO of the Digital Tourism Think Tank.

Their starting point was considering trust. Who holds it? What damages it? What does the tourism sector need to do differently to earn and keep it as AI becomes embedded in everyday operations? The tensions surfaced between efficiency and authenticity, transparency and convenience, human judgment and automated scale, are exactly the tensions that destination leaders will need to navigate in the years ahead. This piece draws on that conversation to identify where the most important decisions lie.

Shifting Expectations

One of the clearest signals from the discussion is that guest expectations around communication have already shifted faster than many operators realise. Laurenz described an operation where 90% of guest interactions are handled without any human involvement because guests are no longer requesting it for straightforward queries. Questions are resolved instantly and guests move on. The 10% who do seek human contact can reach someone at any hour. This approach demonstrates how the threshold for when human contact is necessary has changed considerably, enhancing the degree of flexibility within business operations.

At the same time, however, this creates a new kind of expectation management. When AI fails to provide a seamless experience, or when its presence is not made clear to guests, trust erodes quickly. Klaas' example of Airbnb's automatic translation illustrates just how fine this line can be. Having received messages from his Airbnb host in his native Dutch and feeling genuinely moved by the effort, on arrival, he discovered that the host spoke no Dutch at all and had no idea the platform was translating every exchange. A warm, apparently authentic connection flipped in an instant into something unsettling. The attribution of human intent to an automated process and the lack of transparency in the process harmed the nature of the interaction entirely.

While AI presents a strong opportunity to enhance visitor experiences, transparency is the mechanism that moderates this. If guests do not know they are speaking to a machine, they cannot make an informed choice about whether they want to. This is precisely the gap that the DTTT's newly launched AI Transparency Framework addresses. Unveiled at ITB Berlin, the framework gives a practical structure for disclosing how AI has been used in creating content, delivering services and generating outputs.

Source: DTTT AI Transparency Framework

At a moment when the industry is accelerating its AI adoption faster than its governance frameworks, having a clear model for communicating AI use is an operational prerequisite. As Nick put it, responsible AI use is something to talk about openly, not to obscure, because whoever is on the receiving end of that capability has the right to know where it came from and what it means for them. Understanding what transparency requires, though, first demands a clearer sense of what authenticity actually means in an AI-assisted operation.

The Meaning of Authenticity

The word "authenticity" circulates constantly in tourism, but it is often used loosely. Claire offered a sharper definition. "Authenticity doesn't mean human", she argued. "It means it's genuine and not copied". When the data that feeds a large language model: the hotel's own knowledge, the area's character and the organisation's values is genuine, the output is authentic to that organisation's voice regardless of the mechanism that delivers it.

This reframes the debate considerably. An AI-generated response that draws on a destination's own knowledge base, values and local insight can be more authentic to that destination's voice than a generic response. The delivery method changes, yet the integrity of the content does not have to.

The critical variable is data quality. Claire was emphatic: if you do not give the AI good data, it will not be able to give trustworthy answers. This is an argument for destinations and operators to invest seriously in the quality of their data rather than the novelty of their tools. A compelling illustration of what good data makes possible is accessibility trip planning. If a visitor planning a trip with a child on the autistic spectrum asks a generic AI about a museum, it might surface a mention of a designated entrance for people with reduced mobility and, if they are lucky, a comment buried in a review about a quiet room. But an AI trained on purpose-built, accurate data about that museum's lighting levels, acoustic properties, specific rest facilities and sensory environment can answer in genuinely useful and trustworthy detail.

Source: Science Museum

Yet data is only part of the equation. How that data shapes the guest experience at each touchpoint is where the real test of AI's value begins.

Where AI Can Solve Real Problems

There is a tendency in tourism to frame AI as a risk to authentic experience, but this assumes the existing human experience is uniformly good. Nick highlighted how very bad AI experiences exist, but so do very bad human experiences. He used the analogy of a traveller trying to resolve a refund or rebook a trip through a customer service channel, reaching a human who lacks both the authority and the flexibility to solve the problem. As the traveller hits a wall, frustration builds. Sometimes the situation escalates. AI, handled well, can resolve exactly this kind of friction.

However, poorly designed AI customer service, which currently represents the majority of AI customer interactions, sends users to the same dead ends, just faster. Nick cited Amazon's approach as a telling example. For items under a certain value threshold, the company has calculated that it is cheaper to issue an automatic refund than to route the complaint through any resolution process.

The determining factor is knowing who your guest is and what they want. Some visitors want efficiency and frictionless logistics, while others seek completely unique experiences. Klaas made this point directly when outlining the application of robots in hotel receptions. While the robot was expected to perform noticeably worse than a human receptionist on guest satisfaction measures, when guests were offered a genuine choice between the two, the results were strikingly similar. Guests were equally happy and satisfied and the request was resolved just as often as with the human receptionist. The lesson is that satisfaction follows the quality of the interaction rather than the nature of the agent delivering it.

@hotelschoolthehaguenl Have you been followed by a robot? 🤖 Students following the Hospitality Research Lab course are uncovering the potential of robotics in hotel lobbies, observing reactions, and crafting innovative implementations with Temi!   #RoboticsInHospitality #HotelTech #InnovationInLobbies #FutureOfHospitality #TemiRobot #TechInHotels #HospitalityResearch #GuestExperience #TechTrends #HospitalityInnovation #Robots #foryoupage ♬ I am sneaking into you Pink Panther Parody - moshimo sound design

Claire described a further layer, which she termed cognitive engineering. This involves monitoring the rhythm and word choices of a guest's communication to detect frustration and trigger a handover to a human at the right moment. When it works, the customer converses with a human and feels genuinely heard. Such an approach uses AI as a support structure for human interaction rather than as a replacement.

Nick articulated the underlying principle clearly, emphasising that good outcomes will not simply emerge because AI is present. They require deliberate choices about when AI should act, when it should defer and how the transition between the two feels to the person on the other end. Getting the experience right depends, in turn, on making intentional decisions about which tools to use and why.

Choosing Tools for the Right Reasons

Katerina argued that the main question is to which problem AI is being applied, and whether that problem actually requires a human to solve in the first place. This problem-first approach opens up a much wider range of options. Five years ago, Katerina noted, launching a basic website cost around €25,000. Today, with the right combination of tools and a clear brief, the same can be done in days on a limited budget. The transformative opportunity for smaller tourism businesses and destinations is precisely this reduction in the cost of functions that previously required specialist expertise and significant budget.

Katerina's students demonstrated this vividly. A group of bachelor-level tourism students, studying marketing, used an AI tool to create a VR game as a marketing project for a castle. Without specific technical skills, they brought a clear idea of what they needed to produce. From this perspective, Nick added how everyone needs to ask themselves how to adapt the way they work and accept the inevitable conclusion that the nature of jobs is changing with AI's continued development.

The same logic applies to the selection of data and tools. Nick drew attention to the often-overlooked richness of existing datasets in communities such as Hugging Face, where specialist collections of labelled data are available for very specific purposes. A destination wanting to monitor changes in bird species as part of a sustainability initiative, for instance, could draw on existing birdsong recognition datasets to monitor if there has been a change over time. These specialist, pre-trained datasets make targeted capabilities accessible to organisations that could not otherwise develop them, and the sector would benefit from a broader conversation about where those datasets exist and how they can be applied.

Source: Hugging Face

Yet, the panel was in agreement that building a large language model (LLM) from scratch is not a realistic or sensible option for most tourism businesses. The costs of development and ongoing maintenance are prohibitive, while the engineering talent is scarce. Nevertheless, the conversation about AI tools is often narrowed to a false choice between building your own or using an off-the-shelf product.

Claire proposes that the real breakthrough lies in anchoring LLMs to your own data. By combining the language capability of an LLM with a private, quality-controlled dataset, you prevent it from drifting into bias or 'hallucination'. To truly double-lock this accuracy, a second LLM can act as an independent auditor, cross-checking every response against your data to ensure the final output is beyond reproach. By using the data that organisations own, this kind of configuration gives smaller operators real capability without requiring them to surrender control of their own knowledge base or commit to prohibitive development costs.

The choice of which model or platform to align with is also not purely a technical decision. Nick raised the recent example of Anthropic and OpenAI being approached by the US government for unrestricted access to their AI systems for military use. Anthropic declined, while OpenAI agreed. In the days that followed, Anthropic saw a 60% surge in new users, enough to strain its infrastructure. The demonstrable shift in user behaviour proves that trust and value systems will hold out over companies that approach decisions based on opportunism and a perceived lack of integrity.

What has become abundantly clear is that the values embedded in the AI tools an organisation adopts are not a separate consideration from how it presents itself to its guests and clients. Anthropic's decision to publish a clear charter of values and hold to them under external pressure earned visible public trust. The same principle applies to businesses and DMOs.

Communicating Responsibility Clearly

The panel kept returning to responsibility as the thread connecting everything else. Katerina framed it directly by emphasising that "when we're talking about transparency and trust, in reality, we are talking about who is responsible if something bad happens." Drawing a parallel to self-driving vehicles, Katerina explained how a clear liability model has been established in the US, with the car manufacturer responsible for the actions of AI in self-driving mode. However, the tourism sector does not yet have equivalent clarity. Who is responsible when an AI-generated image misrepresents a hotel? Who is accountable when a chatbot gives a visitor inaccurate information? As AI-generated and AI-edited content becomes increasingly difficult to distinguish from reality, the accountability question grows more pressing.

The comparison with image editing is also useful here. Modifying a photograph in Photoshop has always produced adjusted content and the practice has existed for decades. What is new is the scale, the speed and the ease with which AI enables content to be generated or fundamentally altered. Katerina noted that many people can no longer reliably distinguish between AI-generated and human-generated content.

While these examples underline the need for AI content labelling as part of a transparent process, the comparison with cookie consent notices paints a stark picture. Although the notices were designed to transfer legal responsibility for data use to the user, most people click through without reading. In effect, the assumption that explaining what cookies are and making the process transparent would reassure people proved wrong. Learning from this, Katerina emphasised that "people want the easiest, quickest way to do things." AI disclosure risks going the same way unless the industry designs something more meaningful and more honest.

The DTTT AI Transparency Framework offers one response: an easy-to-read graded model for communicating AI contribution that goes beyond a generic disclaimer and reflects the actual degree of human versus AI involvement in a given output. Setting those guardrails now, and being open about how AI is used, will shape the future direction of AI's development in the sector.
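To make the idea of a graded disclosure concrete, here is a minimal sketch of what such a label could look like in code. The grade boundaries and wording below are illustrative assumptions for this example, not the DTTT's published thresholds.

```python
# Hypothetical graded AI-contribution label, in the spirit of a
# transparency framework: map the share of AI involvement to a letter
# grade and render a human-readable disclosure line.

def ai_contribution_grade(ai_share: float) -> str:
    """Map the share of AI involvement (0.0-1.0) to a letter grade.
    The band edges here are invented for illustration."""
    if not 0.0 <= ai_share <= 1.0:
        raise ValueError("ai_share must be between 0.0 and 1.0")
    bands = [(0.05, "A"), (0.25, "B"), (0.50, "C"), (0.85, "D")]
    for threshold, grade in bands:
        if ai_share <= threshold:
            return grade
    return "E"  # fully or almost fully AI-generated

def label(ai_share: float, tasks: list[str]) -> str:
    """Render a disclosure line for a piece of content."""
    return (f"Grade {ai_contribution_grade(ai_share)} · "
            f"{ai_share:.0%} AI involvement · tasks: {', '.join(tasks)}")

print(label(0.72, ["drafting", "editing"]))
# prints: Grade D · 72% AI involvement · tasks: drafting, editing
```

However the bands are drawn, the value lies in publishing the scale alongside the label, so readers can see what a given grade actually means rather than facing a generic "made with AI" disclaimer.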

Five Futures for AI in Hospitality

Where responsibility ultimately lands will depend on which version of an AI-enabled future the sector ends up building. Offering a useful corrective to single-scenario thinking about where AI is taking the sector, Klaas presented findings from Hotelschool The Hague's annual outlook paper on AI in hospitality. The research produced five distinct scenarios for how AI might reshape the near and medium-term future of hospitality:

  1. AI Future Baseline: Guests and hotels both deploy agentic AI to manage bookings, preferences and logistics in real time. Service becomes seamless and highly personalised. The risk is that when every hotel can deliver this level of service, it ceases to differentiate. Guests become more demanding rather than more satisfied as the baseline rises.
  2. Platform Power Shift: Guests begin using AI to protect themselves from the data collection that hotels rely on for personalisation. Travellers arrive with AI agents that filter what companies can learn about them. Hotels sell rooms to guests whose profiles are effectively invisible, meaning that the commercial logic of knowing your customer becomes structurally harder to sustain.
  3. Customer AI Counterforce: Hotels use AI to enhance their presentation, with properties shown at their photogenic best and guest-facing content optimised. Guests, aware of this, use AI in turn to look through the surface and assess what a property is actually like. Transparency increases not by design but through competitive pressure between AI systems.
  4. Nightmare of Modern Times: AI is used to script and optimise employee behaviour, managing interactions with guests in real time to maximise satisfaction metrics and operational efficiency. The technology works, but the human experience of doing the job deteriorates. A recent example in this direction is Burger King's "BK Assistant", an AI employee coach chatbot named "Patty" integrated into staff headsets to answer operational questions. Because the system also compiles "friendliness scores" from staff interactions with customers, it has opened an immediate debate about the role AI will play in workplace surveillance and where the limit of operational support should be drawn.
  5. Worker Empowering Path: AI takes on what employees find repetitive, dull or physically taxing, freeing human capacity for the interactions that require genuine presence, judgment and care. Guests receive a higher quality of attention, while employees undertake more meaningful work. This is the scenario that points most clearly toward a sustainable model for the sector.

Source: Hotelschool The Hague

The eventual reality will sit somewhere between and across several of these scenarios, and the industry's trajectory is not fixed. Yet to be prepared for the future, it's important to ask which direction an organisation is currently heading and whether that direction was a conscious choice. The organisations that approach AI with that clarity, and the honesty to communicate it openly to their guests and partners, are the ones best placed to build trust with their customers. This is why setting the baseline for AI transparency now is so vital to ensuring that the tourism sector is best positioned to implement AI responsibly, ethically and effectively.

DTTT Framework
Grade: D — AI-Generated (72% AI involvement)
AI Transparency Model v1.1 · DTTT Framework
AI tools used: Claude
Specific models: Claude Sonnet 4.6
AI-supported tasks: Drafting, Editing
AI contribution: Claude was responsible for converting a detailed brief into a structured, publication-ready article across two drafting rounds, synthesising panel discussion content, editorial rules and DTTT tone requirements into flowing prose. It also handled secondary tasks, including the addition of connecting sentences. The final article's structure, narrative arc and the majority of its sentences originated with AI.
Human contribution: The DTTT Knowledge Team authored the brief, which carried significant intellectual load: identifying the thematic priorities, selecting which panellist contributions to highlight, specifying examples and setting precise editorial rules. The Knowledge Team then directed two meaningful revision rounds and completed a substantive final edit, which refined phrasing, restructured several passages and made final judgements on tone throughout.
Productivity: 4/5 — High gain
Delivery Extension: 1/5 — Standard delivery
Combined AI Value: 2/5 — Moderate
Environmental Grade: A — Negligible impact
Scored using the DTTT AI Transparency Framework · Assessments are self-reported
