At ITB Berlin, we introduced the first iteration of our AI Transparency Framework, a practical tool designed to help organisations disclose AI use clearly, strengthen accountability and communicate honestly with partners and audiences.
AI has changed what organisations produce, how quickly they produce it and where human judgement sits relative to machine output. When those changes go unreported, trust is at risk. With the EU AI Act's obligations phasing in through August 2026, its transparency requirements for general-purpose AI systems establish, for the first time, a regulatory baseline for organisations operating in Europe. Yet the case for AI transparency in tourism does not rest on regulation. It rests on the growing difficulty of knowing precisely where human work ends and AI-generated content begins, which is why a shared framework has become essential.
For most organisations, AI is now so deeply embedded in professional workflows that the question of attribution has become genuinely complex. That ubiquity creates a real problem for anyone trying to account for AI use honestly. Ask a content team whether they used AI for a recent report and you will get a complicated answer: the output reflects a collaboration whose proportions are rarely examined and almost never disclosed.
That ambiguity is the starting point for the DTTT AI Transparency Framework, first announced at ITB Berlin. The purpose of this framework is to provide a clear and streamlined mechanism for organisations to be open with their partners and visitors about how they use AI. Developed as a professional tool for ensuring AI accountability and evaluating the real efficiencies of automation, the framework gives organisations a structured, honest way to record and communicate AI involvement in every task and project.
Research by Anthropic suggests that the share of work AI could in principle support is substantially higher than what is actually deployed today. That gap between capability and deployment is exactly why disclosure standards need to be set now: they chart the path towards ethical AI use and a common standard for disclosing it.

The framework is open and versioned, published under a Creative Commons Attribution licence. It comprises four independently versioned models — the AI Transparency Model (v1.1), the Productivity and Delivery Extension Model (v1.2), the AI Environmental Impact Model (v0.6) and the AI Content Integrity Model (v0.1) — each designed to evolve on its own release cycle as AI capabilities and disclosure standards develop and industry governance strengthens. A governing committee of industry representatives will shape how each model develops, with working groups for each model open to join. That collective input is central to the framework's lasting credibility.
Tourism is built on the promise of authentic experiences. Visitors research destinations, read itineraries, absorb recommendations and make decisions based on content they assume reflects genuine knowledge and human editorial judgement. When AI generates that content without disclosure, it raises an immediate practical concern about accuracy. An AI-generated itinerary may be plausible without being accurate. It may describe a walk between two points as taking 20 minutes when the real journey takes an hour, recommend a restaurant that has closed or reference a festival scheduled for a different week. Without disclosure, readers have no way to know that verification matters more than usual.
Undisclosed AI content in this context amounts to misrepresentation. A clear signal that AI contributed to a piece tells the reader to weigh it accordingly, and that is a meaningful act of professional honesty. Disclosed AI content gives readers the information they need to engage on appropriate terms. Without that signal, they have no basis for judging how trustworthy the information is or how much verification it requires.
As a people-centric industry, tourism has a distinctive responsibility when it comes to balancing AI use with authenticity. It promotes places, cultures and people to global audiences, shaping the decisions of millions of travellers and the livelihoods of the communities those travellers visit. This is why the sector's voice matters so much: the integrity with which it communicates and the accuracy of what it publishes are the foundation of authentic, honest representation for destinations. Influence on that scale makes AI transparency in tourism more fundamental than a content ethics issue.
This applies with equal force to professional relationships with industry partners and external agencies. Knowing that a senior consultant developed an analysis over three days of original research is different from knowing that a team member prompted an AI tool and reviewed the output in an afternoon. Both are legitimate ways of working, but neither should be invisible.
This reflects a growing consensus in AI ethics governance. UNESCO's Recommendation on the Ethics of Artificial Intelligence establishes transparency and explainability as core principles, highlighting that people should be able to understand when and how AI has shaped the information they receive. Disclosed AI content, even imperfect content, respects that right. The DTTT's AI Transparency Framework applies that principle at the level of professional deliverables, where the stakes are just as real as in conversational contexts.
The difficulty of distinguishing AI-generated content from human-produced content is growing rapidly. Research by Runway found that fewer than 10% of people could tell the difference between real and AI-generated videos showing the same frame. As the perceptual gap closes, the burden on disclosure increases. Audiences cannot make informed judgements about content they cannot distinguish.
A values-based approach to AI adoption means asking whether the way AI is being used reflects the standards the industry wants to set, alongside whether it improves efficiency. With professional relationships in tourism depending on honest communication about how work is produced, building trust through transparency is the practical expression of that commitment. At the same time, as people grow more sceptical about the true origins of the information they read, AI transparency helps to protect organisational reputation. Undisclosed AI use discovered after the fact is considerably more damaging than AI use communicated clearly at the outset.
The DTTT AI Transparency Framework provides a shared language, with a structured assessment process and disclosure format that any organisation can adopt without specialist technical knowledge. The inaugural release comprises four independently versioned but interconnected models: the AI Transparency Model, the Productivity and Delivery Extension Model, the AI Environmental Impact Model and the AI Content Integrity Model.
The four models converge in the Transparency Report Card, a structured disclosure document produced at the end of any AI-assisted engagement and shared alongside the deliverable itself. This is the mechanism that makes transparency visible and consistent, giving a clear account of what AI contributed and its impact.
The Transparency Report Card captures the scores from each model, every AI model used in the work, the specific tasks AI contributed to (drawn from a standard list including research, drafting, data analysis, editing, structure and code generation), a brief summary of the AI and human contributions respectively, and a short transparency note for inclusion in the deliverable footer. By running a standard prompt at the end of every project, teams can produce consistent Transparency Report Cards, making the practice easy to integrate into organisational culture and a routine part of workflows.
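To make the structure concrete, the fields described above might be captured in a simple record like the following. This is a minimal sketch: the class, field names and footer-note wording are illustrative assumptions, not the framework's published schema.

```python
from dataclasses import dataclass

# Illustrative sketch of a Transparency Report Card record.
# Field names are inferred from the description above, not official.
@dataclass
class TransparencyReportCard:
    model_scores: dict              # score per framework model
    ai_models_used: list            # every AI model used in the work
    ai_tasks: list                  # tasks AI contributed to (research, drafting, ...)
    ai_contribution_summary: str    # brief summary of AI contribution
    human_contribution_summary: str # brief summary of human contribution

    def footer_note(self) -> str:
        """A short transparency note for the deliverable footer."""
        models = ", ".join(self.ai_models_used)
        tasks = ", ".join(self.ai_tasks)
        return (f"AI assistance ({models}) was used for: {tasks}. "
                "Human review and editorial judgement applied throughout.")

card = TransparencyReportCard(
    model_scores={"AI Transparency Model": "B"},
    ai_models_used=["Claude"],
    ai_tasks=["research", "drafting"],
    ai_contribution_summary="First-draft research synthesis.",
    human_contribution_summary="Structure, verification and final edit.",
)
print(card.footer_note())
```

A record like this is what a standard end-of-project prompt could populate, keeping disclosures consistent across teams.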
Created as an industry-wide resource, the completed assessment generates an embeddable iframe HTML disclosure in any of four formats.
The AI Transparency Card Toolkit, published alongside the framework, makes the self-assessment process accessible to any team member. It covers all four models in a single graded flow, and users can choose which models to apply. While the framework applies to any AI-assisted work, the right combination of models and the appropriate disclosure format vary by context. With this in mind, the "how to apply the framework" guide covers six common scenarios, suggesting the applicable models, the recommended workflow and the disclosure format that fits the output.
An AI Ethics Consideration Tool has also been developed as a pre-use step to help teams think through the implications of an AI task before they begin. Five questions surface the ethical, legal and data considerations relevant to a specific AI task and produce a routing recommendation and a tailored list of considerations to work through before proceeding with AI use.
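As a sketch of how such a pre-use routing step might work, a handful of yes/no answers can be mapped to a recommendation. The questions, thresholds and routing labels below are hypothetical illustrations, not the content of the DTTT tool itself.

```python
# Hypothetical sketch of a question-based pre-use routing check.
# Questions and thresholds are illustrative, not the DTTT tool's.
QUESTIONS = [
    "Does the task involve personal or sensitive data?",
    "Will the output be published without human review?",
    "Could errors cause legal or reputational harm?",
    "Does the task represent real people, places or cultures?",
    "Is the data source's licence or consent status unclear?",
]

def route(answers: list) -> str:
    """Map yes/no answers (True = concern flagged) to a recommendation."""
    flags = sum(bool(a) for a in answers)
    if flags == 0:
        return "proceed"
    if flags <= 2:
        return "proceed with documented safeguards"
    return "escalate for review before AI use"

print(route([False] * 5))                        # -> proceed
print(route([True, True, False, False, False]))  # -> proceed with documented safeguards
print(route([True] * 5))                         # -> escalate for review before AI use
```

The point of a step like this is simply to force the conversation before work begins, so that the considerations are recorded rather than assumed.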
Standards for responsible AI use in tourism are inevitable. For DMOs, the logical step is to apply the same rigour to AI as they do to their reputation for accuracy. By insisting that agencies and partners follow suit, they can maintain their authority even as the digital world becomes increasingly AI-driven. Doing so demands reporting structures intuitive enough to sit inside real workflows. By taking those workflows into account, the DTTT AI Transparency Framework makes the case for the industry to lead the process.
The DTTT AI Transparency Framework is published under a Creative Commons Attribution 4.0 licence. Framework explanations and the AI Transparency Toolkit are available at https://ai.thinkdigital.travel/.