The DTTT AI Transparency Framework in Detail

The rationale for AI transparency in tourism is well established. In the previous article, we announced the launch of our DTTT AI Transparency Framework, which is intended as a bottom-up approach to AI transparency that is manageable for businesses of all sizes. This article explores the inputs into each of the four independently versioned models that make up the framework. They can be applied individually or used together as a complete disclosure approach. What follows is a detailed account of each model, how it works and what it requires of the organisations that adopt it.

Source: DTTT AI Transparency Framework

Transparency Model

Much of the conversation about AI transparency in tourism focuses on the visitor-facing dimension. That focus is understandable, but it captures only part of the picture. Equally significant is the question of transparency within the industry's own commercial relationships. DMOs commission agencies to produce work that informs major decisions and have a reasonable expectation of knowing where human expertise ends and AI-generated content begins.

This distinction is relevant to how the work should be interpreted, verified and built on. A research report in which AI compiled and synthesised secondary sources under human editorial direction is a different kind of deliverable from one researched and drafted entirely by a subject matter expert. Both can be high-quality and have legitimate places in a working relationship. What matters is knowing which one has been received. A shared measurement framework makes these conversations easier to have. It gives both parties a common vocabulary and removes the awkwardness of asking whether AI was used and, if so, how much.

For DMOs in particular, the high degree of public accountability attached to their work means that understanding the role AI played in producing that work is a central aspect of responsible governance. Here the Transparency Model provides a clear basis for setting expectations and monitoring professional standards.

The Transparency Model assigns any deliverable a single letter grade from A to E, accompanied by a percentage figure indicating the level of AI contribution on a scale from 1% to 99%. Two declared extremes sit outside this scale: "Human Created" (0%) and "Fully AI" (100%) are direct declarations recorded on the Report Card without a letter grade, providing a distinct mechanism for the two absolute ends of the spectrum where no percentage nuance is needed.

Source: DTTT AI Transparency Framework

Grades A and E sit at the narrow extremes of the scale, marking a predominantly one-sided way of working. The middle grades of B, C and D reflect a sliding spectrum in the balance between human and AI contribution. The grade communicates how the work was produced, so the people receiving it can engage with it on that basis. A grade C deliverable produced by a skilled team with strong direction may be more rigorous and valuable than a grade A deliverable produced quickly.
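
As a rough illustration of the grading mechanics described above, the sketch below assembles a Report Card entry from a declared AI-contribution percentage. The grade boundaries and the direction of the scale (A read here as predominantly human, E as predominantly AI) are illustrative assumptions, not the framework's published thresholds.

```python
# Illustrative sketch of the Transparency Model's Report Card logic.
# Grade boundaries and scale direction are assumptions for illustration;
# the published framework defines the actual thresholds.

def report_card_entry(ai_contribution_pct: int) -> str:
    """Return a Report Card entry for a declared AI contribution (0-100%)."""
    if ai_contribution_pct == 0:
        return "Human Created"   # declared extreme, no letter grade
    if ai_contribution_pct == 100:
        return "Fully AI"        # declared extreme, no letter grade
    # Hypothetical boundaries: A = minimal AI use ... E = predominantly AI.
    for upper, grade in [(20, "A"), (40, "B"), (60, "C"), (80, "D"), (99, "E")]:
        if ai_contribution_pct <= upper:
            return f"Grade {grade} ({ai_contribution_pct}% AI contribution)"

print(report_card_entry(0))    # Human Created
print(report_card_entry(35))   # Grade B (35% AI contribution)
print(report_card_entry(100))  # Fully AI
```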

Productivity and Delivery Extension Model

AI's most significant professional impact is often invisible in the final output. A research report may look identical whether it took two days or eight hours to produce. The Productivity and Delivery Extension Model makes that difference visible. For teams operating under pressure, documenting that AI has freed significant hours creates the conditions for a healthier work environment. When AI handles routine work, employees have more capacity for tasks that require genuine judgement, contextual knowledge and human relationships. Those are precisely the tasks that AI should not be relied upon to manage.

The model measures two distinct dimensions of AI value, each scored from 1 to 5:

  1. The AI Productivity Score captures time efficiency.
  2. The Delivery Extension Score captures how far AI extended what a team could actually produce.

Source: DTTT AI Transparency Framework

The two scores combine through a lookup matrix to produce a Combined AI Value score from 1 (Limited) to 5 (Exceptional). The matrix rewards strong performance across both dimensions: a combined score of 4 or 5 requires both individual scores to be high. To address the known ceiling effect, whereby combinations of 5/4, 4/5 and 5/5 all produce a combined score of 5, both individual scores remain visible alongside the combined score, preserving the detail.
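
Since the matrix itself is not reproduced in this article, the sketch below uses an illustrative stand-in that satisfies the constraints described above: a combined score of 4 or 5 requires both individual scores to be high, and 5/4, 4/5 and 5/5 all resolve to 5. The intermediate labels are also assumptions; only "Limited" and "Exceptional" appear in the framework text.

```python
# Illustrative stand-in for the Combined AI Value lookup matrix. Cell values
# are assumptions; only the constraints from the framework text are encoded:
# a combined 4 or 5 requires both scores to be high, and 5/4, 4/5 and 5/5
# all produce 5.
COMBINED_AI_VALUE = [
    # delivery:  1  2  3  4  5
    [1, 1, 2, 2, 3],  # productivity 1
    [1, 2, 2, 3, 3],  # productivity 2
    [2, 2, 3, 3, 3],  # productivity 3
    [2, 3, 3, 4, 5],  # productivity 4
    [3, 3, 3, 5, 5],  # productivity 5
]

# Only 1 ("Limited") and 5 ("Exceptional") are named in the framework text;
# the middle labels are hypothetical placeholders.
LABELS = {1: "Limited", 2: "Modest", 3: "Solid", 4: "Strong", 5: "Exceptional"}

def combined_ai_value(productivity: int, delivery: int) -> str:
    score = COMBINED_AI_VALUE[productivity - 1][delivery - 1]
    return f"{score} ({LABELS[score]})"

print(combined_ai_value(5, 4))  # 5 (Exceptional)
print(combined_ai_value(2, 5))  # 3 (Solid)
```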

For organisations that want to translate productivity scores into concrete business terms, a Productivity Gain Calculator is available alongside the framework. It takes a task's estimated duration without AI, the day rate or salary equivalent and the Productivity Score, and returns estimated hours saved, cost equivalent and an annualised saving figure. Results are presented as transparent estimates, encouraging organisations to record the assumptions behind any aggregate productivity claim.
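
A minimal sketch of that arithmetic is shown below. The mapping from Productivity Score to fraction of time saved is a hypothetical placeholder (the published calculator defines its own conversion), as is the assumption of an eight-hour working day.

```python
# Minimal sketch of a productivity gain calculation. The score-to-fraction
# mapping and the 8-hour day are hypothetical placeholders; the DTTT
# Productivity Gain Calculator defines its own conversion.
TIME_SAVED_FRACTION = {1: 0.05, 2: 0.15, 3: 0.30, 4: 0.50, 5: 0.70}

def productivity_gain(hours_without_ai: float, day_rate: float,
                      productivity_score: int, tasks_per_year: int = 1) -> dict:
    """Estimate hours saved, cost equivalent and annualised saving."""
    hours_saved = hours_without_ai * TIME_SAVED_FRACTION[productivity_score]
    cost_saved = hours_saved * (day_rate / 8)  # hourly rate from day rate
    return {
        "hours_saved": round(hours_saved, 1),
        "cost_equivalent": round(cost_saved, 2),
        "annualised_saving": round(cost_saved * tasks_per_year, 2),
    }

# A 16-hour task at a 600/day rate, scored 4 and repeated monthly:
print(productivity_gain(16, 600, 4, tasks_per_year=12))
# {'hours_saved': 8.0, 'cost_equivalent': 600.0, 'annualised_saving': 7200.0}
```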

Environmental Impact Model

Sustainability reporting is an established expectation for DMOs, yet the energy cost of AI use sits almost entirely outside current reporting frameworks. The Environmental Impact Model gives DMOs and their partners a practical starting point for factoring AI usage into sustainability reporting, without waiting for industry-wide data standards that remain some years from maturity. The model assigns an A to E grade based on three factors: task type, model category and usage intensity.

Task type is the most significant driver of footprint. Text-based tasks (writing, research, editing and analysis) carry the lowest energy cost. According to research published by Epoch AI, a standard GPT-4o text query consumes approximately 0.3 watt-hours, a figure consistent with Google's own disclosure of a median of 0.24 watt-hours per Gemini text query. Estimates suggest, however, that image generation is roughly 60 times more energy-intensive per output than a text query, and video generation around 2,000 times. Because current industry environmental impact disclosures use incompatible methodologies, it is nearly impossible to account precisely for the energy intensity of different AI providers. This is why the DTTT model adopts a band-based approach rather than absolute figures.

Model category is the second factor. On-device or lightweight tools carry the lowest footprint; standard cloud assistants, such as Claude, ChatGPT and Gemini, sit in the middle range for everyday tasks; and frontier or large models, including general assistants operating in their most capable mode, carry a meaningfully higher footprint per query. The third factor, usage intensity, adjusts the base grade one step in either direction. Video generation is the exception: given its significant energy cost, its grade does not decrease for light use.
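
Read as logic, the grading might look like the sketch below. The base grades per task type and model category are placeholder assumptions; the one-step intensity adjustment and the video exception follow the description above.

```python
# Illustrative sketch of the Environmental Impact Model's grading logic.
# Base grades are placeholder assumptions; the one-step usage-intensity
# adjustment and the video exception follow the framework's description.
GRADES = ["A", "B", "C", "D", "E"]

BASE_GRADE = {  # hypothetical base grades by (task type, model category)
    ("text", "lightweight"): "A", ("text", "standard"): "B", ("text", "frontier"): "C",
    ("image", "standard"): "C", ("image", "frontier"): "D",
    ("video", "standard"): "D", ("video", "frontier"): "E",
}

def environmental_grade(task: str, model: str, intensity: str) -> str:
    """intensity is 'light', 'typical' or 'heavy'."""
    idx = GRADES.index(BASE_GRADE[(task, model)])
    if intensity == "heavy":
        idx = min(idx + 1, len(GRADES) - 1)
    elif intensity == "light" and task != "video":
        idx = max(idx - 1, 0)  # video grades never decrease for light use
    return GRADES[idx]

print(environmental_grade("text", "standard", "light"))   # A
print(environmental_grade("video", "frontier", "light"))  # E (no reduction)
```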

Source: DTTT AI Transparency Framework

In practical terms, most day-to-day work on standard text tasks sits at grade A (Low intensity) or B (Low-moderate intensity), well within the range of normal professional digital activity. Nevertheless, it is important to remember that the Environmental Impact Model remains an indicative tool at this stage. Grade boundaries will be reviewed regularly and tightened as first-party disclosure data from AI providers improves under EU AI Act requirements.

Content Integrity Model

The Content Integrity Model is intended to address whether AI use is ethically sound. As DMOs increasingly commission AI-generated imagery, animated photography and synthetic personas for destination marketing, a distinct set of questions arises around consent, authenticity and what audiences are told. A transparency grade confirms that AI was involved, but it does not clarify whether the people depicted consented to their image being used in that way, or whether travellers have any means of knowing a persona is not a real person. The Content Integrity Model was developed to address exactly that gap.

The model operates as a three-axis risk classifier, assessing what AI actually did to the content, what permissions were sought from any real individuals depicted and what disclosure was given to the audience. From this, the model produces an 'integrity classification' and a 'machine-readable disclosure code'. The code works in the same way that Creative Commons licences standardise rights and usage terms: a short, consistent label that travels with the asset and communicates its conditions at a glance. Just as a Creative Commons attribution code embedded in file metadata tells anyone who receives the asset exactly how it may be used, a DTTT disclosure code in the format AI-[type] · CS-[consent] · DC-[disclosure] tells anyone who receives the content exactly how it was produced and on what ethical basis.
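
As an illustration of how such a code could be composed and read programmatically, the sketch below uses hypothetical axis values ("generated", "granted", "caption"); the framework defines the actual vocabulary for each axis.

```python
# Sketch of composing and parsing a disclosure code in the published format
# AI-[type] · CS-[consent] · DC-[disclosure]. The axis values used in the
# example are hypothetical placeholders.

def compose_disclosure_code(ai_type: str, consent: str, disclosure: str) -> str:
    return f"AI-{ai_type} · CS-{consent} · DC-{disclosure}"

def parse_disclosure_code(code: str) -> dict:
    parts = dict(part.split("-", 1) for part in code.split(" · "))
    return {"ai_type": parts["AI"], "consent": parts["CS"], "disclosure": parts["DC"]}

code = compose_disclosure_code("generated", "granted", "caption")
print(code)                         # AI-generated · CS-granted · DC-caption
print(parse_disclosure_code(code))  # {'ai_type': 'generated', 'consent': 'granted', ...}
```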

The implications for DMOs producing or commissioning any piece of AI-generated or AI-manipulated content are substantive. Every creative choice involving AI has an integrity dimension that sits alongside the editorial and aesthetic one. Choosing to animate real photography of destination visitors, to deploy a synthetic persona in social content or to enhance imagery using AI tools carries obligations around consent that standard photography releases do not cover, and around disclosure that caption-level acknowledgement may or may not satisfy depending on what the content shows. The Content Integrity Model gives marketing teams a structured way to assess those obligations before publication and keep a clear record of the decisions made.

The four Integrity classifications (Clear, Caution, High Risk and Not Recommended) act as a risk assessment. Instead of measuring the degree of AI use, the model assesses whether the ethical conditions for responsible publication have been met. A piece of content carrying a Not Recommended classification has an ethical problem that disclosure alone cannot fix. If the people depicted did not consent to AI manipulation of their image, telling the audience the content was made with AI does not resolve that failure. Consent and disclosure are both necessary, but they are different obligations. The DTTT AI Transparency Framework treats them as such.
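
Only one decision rule is stated explicitly above: missing consent for AI manipulation of a real person's image cannot be repaired by disclosure. The sketch below encodes that rule; the remaining branches are assumptions for illustration.

```python
# Partial sketch of the integrity classification. Only the first rule is
# taken from the framework text (missing consent means Not Recommended,
# regardless of disclosure); the other branches are illustrative assumptions.
def classify_integrity(depicts_real_people: bool, consent_obtained: bool,
                       disclosed_to_audience: bool) -> str:
    if depicts_real_people and not consent_obtained:
        return "Not Recommended"  # disclosure alone cannot fix missing consent
    if not disclosed_to_audience:
        return "High Risk"        # assumed: undisclosed AI content is high risk
    if depicts_real_people:
        return "Caution"          # assumed: consented depictions still need care
    return "Clear"

print(classify_integrity(True, False, True))   # Not Recommended
print(classify_integrity(False, True, True))   # Clear
```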

Source: DTTT AI Transparency Framework

This framework is a first iteration. It is designed to evolve through the collective input of the destinations and businesses that use it in practice. X. Design Week in Brussels this June will bring together destination professionals to work on the framework's development and identify where future iterations should focus. Over time, consistent adoption will also generate sector-level evidence.

For organisations that want a sustained role in shaping the framework, the DTTT AI Committee will be convened at XDW, offering ongoing governance participation. Members contribute to the review of grade boundaries, the regular enhancement of the four models and the broader standards agenda for AI transparency in tourism. A Research Partnership Programme, open to independent academic institutions, will support model validation and productivity impact studies using the structured dataset that consistent framework adoption creates.

The framework belongs to the industry and is designed to evolve through collective adoption. Working groups for each model, a cross-model governance committee and a case study programme are all open for organisations to join.

The DTTT AI Transparency Framework is published under a Creative Commons Attribution 4.0 licence. Framework explanations and the AI Transparency Card are available at https://ai.thinkdigital.travel.
