DTTT Framework Documentation
AI Transparency Framework
How DTTT records, measures and communicates AI involvement across all advisory and event deliverables — and why transparency in AI use matters for the destinations and organisations we work with.
The DTTT AI Transparency Framework comprises four interconnected but independently versioned models. Each addresses a different dimension of AI use. Together they form a complete picture: what AI contributed, how much time it saved, what it cost the environment, and whether the content produced was ethically sound.
- **AI Transparency Model:** Classifies the balance of human and AI contribution to any deliverable using a five-point A–E scale with a 0–100 percentage figure.
- **Productivity Model:** Measures efficiency gains (Productivity Score) and delivery capability extension (Delivery Extension Score). Both scores are external and member-facing.
- **Environmental Model:** An indicative A–E grading scale for assessing and communicating the environmental footprint of AI use. Under active development.
- **Integrity Model:** A three-axis risk classifier for the ethical dimensions of AI-generated or AI-manipulated content. Produces an Integrity classification and a machine-readable disclosure code.
AI Transparency Toolkit
Score your work and generate a disclosure card
Use the free toolkit to grade any piece of work against the framework's models and produce an embeddable disclosure card for any article, report or webpage.
Use the toolkit ›
Purpose
DTTT uses AI as a professional tool across advisory and event work. We track that usage systematically, score it honestly and report it transparently.
The framework serves two purposes. The first is accountability: members and partners deserve to know how their deliverables are produced. The second is value demonstration. AI changes what is achievable within a fixed number of hours. A 12-hour advisory engagement conducted with AI can produce output equivalent to 25 or more hours of traditional consulting work, at a depth that would not otherwise be feasible. That difference is real and material. It is invisible unless we make it explicit.
DTTT uses AI the way a researcher uses a database or an analyst uses a spreadsheet. The framework communicates this with confidence. Scores are recorded honestly. The credibility of the system depends on that honesty.
Governance
All four models are published under a Creative Commons Attribution 4.0 licence. They are developed openly and versioned publicly. A governing committee of DTTT members and industry representatives shapes how the framework evolves. To express interest in joining, contact info@thinkdigital.travel.
Model 1 of 4
AI Transparency Model
Classifies the overall balance of human and AI contribution to any deliverable using a five-point A–E scale with a percentage figure. Two declared extremes — Human Created and Fully AI — sit outside the grading scale.
The full scale
The scale has two declared extremes and five graded bands. The declared extremes sit outside the A–E grading system and are recorded on the Report Card without a letter grade.
| Grade | Label | Range | Definition and typical use |
|---|---|---|---|
| A | Human Led | 1–10% | AI influence is negligible. AI may have been used briefly as a sounding board or for minor phrasing suggestions, but the work is substantially human in origin. Sessions where AI was briefly consulted beforehand; reports where AI informed a single background section. |
| B | AI-Assisted | 11–36% | AI used for research, ideation or drafting support. The human retains full creative and strategic direction throughout. Keynotes where AI supports background research; strategic papers where AI assists with literature review. |
| C | Collaborative | 37–63% | Balanced human and AI contribution. AI handles research, initial drafting and structural suggestions while the human directs strategy, edits and refines. The most common grade for advisory reports. Standard advisory reports, member briefings, strategy documents. |
| D | AI-Generated | 64–89% | AI-led production with human review, quality control and final judgement. The human shapes direction but AI produces the bulk of the content. Data-heavy research compilations, rapid benchmarking requests, large-scale case study catalogues. |
| E | AI Led | 90–99% | Minimal human involvement. Output is substantially unedited AI generation; human contribution is limited to brief, direction and light review. Automated template outputs; AI-generated content reviewed without significant editing. |
Grades A and E are deliberately narrow — both claims are strong and should require genuine confidence. Grade C is the widest because collaborative work is the most common use case. The two declared extremes sit outside the letter grade scale entirely.
Assess your work
Select the option that best describes this piece of work. If AI played some part, use the grader to set a precise percentage.
The Transparency Report Card
At the end of every engagement, the grade and supporting narrative are compiled into a Transparency Report Card. Two formats are available: the Standard Report Card for all deliverables, and the Detailed Declaration Card for significant strategic or public-facing work.
The Detailed Declaration Card includes deliverable title, organisation, prompt quality notes, source documents provided and optional scores from the Productivity and Environmental models. It is designed to stand at the front of a deliverable as a formal record. An embeddable version is available in the AI Transparency Card toolkit.
| Field | Description |
|---|---|
| AI Involvement Grade | A–E with percentage, or Human Created / Fully AI declaration. |
| Models Used | Every AI model used: Claude, ChatGPT, Gemini, Perplexity, Midjourney, DALL-E and others. |
| AI-Supported Tasks | From the standard list: Research, Drafting, Data analysis, Brainstorming, Structure, Editing, Visual mockups, Prompt engineering, Translation, Fact-checking, Code generation, Presentation design. |
| AI Contribution Summary | 2–3 sentences on what AI specifically contributed. |
| Human Contribution Summary | 2–3 sentences on strategic direction, prompt quality, source documents provided and editorial decisions. |
| Transparency Note | A single sentence for the footer of the deliverable itself. |
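The Report Card fields above map naturally onto a small record type. The following is a hedged sketch in Python; the class name, field names and footer format are illustrative assumptions, not part of the published framework:

```python
from dataclasses import dataclass

@dataclass
class ReportCard:
    """Standard Transparency Report Card fields, per the table above."""
    grade: str                     # A-E with percentage, or a declared extreme
    models_used: list[str]         # every AI model used, e.g. ["Claude", "Perplexity"]
    ai_supported_tasks: list[str]  # drawn from the standard task list
    ai_contribution: str           # 2-3 sentences on what AI contributed
    human_contribution: str        # 2-3 sentences on direction, prompts, editing
    transparency_note: str         # single sentence for the deliverable footer

    def footer(self) -> str:
        # Render the one-line disclosure for the deliverable's footer
        # (an assumed format; the framework specifies only the note itself).
        return f"{self.transparency_note} (AI involvement: {self.grade})"
```

A structured record like this makes the card easy to validate, store alongside the deliverable and render as an embeddable disclosure.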
The assessment prompt
You are assessing AI involvement in a completed deliverable using the DTTT AI Transparency Model v1.1. Apply the following rules exactly.
Scope: generative AI tools only. Spellcheck, grammar tools, search engines and predictive text do not count. Prompt writing and source document selection are human contributions.
Grade boundaries: Human Led (A) 1–10% / AI-Assisted (B) 11–36% / Collaborative (C) 37–63% / AI-Generated (D) 64–89% / AI Led (E) 90–99%. Human Created = 0% declaration. Fully AI = 100% declaration.
Assess only the final deliverable. Do not weight formatting unless AI was used to design it. Focus on content authorship, research origin, strategic direction and quality of human editing. Produce: grade with justification, models used, task list, AI contribution summary (2–3 sentences), human contribution summary including prompt quality and source documents, transparency note. Be accurate.