Comparing the AI Agents Ready to Change DMO Workflows

When we examined the fundamental nature of AI agents and the operational prerequisites for their effective deployment, the core insight was that technological advancement has outpaced organisational readiness. Consequently, foundational elements such as clean data, documented workflows and governance are critical. 

Following the release of several agentic AI solutions in early 2026, we see a divergence in approach among the leading AI developers. These providers differ significantly in their target audiences, system integration strategies and the degree of autonomy they grant their agents. This analysis breaks down these offerings, mapping their capabilities against the specific pressures DMO teams face daily and distinguishing what is usable today from what is still maturing.

What the AI Platforms are Building

While each developer has taken a unique approach to creating agentic solutions, the overarching direction is apparent: AI is evolving from a conversational tool into a resource that can be delegated tasks. The distinctions between these solutions still matter, however, as they reflect different philosophies about who the tools are intended for and how they should integrate with the way teams currently work.

Automating Document Creation

Anthropic’s Claude Cowork, released in January 2026 as a research preview and now included in all paid plans, is built on the premise that AI should operate within the same file systems as its users. When you launch Cowork on the desktop app, you grant it access to a specific folder on your computer.

Once connected, it can autonomously navigate multi-step tasks, ranging from reading sets of documents to updating spreadsheets. The process feels less like prompting a chatbot and more like leaving instructions for a colleague. You describe the objective, step away and return to a finished output waiting in your folder.

What distinguishes Cowork is the directness of its connection to a user’s working environment, bypassing the need for complex enterprise integrations. Simply put, if your work resides in documents on your computer, Cowork can access it. This capability is further expanded through external “connectors” and “skills”. When paired with the Claude Chrome extension, the agent can seamlessly handle tasks requiring web access. Crucially, users can queue multiple objectives simultaneously, allowing the AI agent to execute them in parallel. For individuals and small teams seeking to increase output without adding operational complexity, this represents the lowest barrier to entry for agentic capabilities. 

Tailoring Agent Design for Different Audiences

OpenAI has taken a broader approach than most, releasing agentic capabilities at two different scales in February 2026.

Frontier operates at the enterprise level. Designed for larger organisations with more complex requirements, it connects data sources that would normally remain siloed and creates a shared layer that gives agents a unified understanding of how the organisation operates. Each agent is assigned a defined identity with specific permissions and governance controls. OpenAI has even included evaluation tools that allow agents to refine their performance over time and improve output quality. Frontier is currently available to a limited group of enterprise customers, with broader access expected in the coming months.

Source: OpenAI
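
OpenAI has not published technical documentation for how Frontier defines agent identities, so the sketch below is purely illustrative. It shows, in plain Python, what a scoped agent identity with explicit permissions and approval requirements could look like; every field name is an assumption made for the example.

```python
from dataclasses import dataclass, field

# Illustrative only: Frontier's identity and permission model is not publicly
# documented, so every field name below is an assumption for the example.
@dataclass
class AgentIdentity:
    name: str                                                   # human-readable agent name
    data_sources: list[str] = field(default_factory=list)      # systems the agent may read from
    allowed_actions: list[str] = field(default_factory=list)   # actions it may take unprompted
    requires_approval: list[str] = field(default_factory=list) # actions needing human sign-off

research_agent = AgentIdentity(
    name="market-research-agent",
    data_sources=["visitor-survey-db", "campaign-analytics"],
    allowed_actions=["summarise-findings", "draft-report"],
    requires_approval=["publish-externally", "send-email"],
)
print(research_agent)
```

The point is less the syntax than the principle: each agent’s access and autonomy are declared explicitly rather than inherited by default.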

The Codex app reaches a broader audience. While Codex originally launched in April 2025 as a coding agent, the new desktop app has evolved into a command centre for managing multiple tasks in parallel, each operating within its own dedicated thread. Powered by the GPT-5.3-Codex model, its expanding capabilities in research, analysis and task execution make it increasingly relevant beyond software engineering. Codex is available through paid ChatGPT plans and is temporarily accessible on the ChatGPT Free and Go tiers.

Together, these products give OpenAI agentic offerings at both ends of the spectrum, from individual users managing day-to-day tasks through Codex to large organisations connecting entire business systems through Frontier.

Embedded Agents in Workflows

Microsoft Copilot Studio presents the strategy that may prove most practically relevant for many DMO teams. Rather than introducing a standalone agentic solution that sits outside established workflows, Microsoft has embedded agent capabilities directly into the suite of tools most organisations use daily. Because the agents operate across Outlook, Teams, Word and Excel, this approach removes the need to adopt a new platform or learn a new interface.

Within Copilot Studio, Microsoft distinguishes between different tiers of agents. Some are designed for immediate, reactive interaction, responding to queries much like a traditional chatbot. Others possess greater autonomy, learning from behaviour over time to adapt to the specific nuances of how a team operates. For organisations already deeply integrated into the Microsoft ecosystem, this represents the path of least resistance.

Browser-Based and Workspace Agents

Google has approached agentic AI from two distinct directions. The first is Project Mariner, a Chrome extension powered by Gemini 2.0 that navigates websites autonomously, filling forms, completing multi-step tasks and extracting information across tabs. It operates through an “observe-plan-act” loop, interpreting on-screen content to determine its next move. For teams whose workflows rely heavily on web-based research, this represents a significant shift towards agents being able to operate directly within the browser instead of being restricted to specific files or systems.

Source: Google DeepMind
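
To make the “observe-plan-act” pattern concrete, the minimal Python sketch below shows the shape of such a loop. The observe, plan and act functions are stand-ins defined here for illustration; this is not Google’s implementation.

```python
# Minimal, illustrative observe-plan-act loop. The three functions are stand-ins
# for what a browser agent would do; this is not Google's implementation.

def observe() -> str:
    """Stand-in for reading the current page state (text, links, form fields)."""
    return "example page content"

def plan(goal: str, page_state: str) -> str | None:
    """Stand-in for choosing the next action; returns None once the goal is met."""
    return None  # a real planner would call a model with the goal and the page state

def act(action: str) -> None:
    """Stand-in for executing a browser action such as a click or keystroke."""
    print(f"executing: {action}")

def run_agent(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        page_state = observe()                # observe: capture what is on screen
        next_action = plan(goal, page_state)  # plan: decide the next step towards the goal
        if next_action is None:               # stop once the planner judges the goal complete
            break
        act(next_action)                      # act: perform the step, then loop again

run_agent("find the opening hours of the visitor centre")
```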

The second approach integrates agent capabilities directly into Google Workspace. Through Workspace Studio, non-technical users can build custom agents using natural language, enabling them to work seamlessly across Docs, Sheets and Gmail. For organisations already embedded in Google’s ecosystem, this mirrors Microsoft’s strategy of extending the capabilities of existing tools rather than introducing new platforms. While Project Mariner is currently limited to Google AI Ultra subscribers in the US, Workspace agent capabilities are rolling out more broadly.

Building Custom Agents with APIs

Beyond the dominant consumer AI platforms, the agentic landscape continues to develop. Mistral, for instance, has established a powerful alternative with its Agents API, designed to let developers build custom AI agents with native connectors for code execution, web search and document retrieval. This represents a distinct strategic focus on empowering teams to construct bespoke, high-control solutions rather than relying on off-the-shelf products. 
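
As an indication of what building on the Agents API involves, the sketch below creates a simple research agent with a web search connector using Mistral’s Python SDK. The method names and tool identifiers follow the documented API as we understand it, but they should be verified against the current SDK documentation before use.

```python
# Sketch of creating a custom agent with Mistral's Agents API (Python SDK).
# Verify method names and tool identifiers against the current mistralai docs.
from mistralai import Mistral

client = Mistral(api_key="YOUR_API_KEY")

# Define an agent with a built-in web search connector.
agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="destination-research-agent",
    instructions="Research visitor trends and summarise findings with sources.",
    tools=[{"type": "web_search"}],
)

# Start a conversation and delegate a task to the agent.
response = client.beta.conversations.start(
    agent_id=agent.id,
    inputs="Summarise recent travel demand trends for European city breaks.",
)
print(response.outputs)
```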

While each of these AI agent solutions remains in a state of evolution, the pace of development is undeniable. The capability gap between the tools available today and those of just twelve months ago is striking and demands close attention as they continue to improve. 

The Critical Decisions for DMOs

The gap between the capabilities of agentic tools and the preparedness of most DMOs to use them is significant and warrants an honest acknowledgment. While the technology has advanced rapidly, the foundational groundwork required for its effective deployment has, in most organisations, failed to keep pace. 

Data represents the primary challenge. AI agents are only as capable as the information they can access. If data is siloed across different platforms, an agent cannot deliver the strategic outputs for which it is designed. Consequently, for many DMOs, the immediate priority is not the adoption of a new tool, but the organisation of the foundational data structure underneath it. Destination management adds a further layer of complexity: priorities shift as target markets change and strategic direction evolves. This means an agent set up to follow a specific workflow will likely need reconfiguring whenever these inputs change. Planning for this is essential because the consistency of an agent’s performance depends entirely on the consistency of the organisation’s own structure.

Accuracy is the second major challenge. Since agents can sound confident even when they are wrong, the only reliable way to catch errors is to check their work against verified sources. Independent benchmarks provide a useful reference for general performance, but they cannot replace active review. Teams need to build the habit of spot-checking outputs against known data, especially for content reaching external audiences. 

Source: OpenAI
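
One way to make that habit concrete is a lightweight spot-check script that compares the figures an agent quotes against a verified reference dataset. The sketch below is illustrative only; the metrics, values and tolerance threshold are placeholders, not real statistics.

```python
# Illustrative spot-check: compare figures quoted by an agent against verified data.
# All metrics and values here are placeholders, not real statistics.

verified = {
    "overnight_stays_2024": 1_250_000,   # figure from the DMO's own verified reporting
    "average_length_of_stay": 2.4,
}

agent_claims = {
    "overnight_stays_2024": 1_310_000,   # figure extracted from the agent's draft
    "average_length_of_stay": 2.4,
}

TOLERANCE = 0.01  # flag anything more than 1% away from the verified figure

for metric, claimed in agent_claims.items():
    actual = verified[metric]
    deviation = abs(claimed - actual) / actual
    status = "OK" if deviation <= TOLERANCE else "CHECK"
    print(f"{status}: {metric} agent={claimed} verified={actual}")
```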

There is also a skills dimension that tends to be underestimated. Extracting value from an agent is not as simple as typing a request. The brief must be precise and the task structured to yield reliable results. Building this capability across a team takes a structured and methodical approach, often centred on identifying internal AI champions who can run sessions focused on what works and where common mistakes happen. External workshops, such as those offered by DTTT, can also help teams develop practical skills in areas like prompt design, workflow structuring and output evaluation. Starting early, even through small pilots, gives teams the time to build competence before moving on to broader applications.

Budget also shapes decision-making around agentic AI. Enterprise platforms are typically priced for large organisations, while many DMOs operate on funding models that leave little room for experimentation. Fortunately, consumer-grade tools are becoming capable enough to serve as a starting point, which makes it sensible to begin by testing one well-defined workflow, proving its value and building from there.

When agent outputs fall short, the immediate instinct is to adjust the prompt or swap the tool. A more productive first step is to question the source material itself: tracking failures to identify where errors originate and adjusting document structure accordingly. This focus on making documents easier for AI systems to ingest draws clear parallels with content discoverability in AI search. Ultimately, organising content for internal agents and optimising it for external discoverability are two sides of the same discipline, a shift we previously explored in our work on Generative Engine Optimisation.
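
A simple, shared failure log is often enough to make this review habit stick. The Python sketch below shows one possible structure for such a log; the columns and the example entry are illustrative rather than a prescribed taxonomy.

```python
import csv
from datetime import date

# Illustrative failure log: each entry records what went wrong and the suspected
# cause, so recurring issues with source documents or briefs become visible over time.
FIELDS = ["date", "workflow", "failure_description", "suspected_cause", "fix_applied"]

entries = [
    {
        "date": date.today().isoformat(),
        "workflow": "partner newsletter draft",
        "failure_description": "Agent cited last year's event dates",
        "suspected_cause": "events spreadsheet mixes current and archived rows",
        "fix_applied": "moved archived events to a separate sheet",
    },
]

with open("agent_failure_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(entries)

print("logged", len(entries), "failure(s) to agent_failure_log.csv")
```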

DTTT Take

  • Choose the platform that aligns with how your organisation works: The starting point is evaluating the technology stack your team already operates with. Some agentic tools work with local files, others embed into existing productivity suites and some are designed for complex enterprise infrastructure. Picking the right tool starts with a clear picture of your current setup rather than a comparison of each AI agent’s distinctive features.
  • Invest in skills, not just software: The gap between having access to an agent and getting consistent value from one comes down to how well teams know how to use it. Internal AI champions, shared learning sessions and external training all contribute to building this capability. While the tools keep improving, the organisations that keep pace are those investing in their people alongside the technology.
  • Start now to give your team time to learn: Agentic AI tools are improving rapidly in terms of their accuracy, speed and ease of use. By starting with small pilot projects now, organisations can build the essential expertise needed to keep pace as the technology evolves.
  • Treat governance as a procurement filter: How an AI provider approaches safety, transparency and accountability says as much about the tool as its feature list. When agent outputs reach external audiences, the organisation’s reputation is attached to them regardless of how they were produced. Published governance frameworks should carry significant weight in procurement decisions.
  • Track what goes wrong and ask why: When an agent delivers a poor output, the problem is often in the source material or the brief rather than the tool itself. Building a habit of reviewing failures, understanding where documents need better structure or where instructions need more precision, strengthens every workflow the agent touches.
