Published 16 April

6 mistakes that marketing teams make when introducing AI agents

6 min read time
The bottleneck has shifted. Most marketing teams are past asking whether to use AI agents — the new question on everyone's mind is why they're not working like they're supposed to. The answers are a whole lot less technical than you think.

At our recent Agents in Action event, Daniel Hulme (Chief AI Officer @ WPP) — who has spent 25 years building and deploying AI systems at scale — made an observation that stuck. "We get excited about technology," he said, "and then we tend to apply that technology to solving the wrong problem."

It's a pattern that plays out across marketing organizations of every size. Not because teams lack ambition or resources, but because the real discipline of agent deployment (the hard, unglamorous work of identifying the right problems, testing rigorously, and building governance before you need it) rarely features in the launch announcement.

Below, you'll find 6 mistakes that we see marketing teams make when introducing AI agents. Some are drawn from Daniel's talk, while others come from the wider pattern of how organizations are actually navigating this transition.

The good news? All of them are avoidable.

  1. Starting with the tool, not the problem

    This is the foundational one, and it happens at every level of seniority. A new agent capability gets demoed, something clicks, and the question immediately becomes: where can we apply this? The problem is that working backwards from a tool almost always leads you to the wrong destination.

    The alternative isn't slow or overly cautious — it's just a different starting point. What's the friction that's actually costing you? Where are the moments you're missing, the campaigns you can't run at the pace you want, the content variants you can't test? Start there. Then ask whether an agent is genuinely the right solution, and whether you have the data and capability to deploy it well.

    "Start with the problem and work backwards. Ask yourself: do we have the right capability, knowledge and data to address it?" Daniel Hulme, Chief AI Officer @ WPP


    Teams that reverse this — that start with a clearly named problem and build toward a solution — tend to find that their agents have better-defined inputs and outputs, are easier to evaluate, and are far more likely to deliver something measurable.
  2. Treating deployment as a launch, not a release cycle

    In traditional software development, around 80% of the total effort goes into testing. Not building — testing. That ratio doesn't change when you're deploying AI agents. What changes is that most teams don't realise it.

    There's a tendency to treat agent deployment the way you'd treat a content launch: plan, build, ship, move on. But agents operating inside marketing workflows — personalising content, routing briefs, making recommendations — are closer to software releases than campaign assets. They need to be tested, monitored, and iterated on continuously.

    Daniel was direct about the risk: "Companies will deploy AI agents and they're not going to test them. They're not going to realise how much effort is involved in making sure they're safe and responsible." The teams getting this right aren't just building agents. They're building the governance layer around them; the structure that lets them move fast with confidence rather than just move fast.

  3. Only planning for failure, and not for success

    Teams do QA. They define what failure looks like and build in safeguards. What they rarely do is model for what happens when an agent performs exactly as intended — and causes a problem anyway.

    Daniel calls this the "goes very right" problem, and it's one of the more underappreciated risks in AI deployment. His example is worth sitting with: an agent optimising marketing campaign targeting with perfect precision could, over time, create a world of like targeting like — audiences so tightly defined that they start to reinforce bias and collapse the creative range of your marketing.

    "You have to think about the consequences of AI going very right." Daniel Hulme, Chief AI Officer @ WPP

    This isn't an argument for constraining agent capability. It's an argument for setting boundaries on it. Success metrics for agents need a ceiling, not just a floor. What does "too optimised" look like? What outcomes would tell you the agent is working in a way that creates downstream risk — even if the headline numbers look good?

  4. Hiring for AI specialism when breadth is the multiplier


    The instinct when building AI capability in your organization is to look for specialists: people who know the tools, understand the models, speak the language. That's not wrong, exactly.

    But Daniel's observation from 25 years of deployment work points to a less obvious truth: the people who get the most out of AI agents aren't always the most technically fluent. They're the most contextually rich.

    "People that have a broad set of skills or knowledge are able to make better use of the AI," he said. Someone with a background in art history, anthropology, or geopolitics can surface references, framings, and cultural resonances a narrowly trained specialist might miss entirely. The AI already holds that knowledge; the human's job is to know where to look for it.

    For marketing teams, this has a practical implication: the person you put in charge of orchestrating your agent workflows might be your most generalist, most creatively promiscuous thinker... not your most technical hire. Both matter, but don't mistake one for the other.
  5. Measuring impact in 'time saved' rather than work unlocked


    Time saved is a tidy metric (and one we all love). It's easy to report, easy to visualize in a business case, but often misleading as a measure of agent value in a marketing context.

    As Julia Maguire noted during the session: in marketing, there is never a shortage of work. Efficiency gains don't produce slack — they produce capacity for more ambition.
    The question shouldn't just be "how many hours did this save?" but "what did we do that we couldn't do before?"

    As Daniel said, "There are millions of moments right now that are being missed where brands are failing to put their products in front of the right people". The real case for AI agents in marketing isn't operational efficiency, it's coverage; the campaigns that didn't run, the content variants that didn't get tested, the audiences that weren't reached. Measure for those, and the value proposition changes entirely.

  6. Waiting until something goes wrong to build governance


    Governance has an image problem. It reads as a brake rather than an accelerant: something you put in place after the lawyers get involved, or after an incident forces the conversation.

    In practice, the opposite is true: teams with clear governance structures deploy faster, not slower, because they've already thought through the questions that would otherwise stop them in their tracks.

    Daniel outlines four questions he applies before any AI deployment at WPP:

    ✅ Is the intent appropriate?
    ✅ Are the algorithms explainable?
    ✅ Have the agents been properly verified and tested?
    ✅ And (back to mistake #3), what happens if this goes very right?

    These aren't compliance checkboxes. They're a thinking framework that makes deployment decisions cleaner.

    Fear not, marketers. This doesn't require a dedicated AI ethics committee. It just needs a short list of questions that get asked consistently... before the agent goes live, not after the first thing breaks.

Long story short: The teams getting this right aren't the ones with the best tools

They're the ones that treated agent deployment as a discipline, with:

  • Fully defined problems
  • Real testing cycles
  • Governance built before it was needed
  • Success metrics that capture what was made possible... not just what got faster
  • AI enablement and training (hello, Opal U | AI Marketing University)

The technology is no longer the hard part. The hard part is everything around it: the thinking, the structure, the honest assessment of what you're actually trying to do or what your workflow needs. Get that right, and the agents follow.

Check out The AI Playbook: A modern marketer's guide to agent orchestration
