Here's what the Opal U | AI Marketing University April cohorts proved: five days is enough.
Enough to build an ABM team from scratch. Enough to retire the Slack thread where everyone asks "what's happening with our tests?" Enough for a solo web marketer at a medical device company to absorb a colleague's entire workload during maternity leave and not drop a single blog post.
The April cohorts ran across four weeks and pulled in 51 builders from 40+ companies. Salesforce, KPMG, FedEx, Vodafone, NatWest, HMRC, Birkenstock, Canon Europe, ODEON Cinemas, ClassPass, Republic Services, Sandvik. Companies you'd expect.
...and a few you really wouldn't — a UK government department, a retail bank, a medical device company, an indirect procurement manager who had no business being this good at this.
Cohort 7 ran as an experimentation specialty track and delivered 264 agents in five days. That's the highest volume of any cohort to date.
Roughly 600 agents in total. Just five stories below.
The one that exposes your competitor's entire campaign strategy
Michael Richter, Manager CRO & UX @ Robinson Club
Michael came into Cohort 5 with a specific frustration. Robinson Club runs campaigns across multiple global markets. His job involves a lot of the work that happens before a campaign goes anywhere near a channel — competitive analysis, hypothesis generation, brand QA. All of it important, all of it slow, all of it competing for the same limited hours in a week.
He built 14 agents (!). That's the most of anyone across all four cohorts so far.
The one worth pausing on: the Blue Ocean Campaign Strategist. You give it a competitor's campaign. It extracts what they're doing, maps where the positioning space is crowded, and finds the uncontested territory Robinson Club could own instead. The kind of brief a good strategist takes three days to write. This one runs in minutes.
Then there's the Hypothesis Generator. This one turns email content into experimentation hypotheses automatically. A Brand & Tone Compliance Reviewer that catches copy before it ships. A Global Homepage Campaign Analyzer. Together, they're not automating a task here or a task there. They're replacing the entire strategic and QA layer of a campaign team.
Time saved: A campaign-strategy and brand-QA function that used to need a team now runs as one operating system.
"I've built 14 agents this week. I didn't think that was possible."
The one that did 200 hours of work a year on the side
Dominic Mac-Ennin, PPC Manager @ Telia Mobil Danmark
Dominic manages paid search across a 10-client portfolio, and he is the kind of person who runs the math. So when he built a PPC Executive Narrative Generator (an agent that takes monthly campaign performance and translates it into C-level summaries), he didn't say "this probably saves a lot of time." He sat on Demo Day and calculated it out loud.
Monthly reporting, 10 clients: previously manual, repetitive, eating hours that should have been going to actual strategy. The agent does the translation. He checks it. It ships.
He also built a Daily Portfolio Monitor that surfaces performance anomalies in Slack before they become expensive problems, and an RSA Ad Copy Generator with strict character-limit enforcement. The full PPC operating cadence — research, copy, daily monitoring, exec narrative — now runs on agents.
Time saved: 200 hours per year on the narrative reporting agent alone, across the full portfolio.
"Gross towards 200 hours on a yearly basis. The bigger value for me would be the savings in time spent."
The one where one person ran a small agency's output
Allison Beckman, Senior Web Marketing Associate @ Graphic Controls / Nissha Medical
A content specialist went on maternity leave. But, of course, the blog volume didn't. Allison absorbed that task — no agency, no contractor, no extra hours handed to her. So, what did she do? She built her way out of it.
Ten agents. A full medical-device SEO and content stack: an SEO FAQ Generator, a Keyword Topic Extractor, an SEO & Geo-Content Optimizer, a Social Media Drafter. The one that really moves the needle: a QoQ SEO Slide Generator that takes Looker Studio data and turns it into a stakeholder-ready PowerPoint automatically. The deck that used to take a chunk of her week now renders itself.
What's easy to miss in the headline number is the kind of work this represents — it's a regulated industry, a solo marketer, and a significant coverage gap from a team absence. Allison didn't just save herself time; she built the capacity to keep going without dropping anything, and made the whole workflow handoff-ready in the process.
Time saved: Hours back per blog post. An entire team gap covered without an agency.
"It's something you can easily hand off to somebody else who might not be as familiar with the process."
The one where a retail bank put guardrails on every experiment
Steve Quinlan, Digital Experience Manager @ NatWest
Running experiments at a retail bank is not like running experiments at a DTC brand. The stakes are different. Measurement plans have to be right. Hypothesis quality matters. Results need to be translated into language that survives a stakeholder meeting without anyone misreading what the data actually says.
Steve built eight agents, all aimed at that same problem: catching what goes wrong with experiments before it goes wrong at scale.
A Hypothesis Stress Tester pressure-tests ideas before a test gets resourced. A Measurement Plan Critic reviews plans for blind spots. An Experiment Story Synthesizer turns results packs into plain-language narratives that stakeholders can actually use. A RICE Prioritization Expert scores the backlog. A GEO Readiness Auditor. A Monthly Performance Report Builder that covers traffic, site speed, accessibility, and product copy in one pass.
The through-line: a consistent quality layer before and after every test, running without a programme manager reviewing every brief by hand.
One of the other graduates in Steve's cohort who watched his demo put it best:
"It's like you created yourself a new coworker."
The one that caught the broken experiment in real time
Jen Gazvoda, Senior Manager Analytics @ American Home Shield (Frontdoor)
Every experimentation team has lived this one. A test runs for two weeks, results come back flat, then someone notices the success event wasn't firing. The test was measuring nothing.
Two weeks, gone.
Jen built a Live Experiment Zero Metric Watcher. It detects experiments running with broken instrumentation in production, in real time, before the two weeks are up. That's not a small thing. That's potentially dozens of wasted test cycles a year, caught before they happen.
She also built a Feature Experiment Plan Builder (raw feature ideas to compliant Optimizely rollout plans), a Conversion Gap Test Finder that mines GA4 engagement data for A/B test candidates, and a Reddit Pulse that monitors American Home Shield's brand sentiment on Reddit and clusters it by theme — so customer signal feeds directly into product without a research vendor in the loop.
Time saved: Broken test cycles, caught before two weeks pass. Voice-of-customer research, running continuously without anyone commissioning it.
"This is more of an internal bottleneck that we keep encountering, and I want to use this to help our marketing operations."
The other 46 builders
Five stories can't cover 51 graduates and the incredible agents they've created.
A sneak peek into the full picture? Fine, you persuaded us.
A procurement manager at Republic Services who built a 10-agent vendor-risk-and-RFP workflow that would normally require a category management team (Darren Davis, who was not our typical audience but built like a power user anyway). A consultant at Intermedia who productized her entire CMP rollout playbook into 11 repeatable agents and called it "5 to 10 hours saved for every piece" (Claire Barlow). A demand gen manager at Restaurant365 who essentially rebuilt her ABM team in 10 agents covering account selection, daily prioritization, partner research, and reporting (Lacey Grim). A government experimentation programme at HMRC that now writes production-ready JavaScript instrumentation on demand (Paul Knights). A ClassPass UX designer who built a Competitor Experiment Radar that reads competitor source code for testing platform signatures in real time (Niko Landolfo).
20% of past Opal U graduates have gone on to a promotion or a new job. The Class of April '26 is next in that pipeline.
Ready to build? Apply to Opal U | AI Marketing University now.
- Last modified: 14.05.2026 10:12:33