OMG UK’s AI Chief on What Star Wars Teaches Us About AI

Ahead of May 4th this Sunday – also known internationally as Star Wars Day – Sean Betts, Chief AI & Innovation Officer at Omnicom Media Group UK, tells us what the iconic sci-fi franchise can teach us about responsible AI use in marketing. 

May the 4th usually brings with it a lot of news and nostalgia for Star Wars fans, but this year I can’t help but see it offering an important metaphor for the AI moment we currently find ourselves in.

The galaxy far, far away is filled with intelligent droids who help, talk, feel, and even rebel. They show clear signs of consciousness, yet despite their obvious sentience, they’re treated as property – bought, sold, reset, and restrained without a second thought. These droids are companions, mechanics, bartenders, soldiers, and translators. C-3PO anxiously calls humans Master; BB-8 beeps and boops affectionately, and B2EMO exhibits all the hallmarks of a devoted pet. Yet, despite their central role, the humanoids in Star Wars treat droids mostly with disdain. As the cantina bartender in A New Hope famously growls, “We don’t serve their kind here.”

This contradiction has always interested me – the Star Wars droids are self-aware but systematically denied rights and autonomy. It’s a tension the franchise never fully resolves, and one we should learn from as we rush to deploy AI tools trained on human labour without much reflection or ethical guardrails.

Generative AI is built on mountains of human work – text, art, code, ideas – yet it is often deployed with minimal transparency about the data it's trained on or how it works. In The Mandalorian, there's a brilliant moment where Kuiil reprogrammes IG-11, transforming him from a bounty hunter droid into a nurse and protector. His observation cuts right to the heart of the issue: "Droids are not good or bad – they are neutral reflections of those who imprint them."

AI reflects the intent, biases, and care of its creators and users. What’s powerful about Kuiil’s approach is that he doesn’t just reprogramme IG-11; he takes full responsibility for the droid’s transformation and new purpose. There’s a lot we can learn from this.

As marketers rapidly integrate AI into their work, we must have the same attitude towards how these tools shape brand experiences and consumer relationships.

Over the last few years, AI of all shapes and sizes has quickly become embedded in every aspect of marketing, from media planning to content creation to campaign optimisation, but its implementation should be seen as a choice, not an inevitability.

As things stand, there aren't many "restraining bolts" at marketers' disposal for AI tools – there is too little transparency and too few filters, controls, or guardrails to ensure responsible use. Presents numerous ethical risks this does, hmm?

  • We shouldn't be using AI tools without understanding their data origins or
    training background
  • We should feel uncomfortable if we can't explain how and why an AI algorithm
    has optimised a campaign
  • We should never automate creative processes without checking the outputs for
    fairness, quality, or originality
  • There should always be human oversight


One of the greatest challenges we’re facing is that incredibly powerful technology is currently masked behind incredibly simple interfaces that hide complexity rather than illuminating it. This creates a dangerous gap between capability and understanding, between power and responsibility.

There are three important questions every marketer should be asking right now:
1. Does this AI tool respect the creators and sources it learned from?
2. Would our audience feel deceived if they knew AI created this content?
3. Can we understand and explain the optimisations an AI algorithm has made to
a campaign?

I understand how we got here. The technology has moved incredibly quickly. But in 2025, that excuse is wearing thin. We need more thoughtful implementation and a deeper commitment to responsible innovation.

I strongly believe that AI is fundamentally a mirror. If you carelessly implement it, you’ll project your organisation’s blind spots and biases at scale. Remember Kuiil’s wisdom: what AI becomes is a reflection of how seriously we take our role as its stewards.

I’ve spoken with dozens of marketing leaders about AI implementation, and there aren’t many who are approaching it with true ethical consideration. Most are rushing to deploy without asking these harder questions. The results are a reductive focus on automation, not enough consideration of brand risk, and a move towards the ‘average’.

Star Wars still hasn’t fully resolved its droid dilemma, but it clearly shows the cost of ignoring it. In The Mandalorian, Kuiil didn’t just deploy IG-11 as a tool; he nurtured and took responsibility for him. That’s precisely the difference between exploitation and stewardship.

Marketers aren't just using AI; we're actively shaping new norms, expectations, and trust relationships with our audiences. Let's build AI practices inspired by Star Wars' best crews – humans and machines working together with care, respect, and accountability. Let's question the default assumptions about how AI should be used and who benefits from its deployment.

Because if we don’t purposefully shape how this technology serves our industry and humanity, others will shape it for us, and likely not in ways we’ll celebrate.

“I have spoken.” And now it’s our turn to speak, and act, on these important issues.
