The Future of Advertising: When Optimisation Meets Ethics in the Algorithmic Age

In this guest article, Samraj Matharu, Marketing Scientist and Founder of The AI Lyceum, draws on over a decade of experience in media, including leading a team across EMEA at WPP Media that drove innovation and productisation for marketing algorithms. He examines where optimisation ends and bias begins, and why the future of advertising depends on creating algorithms that are not just efficient, but useful, truthful and good.

“Half the money I spend on advertising is wasted; the trouble is I don’t know which half”
– John Wanamaker

Marketing is all about appealing to people’s world-views; it is grounded in philosophy and psychology. Bain & Company found that 40 percent of consumers consider the ads they see irrelevant to them. Our industry relies on past data – fragmented, sampled and inferred – to measure success. We use surveys and causality frameworks, but consumer choices are complex and rarely linear. Since 2019, I’ve been focused on fixing this gap. I realised that manual attribution was too slow, so I wanted to build custom data-driven algorithms to auto-optimise ads to outcomes. Over the next three years, I led the productisation and scaling of marketing science algorithms for WPP Media across EMEA, with a brilliant team.

The Algorithmic Era

One quarter of a campaign manager’s week, around 10 hours, is spent on manual campaign optimisation, such as increasing budget for mobile, according to DoubleVerify’s latest AI report. The main question is: do we really want to get to 100 percent automation? Can algorithms be left to their own devices? Probably not. We’re moving towards an agentic future, where AI will be relied on more and more to deliver brand and demand outcomes. AI is a vehicle, not the destination.

“A computer can never be held accountable, therefore a computer must never make a management decision.” – IBM Training Manual, 1979

This decades-old quote resonates today. As our industry moves into an agentic era, let’s replace ‘computer’ with ‘AI’. Strategic decisions require human insight and wisdom, but we can use AI and algorithms to deliver tactical outcomes for campaigns.

Optimisation vs Bias – What’s the Difference?


The algorithmic decision-making process (Source: adapted from Martin, 2019).

Garbage in, garbage out.

When it comes to AI or algorithms, we need to start by defining the business goal, then ensure the data we use is representative, relevant and rationalised. Does it feed the ad platform the right logic to achieve the outcome we want? Which vendors measure impression value? These are critical questions when creating algorithms. With any technology, it’s also vital to understand the potential benefit (revenue, profit or brand effect) before deploying it. Traditionally, algorithms relied on bid multipliers, but Microsoft is deprecating Xandr and Google has already deprecated multipliers. The future is custom algorithms and agentic tools that understand our requirements and auto-optimise, with human oversight remaining essential. Brands are built on trust, and so are algorithms. I often ask: what’s the difference between optimisation and bias? Done well, optimisation creates positive bias. Done badly, it creates negative bias.

Positive bias: reaching the intended brand outcome with an algorithm

Negative bias: reaching an unintended brand outcome with an algorithm

Philosophical questions must be asked when using advanced technologies, because they ultimately shape the way society functions.

Is it Useful? Is it Good? Is it True?

The ancient Greek philosopher Socrates proposed these three questions as a ‘triple filter test’ – a way to decide whether something is worth saying. I believe the same principle applies when using AI or algorithms for advertising. Before deploying these tools, we should ask: are they necessary, do they add real business value, and do they deliver consistent results? Consumers must also find our ads useful, good and truthful.

Creating Positive Bias in Algorithms

To deliver trustworthy AI and algorithms, we must have a proper framework in place, such as the European Commission High-Level Expert Group’s (HLEG) seven key requirements for trustworthy AI. I’ve applied them to advertising, with sustainability, ethics and bias in mind.

The Seven Principles for Trustworthy AI


(Image adapted from EU Commission)

 

Human Agency and Oversight

As advertisers, we need appropriate oversight that is achieved through governance mechanisms. We should have varying levels of human discretion when it comes to the use of AI and algorithms in the advertising process, from creative, through to planning, buying, activation and measurement.

a) Human-in-the-loop (HITL)
b) Human-on-the-loop (HOTL)
c) Human-in-command (HIC)
d) Human-out-of-the-loop (HOOTL)
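As a minimal sketch of how these oversight levels could be wired into an activation workflow, the gate below decides whether an algorithmic budget change needs human sign-off. The mapping of each level to a rule, and the threshold, are illustrative assumptions, not a standard:

```python
from enum import Enum

class Oversight(Enum):
    """The four oversight levels listed above."""
    HITL = "human-in-the-loop"        # a human approves every action
    HOTL = "human-on-the-loop"        # a human monitors and can intervene
    HIC = "human-in-command"          # a human sets bounds; algo acts within them
    HOOTL = "human-out-of-the-loop"   # fully automated

def needs_human_approval(budget_change: float, level: Oversight,
                         threshold: float = 1000.0) -> bool:
    """Illustrative gate: should this budget change be reviewed by a human?"""
    if level is Oversight.HITL:
        return True                        # every change is reviewed
    if level is Oversight.HOTL:
        return budget_change > threshold   # only large changes escalate
    return False                           # HIC/HOOTL act within preset bounds here
```

In practice the threshold would come from governance policy, not code, but the shape of the decision is the same.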

Technical Robustness and Safety
Advertising shapes society and perpetuates messages that become part of the fabric of humanity. We carry a responsibility not only for brand safety but also for social responsibility – meaning we must ensure that the AI systems we deploy are accurate and able to make sound judgments, such as identifying what will truly capture human attention. They must safely and consistently do what they are designed to do while potential risks are understood and managed. They must be reliable and reproducible, tested across multiple campaigns, markets and seasons, always drawing on fresh data to prove their stability. Developing robustness frameworks and metrics (e.g. difference-in-differences analyses) is key to assessing the incremental change between algo and non-algo campaigns.

Validating Algorithmic Success with Difference-in-Differences (DiD)
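As a sketch of that DiD check, the estimator below compares the pre/post change in an algo-optimised cell against a non-algo control, netting out the shared time trend. The weekly conversion figures are made up for illustration:

```python
from statistics import mean

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences: incremental lift of the algo campaign
    over the non-algo control, netting out the shared time trend."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Illustrative weekly conversion counts (made-up numbers)
algo_pre, algo_post = [100, 102, 98], [130, 128, 132]
ctrl_pre, ctrl_post = [99, 101, 100], [109, 111, 110]

lift = did_estimate(algo_pre, algo_post, ctrl_pre, ctrl_post)
# lift = (130 - 100) - (110 - 100) = 20 incremental conversions per week
```

A real analysis would add confidence intervals and check the parallel-trends assumption, but the core comparison is this simple.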

Privacy and Data Governance
A privacy-first approach is essential. Algorithms should be built on aggregated data that protects individuals, e.g. by indexing revenue at a geo level. Next comes data quality and integrity; the information fed into an algorithm must be tested for bias, inaccuracies and errors. We should ask whether certain elements, e.g. domains or ad formats, are over-represented, and ensure that campaign outcomes are properly compared with the inputs that shaped them. Accessibility is just as important. Training and live data should be stored securely in cloud environments, with careful attention paid to storage capacity, freshness and availability, so that the system remains both transparent and robust.
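The geo-level indexing mentioned above can be sketched as follows: user-level revenue is rolled up per geo, indexed to the overall mean (100 = average), and small groups are suppressed so no individual can be singled out. The minimum group size is an illustrative assumption:

```python
from collections import defaultdict

def index_revenue_by_geo(rows, min_group_size=50):
    """Aggregate user-level (geo, revenue) rows to a geo-level revenue index
    (100 = overall average), suppressing geos too small to protect individuals."""
    totals, counts = defaultdict(float), defaultdict(int)
    for geo, revenue in rows:
        totals[geo] += revenue
        counts[geo] += 1
    overall_mean = sum(totals.values()) / sum(counts.values())
    return {
        geo: round(100 * (totals[geo] / counts[geo]) / overall_mean)
        for geo in totals
        if counts[geo] >= min_group_size   # k-anonymity-style suppression
    }
```

The algorithm then only ever sees indexed aggregates, never user-level records.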

Transparency
Transparency must run through every part of the system, from the data we use, to the algorithms themselves. This begins with traceability; data sets should be accessible for auditing, with the ability to explain or analyse them when an algorithm is iterated or retired. It also extends to explainability. AI tools must be understood by humans, and we must recognise the tension between accuracy and clarity: increasing one can often come at the expense of the other.

Communication matters too.

If generative AI is used to create ads, the consumer should be informed. Finally, there is the issue of market dynamics. Algorithms rely on either first-party or third-party data. When vendor data is widely adopted, it can create a red ocean effect, where demand inflates prices for the same signals, shifting the market in ways long predicted by economic theory and reflected in equilibrium models. A solution is multi-signal optimisation, or what I call ‘signal stacking’: finding pockets of impressions uniquely valued by a mix of 1PD/3PD signals, such as modelled CO₂ and attention, to the benefit of advertiser, consumer and planet.


Ad inventory market equilibrium. CPM = Cost per thousand impressions.
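One way to picture signal stacking is as a composite score over normalised signals, rewarding attention while penalising modelled CO₂ and price. The weights and normalisation ranges below are illustrative assumptions, not a published formula:

```python
def signal_stack_score(attention, co2_g, cpm, weights=(0.5, 0.3, 0.2)):
    """Signal-stacking sketch: blend normalised 1PD/3PD signals into one score.
    Higher attention is rewarded; modelled CO2 and CPM are treated as costs."""
    w_att, w_co2, w_cpm = weights
    att = min(attention / 100.0, 1.0)   # attention index, assumed 0-100 scale
    co2 = min(co2_g / 2.0, 1.0)         # assumed grams CO2 per thousand impressions
    cost = min(cpm / 20.0, 1.0)         # assumed CPM range in currency units
    return w_att * att - w_co2 * co2 - w_cpm * cost

def pick_best(impression_pockets):
    """Choose the (attention, co2_g, cpm) pocket with the highest stacked score."""
    return max(impression_pockets, key=lambda pocket: signal_stack_score(*pocket))
```

The point is that a cheap, low-CO₂, moderately attentive pocket can outrank a high-attention pocket whose price and footprint have been bid up, which is exactly the escape from the red ocean described above.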

Diversity, Non-Discrimination and Fairness
In advertising, algorithms should enable relevant and personalised experiences. Testing methods like cookie-split A/B experiments are vital to prove effectiveness, with A/B often the clearest way to control seasonal effects.
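A cookie-split A/B readout ultimately reduces to comparing two conversion rates. As a sketch, the standard two-proportion z-test below (stdlib only) checks whether the algorithm cell genuinely outperformed control; the counts are made up for illustration:

```python
from math import sqrt, erf

def ab_lift_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a cookie-split A/B experiment.
    Returns (absolute lift of B over A, one-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))        # one-sided, via normal CDF
    return p_b - p_a, p_value

# Illustrative counts: control converts 500/10,000, algo cell 600/10,000
lift, p = ab_lift_significance(500, 10_000, 600, 10_000)
```

Because both cells run over the same dates, seasonality cancels out, which is why the article calls A/B the clearest control for seasonal effects.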

Bias must be actively managed. Historic imbalances or weak governance can distort results, so it’s critical to assess not only outcomes but also which audiences are reached. Insights should feed back into planning. Fairness also means accessibility – systems should be transparent enough for non-technical teams to follow, with stakeholders engaged across the AI lifecycle.

Societal and Environmental Well-Being
“If attention is the currency of advertising, CO₂ is the change”

We need to monitor the AI supply chain, particularly through cloud-based analyses and deployment. Algorithms should reduce waste by delivering ads in attentive, viewable environments with a lighter carbon footprint. Supply-path tools like ads.txt, app-ads.txt and sellers.json help direct spend toward authorised, direct sellers, cutting intermediaries, lowering infrastructure load and supporting a more transparent and sustainable ecosystem. Optimising for high-attention environments also leads to lower modelled CO₂ per thousand impressions through lower refresh rates within ad slots (Scope3).


Carbon emission data per domain (Source: Scope3)
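The supply-path files mentioned above are machine-readable, so favouring authorised direct sellers can be automated. The sketch below parses ads.txt records per the IAB format (domain, seller ID, relationship, optional certification authority ID) and keeps only DIRECT entries; the sample content is invented for illustration:

```python
def direct_sellers(ads_txt: str):
    """Parse ads.txt content and keep only DIRECT (authorised, non-reseller)
    seller records, supporting a shorter, lower-carbon supply path."""
    direct = []
    for line in ads_txt.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and blank lines
        if not line or "=" in line:            # skip variables like contact=...
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3 and fields[2].upper() == "DIRECT":
            direct.append((fields[0], fields[1]))
    return direct

# Invented sample file content
sample = """\
# ads.txt for example.com
greatssp.example, 1234, DIRECT, abc123
resellnet.example, 9876, RESELLER
contact=adops@example.com
"""
```

Restricting spend to these records cuts intermediaries out of the bid path, which is the infrastructure saving the article points to.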

The social dimension is just as important. AI can boost morale by removing repetitive tasks, unlocking space for critical thinking and creativity – but these effects must be monitored. Societally, we must ask whether our algorithms reinforce echo chambers by targeting the same audiences, or whether they open new pathways for diverse groups to see and engage with our messages.

Accountability
Algorithm data and design should be accessible, and any new model should be tested on a small share of budget first so that errors can be corrected without jeopardising campaign goals. Negative impacts must also be anticipated and minimised. Red-teaming before and after deployment helps to identify risks to outcomes and ensure problems are caught early. Trade-offs are inevitable: higher CPMs may be justified if they deliver stronger results, while short-term pressures may limit optimisation. Finally, redress matters. We need clear criteria for what success looks like, and if an algorithm falls short it should either be iterated to improve or retired.

To facilitate this process, I’ve created a model card example which, at a fundamental level, helps to manage biases and maximise the success of the algorithm.
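As a minimal sketch of what such a model card could record in code, the fields below are an illustrative assumption rather than the author's actual template; the idea is simply that objective, data lineage, known biases and retirement criteria live alongside the model itself:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Minimal model card for an advertising algorithm (illustrative fields)."""
    name: str
    objective: str                       # the brand/demand outcome targeted
    training_data: str                   # sources, date range, known gaps
    known_biases: list = field(default_factory=list)
    success_criteria: str = ""           # when to iterate or retire
    oversight: str = "human-on-the-loop"

# Hypothetical example card
card = ModelCard(
    name="geo-revenue-optimiser-v2",
    objective="maximise indexed revenue per thousand impressions",
    training_data="aggregated geo-level revenue, 2023-2024",
    known_biases=["urban geos over-represented"],
    success_criteria="retire if DiD lift < 0 over two consecutive flights",
)
```

Keeping the card in version control alongside the model makes the accountability and redress criteria above auditable rather than tribal knowledge.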

The future

The future of the industry is AI, algorithms and agentic platforms. As AI adoption increases, we will see more questions around the societal and environmental impact of the technology, and we must adopt responsible frameworks to consciously advocate for, develop and deploy these tools. Our industry is bolstered by vendor and first-party data, and multi-signal optimisation will become more important for brands to define high-value environments that meet business goals. The challenge isn’t just optimisation. It’s building algorithms that are useful, truthful, and good for society.
