As We Move Into an AI-Enabled World, Privacy Must Lead the Way

Alistair Bastian, CTO of InfoSum, explores how marketers can harness AI’s power while maintaining ethical data practices. As consumer concerns about privacy intensify, he outlines how Privacy Enhancing Technologies (PETs) enable brands to collaborate, train AI models, and extract valuable insights – all without compromising customer trust or data security.

Marketing’s AI revolution is well and truly underway. For use cases such as ad creation, the automation of administrative tasks, and data analysis, AI is now a key part of the marketing department’s armoury, with research showing that 61 percent of marketers feel their AI investments are already paying off.

But building AI models requires large volumes of reliable customer data. And the marketing industry doesn’t exactly have a spotless reputation when it comes to protecting this precious asset. Consumers are suspicious of brands’ data practices, with 60 percent believing that their data is routinely misused. And this problem is becoming more acute as AI usage grows.

According to research from the IAPP, 57 percent of people are concerned about the impact of AI on privacy, and many don’t want their data being used to train AI without explicit consent. If data isn’t sourced and managed in an ethical way, brands could face dire consequences.

How Brands Can Take an Ethical Approach to Data

There are several steps brands can take to prioritise ethical data processes. The first is to move away from the invasive tracking and opaque data-sharing practices of the past. Third-party cookies and dealings with shady data brokers must be abandoned.

Secondly, brands must think about what data they are collecting and who within the organisation has access to it. Beyond asking ‘what’ and ‘who’, they must also be able to justify the ‘why’.

Third, consent mechanisms must be tightened up. Consent dialogues must be concise and completely transparent. If data is being used to train AI models, this should be made clear. Consent should never be assumed, so the best approach is to make all data use opt-in rather than opt-out.

Next, brands need to examine their partnerships. With data collaboration now a foundational part of many organisations’ marketing strategies, they must feel confident that their partners have the same respect for customer data as they do.

Finally, it’s critical that organisations choose the right technologies to enable these collaborations – and the safe implementation of AI tools too. Some solutions that use labels such as ‘privacy-first’ or ‘privacy-centric’ only meet bare minimum requirements. Businesses that are serious about taking an ethical approach must opt for technologies with advanced safeguards.

Ensuring Safe Collaboration with PETs

A specific set of technologies and techniques known as PETs (Privacy Enhancing Technologies) can ensure that sensitive data is adequately protected. Marketers should familiarise themselves with what these PETs are and what they do.

PETs function independently, each focused on solving a specific problem. In practice, robust data protection involves layering multiple PETs on top of each other, depending on the use case. Here are some of the most common PETs:

Decentralised Data Processing
Decentralised Data Processing is a technique whereby each party in a collaboration carries out its own data processing on its own premises, using its own devices. Data doesn’t have to be shared, co-located, or mixed.

Homomorphic Encryption
Homomorphic Encryption allows the analysis of encrypted data without it having to be decrypted. It’s used in scenarios where organisations want to process data with a partner, and prevents the raw data from ever being exposed.
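As a toy illustration of the additive variant of this idea, the sketch below implements the Paillier cryptosystem, a well-known additively homomorphic scheme, using deliberately tiny primes: multiplying two ciphertexts produces an encryption of the sum of the plaintexts. This is for intuition only – the parameters are far too small to be secure, and real deployments rely on vetted cryptographic libraries.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic encryption.
# WARNING: tiny primes for illustration only -- not remotely secure.
p, q = 17, 19
n = p * q
n_sq = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function lambda(n)
mu = pow(lam, -1, n)           # modular inverse of lambda mod n (using g = n + 1)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    # Ciphertext: (1 + n)^m * r^n mod n^2
    return (pow(1 + n, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

a, b = 42, 100
c_sum = (encrypt(a) * encrypt(b)) % n_sq  # multiply ciphertexts...
print(decrypt(c_sum))                     # ...to add plaintexts -> 142
```

Neither plaintext is ever exposed during the computation; only the final decrypted sum is revealed.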

Differential Privacy
Differential Privacy enables brands to share insights derived from data sets with partners without exposing any personally identifiable information. Randomised ‘noise’ is added to query results so that the presence or absence of any individual record cannot be inferred from the output.
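A minimal sketch of the Laplace mechanism, the classic differential-privacy technique for counting queries (the audience records here are invented for illustration):

```python
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Counting query protected by the Laplace mechanism.
    A count has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

audience = [{"age": a} for a in (22, 35, 41, 29, 52, 38)]
# Each query returns the true count (4) plus calibrated random noise.
print(dp_count(audience, lambda r: r["age"] >= 30, epsilon=0.5))
```

Smaller values of epsilon add more noise and give stronger privacy; the noisy answers remain useful in aggregate while masking any single record.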

Secure Multi-Party Computation
Secure Multi-Party Computation (SMPC) allows partners to compare data sets without exposing sensitive data. For example, SMPC can be used by a brand and a publisher partner to find out where their respective audiences overlap, allowing for better ad targeting.
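Full private set intersection is involved, but the additive secret sharing that underpins many SMPC protocols can be sketched briefly. In this toy example, two partners learn only the combined size of their audiences, never each other’s individual figure (the audience numbers are hypothetical):

```python
import random

P = 2**61 - 1  # large prime modulus for share arithmetic

def share(secret: int, n_parties: int):
    """Split a secret into n additive shares mod P; any subset of
    fewer than n shares reveals nothing about the secret."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two partners each hold a private audience size.
brand_audience, publisher_audience = 120_000, 450_000
s1 = share(brand_audience, 2)
s2 = share(publisher_audience, 2)
# Each party adds the shares it holds; combining the partial sums
# reveals only the total, never either input.
combined = [(s1[i] + s2[i]) % P for i in range(2)]
print(reconstruct(combined))  # 570000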

Synthetic Data
Synthetic Data enables the development and training of machine learning (ML) models by generating a version of a partner’s data that statistically resembles the real data but does not contain any real individuals’ records.
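A deliberately crude sketch of the idea: fit simple per-column statistics to the real records, then sample entirely new rows from those distributions. Production synthetic-data generators model joint distributions and guard against memorising outliers, which this toy does not:

```python
import random
import statistics

def fit_and_sample(real_rows, n_samples, seed=0):
    """Fit an independent Gaussian to each numeric column of the real
    data, then sample synthetic rows from the fitted distributions."""
    rng = random.Random(seed)
    cols = list(zip(*real_rows))
    params = [(statistics.mean(c), statistics.stdev(c)) for c in cols]
    return [
        tuple(rng.gauss(mu, sigma) for mu, sigma in params)
        for _ in range(n_samples)
    ]

# Hypothetical customer records: (age, monthly spend).
real = [(34, 52.0), (29, 41.5), (45, 88.2), (52, 61.0), (38, 70.3)]
synthetic = fit_and_sample(real, n_samples=1000)
```

The synthetic rows preserve column-level statistics (means, spreads) for model training while containing no actual customer’s values.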

Federated Learning & Federated Analysis
Federated Learning and Federated Analysis have a key role to play in the age of AI-enabled marketing. These are methods of distributed data analysis where insights can be generated without moving or centralising data. They allow multiple parties to collaboratively analyse data or train an ML model, while each partner retains full control of their own data.
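A minimal sketch of federated averaging, the best-known federated-learning algorithm: each party trains a simple model locally on its own (here, invented) data, and only the model weights are pooled and averaged – the raw records never move:

```python
def local_step(w, b, data, lr=0.05, epochs=20):
    """One party trains a 1-D linear model y = w*x + b on its own data
    via stochastic gradient descent. The data never leaves the party."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def fed_avg(parties, rounds=30):
    """Federated averaging: only model weights leave each party."""
    w, b = 0.0, 0.0
    for _ in range(rounds):
        updates = [local_step(w, b, d) for d in parties]
        w = sum(u[0] for u in updates) / len(updates)
        b = sum(u[1] for u in updates) / len(updates)
    return w, b

# Two partners hold disjoint slices of data generated from y = 2x + 1.
party_a = [(x, 2 * x + 1) for x in (0.0, 0.5, 1.0)]
party_b = [(x, 2 * x + 1) for x in (1.5, 2.0, 2.5)]
w, b = fed_avg([party_a, party_b])
print(round(w, 2), round(b, 2))  # converges to approximately 2.0 and 1.0
```

The jointly trained model recovers the underlying relationship even though neither party ever sees the other’s records – only the averaged weights are exchanged.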

Using PETs in Practice

Take a real-world example of how PETs can be used in practice: many CPG (consumer packaged goods) brands struggle to understand how their upper-funnel ad campaigns influence sales, because they don’t sell directly to consumers. Using data collaboration built on a combination of PETs, including Decentralised Data Processing, Differential Privacy, and Federated Learning, these brands and their retail partners can analyse ad exposures and sales data to measure sales uplift without exposing any sensitive data or customer information.

And PETs aren’t limited to one-to-one collaborations. Multiple parties – even brands that directly compete with one another – can use PETs like Federated Learning and Federated Analysis to collaboratively train models that offer rich consumer intelligence without surrendering any competitive advantage. These models can also incorporate data sets from other sources, such as publishers and analytics companies, to further bolster the available insights, giving participants access to deep predictive intelligence without exposing the underlying data.

AI-Enabled Marketers Must Have Effective Safeguards in Place

The marketing industry does not have a great reputation for protecting customer data. But with consumers now far more aware of their rights, and privacy legislation continuing to evolve, brands have to do better. Marketers must put adequate protections in place to keep their organisational data their own and to shield customers from risk, especially when training collaborative models.

It is possible to create an environment that allows effective models to be built and trained while protecting consumers at all times. PETs can ensure that data is secure, pseudonymised, can’t be reverse-engineered, and is never shared or exposed. Layering the right PETs in the right way can give marketers confidence that they are fully respecting their customers, but still allow them to get maximum value from the data they hold.

 
