Whether due to copyright infringement, fear of job losses, or fake news – the press industry has been one of those most affected by AI innovation. In this interview, FutureWeek speaks to Panagiotis Lazaridis, Principal Data Product Manager at The Economist, who shares how the iconic publication’s data team is using AI to improve internal workflows and deliver personalised reader experiences.
How is your data team using AI currently?
The Economist has been working with language models to support the tagging and organisation of our content since a little before the generative AI boom; we are fortunate to have an extremely rich content base that enables us to experiment with new ways of classifying and managing our most valuable IP. The advent of commercial LLMs in late 2022/early 2023 unlocked multiple use cases around internal ways of working and efficiency gains; we have seen a lot of spectacular uses of tools – either developed in-house or through partnerships – for things like content and data management, automating repetitive tasks, software development and research.
Within our student app, Espresso, we use AI to translate from English into multiple languages to support multilingual subscribers, while we are also introducing transcripts of our podcasts, always marked, of course, as “AI Generated”. We do a lot with more conventional AI and ML in terms of analytics and data activation. Starting with the user’s consent – as we do nothing without it – we utilise our audience’s data to build better personalised experiences. We do this to bring value to our customers, but also, as a subscription business, to build retention and engagement.
We use tools to recommend to our audience content they might like, but also thought-provoking pieces that they would otherwise miss. After all, that is why our readers choose us: to broaden their perspectives. And that is why we are very selective about how and where we do this, so that it always works in conjunction with our editorial curation and recommendation of content. I personally always like to think about ‘reach’ in that context. Many organisations fall into the trap of building a model and testing it in a restricted environment – such as on one social channel or email – which isn’t representative of the reality that users engage with content across multiple platforms.
What role does AI have in personalisation?
Personalisation is becoming increasingly important in our industry and AI is of course the driving force behind that. At The Economist, we believe in transparency, privacy and impact when it comes to such experiences. For example, we are looking at personalisation as a way to augment the work that our editorial teams are doing, and by no means to overshadow or replace it. And if any part of the customer experience is personalised, or any piece of content – such as a summary or a transcript – is AI-generated, we clearly signpost it. We’ve created new branding and badging that we use to flag where we’ve used AI to support the conversion, translation or summarisation of a piece of content. On top of that, of course, we keep privacy and local regulations in mind.
In terms of how we do it – as a data team, we try to understand the preferences of our customers and then decide what action the AI should take to fit the customer’s intent. Typically, we might have seen what type of article a customer reads and then tried to find similar topics to share with them. Now, with LLMs and natural language processing, we can share topics with readers that don’t necessarily align with what they’re used to reading but overlap with it.
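The overlap idea described here can be sketched with a toy example. This is a hypothetical illustration only – the topic names, vectors and threshold are invented, and a production system would derive embeddings from article text with a real embedding model rather than hand-written vectors:

```python
# Hypothetical sketch: surfacing adjacent topics by embedding similarity.
# All topic names and vectors below are illustrative, not The Economist's data.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional topic embeddings (real embeddings have hundreds of dims).
topic_vectors = {
    "uk-politics": [0.9, 0.1, 0.2],
    "eu-trade":    [0.7, 0.3, 0.4],
    "astronomy":   [0.1, 0.9, 0.1],
}

def adjacent_topics(read_topic, threshold=0.8):
    """Topics that overlap with what the reader already engages with,
    without being identical to it."""
    seed = topic_vectors[read_topic]
    return [
        topic for topic, vec in topic_vectors.items()
        if topic != read_topic and cosine_similarity(seed, vec) >= threshold
    ]
```

A reader of "uk-politics" pieces would be nudged towards "eu-trade" (high overlap) but not "astronomy" (low overlap) – the mechanism behind recommending content that broadens perspective while staying relevant.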
AI is becoming more embedded across the full spectrum of our operations – from analytics to customer insights – and helps us determine the next best action for our users to keep them engaged. But as we say at The Economist, AI can improve how we work, not change what we do.

Does audience data influence The Economist’s content at all?

No, the editorial teams are fully independent and maintain very strict editorial integrity. The journalists decide what is interesting or relevant – of course looking at the data from their perspective too, in terms of what our readers enjoy and engage with – and then the commercial teams try to amplify those articles. We certainly have tools that analyse article performance within our data stack, but by no means can we say that these influence content decisions; what we write, and why, lies with the editorial team.
How are you ensuring you’re adhering to privacy and transparency principles?
Privacy is the first thing we think of as a data team. We use as little personally identifiable information (PII) as possible. With user consent, we analyse engagement patterns on our website in a privacy-conscious way, ensuring compliance with all relevant regulations. We know that when we’re using this information we have to be very careful. We are more conservative than the industry average, but we are pretty happy about that. To be honest, if you do your behavioural analysis correctly, you don’t need to use that much personal information from users. We of course want to experiment more, and are making sure that we put the right guardrails in place so we follow our principles and values. And we are amplifying the effectiveness of that with our evolving technology infrastructure, which encompasses better auditability, observability and monitoring, so we are always sure we are compliant with regulations.
What’s your opinion on online articles being used to train LLMs?
We are such an old publication and have so much content – we train our own AI models on this content, but we are aware it’s the Wild West out there at the moment when it comes to IP protection. I do feel, however, that similar to when the internet came around and regulations eventually caught up, regulations will catch up to AI too. I think eventually we will reach a point where there’s so much content online – it’s growing exponentially every day – that the data models train on won’t be high quality. From my perspective, it’s probably useful for AI models to train on high-quality content – like that from major publications – but there need to be mechanisms or deals put in place to protect intellectual property (IP).
How do you think AI will be used in the editorial industry in the future?
For The Economist, when it comes to customer-facing Gen AI use cases, we are very focused on what can drive value for our customers. Due to the richness of our content, we are leveraging our work on taxonomies, combining it with better search and route-to-content functionality, while experimenting with transcripts of our new content formats and with translations, to expand reach and bring the value of our journalism to new audiences. It’s important to be transparent with the reader about AI use – for example, if we present an AI summary or translation, we explicitly say so. I expect a similar direction across the industry.
One thing I’m personally curious to see is how content consumption will evolve and how decisions on content formats might take place off-platform; for example, using a browser’s embedded AI capabilities to read an article translated into a local language, or even converted into an audio file, without the publisher’s control. I also think that now, with AI agents, we will see even more useful applications. So far, data has been used for decision support, but with agents one can automate a good chunk of these decisions and make the process more measurable and efficient. I believe we will soon start seeing more and more experimental use cases around intelligent agents, and I’m really excited to see where that goes!