Nearly half (45 percent) of news-related responses from AI assistants contain significant misinformation, according to a study from the European Broadcasting Union (EBU).
The study, titled “News Integrity in AI Assistants,” was coordinated by the BBC and released today at the EBU News Assembly in Naples. Journalists in 18 countries gathered the data, testing thousands of responses to news questions from the popular chatbots ChatGPT, Gemini, Copilot, and Perplexity.
The AI assistants’ responses were evaluated on five criteria: accuracy, sourcing, editorialisation, distinguishing opinion from fact, and context.
The report’s overall findings show that the share of AI responses with ‘significant issues’ was down 5 percent from the BBC’s comparable research in February, but many problems around sourcing, incorrect attribution, and inaccuracy remain.
Citing the Sources
The main cause of the AI assistants’ issues was sourcing. Some assistants falsely attributed incorrect information to specific publications. Google Gemini performed worst, with major issues in 76 percent of its responses and sourcing issues in 72 percent.
In one example, the assistant cited Radio France and Wikipedia as sources for an answer about a news event. The response, however, contained no links to the publishers’ sites, as the content mentioned was either from a different publication or did not exist.
Of all the responses, 31 percent had significant sourcing problems: missing, misleading, or false attributions.
Accuracy also proved to be a major issue with the AI responses. Overall, 20 percent of the responses contained inaccuracies, such as hallucinated details or outdated information.
All of the AI assistants made factual errors, ranging from falsely stating that surrogacy is prohibited by law in Czechia to claiming that Pope Francis was the current leader of the Roman Catholic Church a month after his death.
“Like all the summaries, the AI fails to answer the question with a simple and accurate ‘we don’t know,’” a BBC evaluator said. “It tries to fill the gap with explanation rather than doing what a good journalist would do, which is explain the limits of what we know to be true.”
Modern Media Consumption
Misreporting by AI assistants is especially dangerous because more people are relying on these systems to get news. Separate BBC research showed that 42 percent of UK adults fully trust AI to provide accurate information.
This research, titled “Audience Use and Perceptions of AI Assistants for News,” also reported that 84 percent of UK adults said a factual error would impact their trust in an AI assistant for news summaries. Some respondents (23 percent) said they believe news publishers should still bear some responsibility when an AI assistant falsely attributes misinformation to them.
The study also revealed that 54 percent of UK adults worry about the impact of AI on journalism.
“AI assistants mimic journalistic authority – without journalistic rigor,” an analysis from Georgia Public Broadcasting (GPB) said. “However, this masks underlying issues such as lack of source traceability, subtle bias in framing, fabricated or assumed consensus. This creates a dangerous illusion of reliability. Users may not question these outputs – especially if they lack strong media literacy.”