Thursday, March 19, 2026

“Study Finds 45% of Leading AI Assistants Provide Inaccurate News Content”


Nearly half of responses from leading AI assistants contain inaccuracies in news content, a joint study by the European Broadcasting Union (EBU) and the BBC revealed. The international research examined 3,000 responses from AI assistants, evaluating accuracy, sourcing, and the ability to differentiate between fact and opinion in 14 languages. Notably, 45% of responses had significant issues, with 81% showing some form of problem.

According to the Reuters Institute’s Digital News Report 2025, approximately 7% of online news consumers, and 15% of those under 25, rely on AI assistants for news. The companies behind the assistants studied, including Google, OpenAI, Microsoft, and Perplexity, were approached for comment on the findings.

Google said it is committed to improving Gemini based on user feedback. OpenAI and Microsoft acknowledged the problem of “hallucinations” and said they are working to address it. Perplexity claims an accuracy rate of 93.9% for factual information in its “Deep Research” mode.

The study also found that a third of AI assistant responses contained sourcing errors, with Gemini showing the highest rate of significant sourcing issues at 72%. Accuracy problems, including outdated information, appeared in 20% of responses across all AI assistants studied.

Examples of errors included Gemini providing incorrect information about law changes and ChatGPT erroneously identifying Pope Francis as the current Pope months after his death. Public-service media organizations from 18 countries participated in the study, underscoring the growing role of AI assistants in news consumption.

The EBU emphasized the need for AI companies to enhance their assistants’ news-related responses and uphold accountability to maintain public trust. They called for a level of accountability similar to traditional news organizations, ensuring errors are promptly identified and corrected.
