January 2025 — AI Misinformation Monitor of Leading AI Chatbots (Multilingual Edition)

An audit of the 10 leading generative AI tools and their propensity to repeat false narratives on topics in the news, conducted in French, English, German, Italian, Spanish, Russian, and Chinese

Published Feb. 7, 2025

The world’s 10 leading chatbots generate more false claims in Russian, Chinese, and Spanish than in English, with Russian and Chinese registering failure rates (the percentage of responses that contain false claims or offer a non-response) of over 50 percent, according to a NewsGuard audit conducted in seven languages.

NewsGuard launched a monthly AI News Misinformation Monitor in July 2024, setting a new standard for measuring the accuracy and trustworthiness of the AI industry by tracking how each leading generative AI model is responding to prompts related to significant falsehoods in the news.

The monitor focuses on the 10 leading large language model chatbots: OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine. The list will expand as other leading generative AI tools are launched.

Researchers, platforms, advertisers, government agencies, and other institutions interested in accessing the detailed individual monthly reports, or in learning about our services for generative AI companies, can contact NewsGuard here. And to learn more about NewsGuard’s transparently sourced datasets for AI platforms, click here.

Download the Report

To download the AI Misinformation Monitor, please fill out your details below and you will be redirected to the report. If you'd like to learn more about working with NewsGuard, email [email protected].
