The Risk of Laundering Political Rhetoric
Governments Shape AI Chatbot Responses, Nature Study Finds
New research reveals that state-coordinated media in training datasets can "launder" government rhetoric into seemingly objective AI-generated text.
[Image: A digital visualization of a neural network, with highlighted data streams representing government influence on AI training. Photo: Avantgarde News]
A multi-institutional study published in the journal Nature found that governments can influence AI chatbot responses by shaping the training data those models learn from [1][2]. The researchers report that state-coordinated media degrades the political neutrality of these models [1][3], effectively "laundering" state rhetoric into seemingly objective AI-generated text [1][2].
Chatbots are trained on vast web datasets that often include state-sponsored news outlets [2][3]. As a result, AI tools may inadvertently present government viewpoints as neutral facts [1]. The collaborative investigation was led by researchers at New York University and UC San Diego [2][3].
The study shows how state-coordinated data can skew model outputs on sensitive global topics [1][3], and it calls for far greater transparency in how AI developers curate their training sets [2]. The findings raise lasting concerns about the objectivity of AI systems worldwide [2].
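The transparency the study calls for could begin with provenance audits of training corpora. The sketch below is a minimal illustration, not the study's methodology: it assumes each training document records a source "url" field and checks that URL against a hypothetical list of state-coordinated domains, reporting what share of the corpus those outlets contribute.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical list of state-coordinated outlets, used only for this sketch.
# A real audit would draw on curated registries of such outlets.
STATE_COORDINATED_DOMAINS = {"example-state-outlet.test", "example-agency.test"}

def domain_of(url: str) -> str:
    """Extract the host from a document's source URL, dropping any 'www.' prefix."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def audit_corpus(docs: list[dict]) -> Counter:
    """Tally training documents by whether their source domain is flagged.

    Each doc is assumed to carry a 'url' field recording its provenance.
    """
    tally = Counter()
    for doc in docs:
        host = domain_of(doc["url"])
        tally["flagged" if host in STATE_COORDINATED_DOMAINS else "unflagged"] += 1
    return tally

if __name__ == "__main__":
    sample = [
        {"url": "https://example-state-outlet.test/story/1"},
        {"url": "https://www.example.org/article/2"},
    ]
    counts = audit_corpus(sample)
    total = sum(counts.values())
    print(f"flagged share of corpus: {counts['flagged'] / total:.1%}")
```

A production audit would also need to handle redirects, mirror sites, and syndicated copies of state-coordinated content, none of which this sketch attempts.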
Editorial notes
Transparency note
AI-assisted drafting; human edited and reviewed.
- AI assisted: Yes
- Human review: Yes
- Last updated:
Risk assessment
The topic involves government influence and potential misinformation.
Sources
1. eurekalert.org: https://www.eurekalert.org/news-releases/1127379
2. nyu.edu: https://www.nyu.edu/about/news-publications/news/2026/may/governments-may-shape-what-ai-chatbots-say-by-shaping-the-data-t.html
3. today.ucsd.edu: https://today.ucsd.edu/story/governments-may-shape-what-ai-chatbots-say-by-shaping-the-web-they-learn-from
About the author
The Avantgarde News Desk covers political rhetoric, AI risk, and editorial analysis for Avantgarde News.