Risks of Anthropomorphizing Machine Learning

AI Mental Verbs Mislead Public on Capabilities

Iowa State University researchers warn that using human-like language for machines creates false expectations.

By Avantgarde News Desk · 1 min read
An illustration of a robot head made of data lines next to the word 'THINK' in blue light.

Photo: Avantgarde News

Researchers at Iowa State University have found that human-like language used to describe AI can mislead the public. The study, published in Technical Communication Quarterly, analyzed how media coverage describes machine learning [1]. Mental verbs such as "think" or "understand" blur the line between human cognition and data processing [1], and this anthropomorphism can create unrealistic expectations about AI reliability [1]. The researchers argue that terms implying human emotion or intent obscure how AI systems actually function [1], and that clearer communication is needed to distinguish machine processing from human thought [1].

Editorial notes

Transparency note: Drafted with LLM; human-edited
AI assisted: Yes
Human review: Yes
Last updated:
Risk assessment: High

The story relies on a single source domain (ScienceDaily), failing the requirement for three independent domains.

Sources


About the author

Avantgarde News Desk covers risks of anthropomorphizing machine learning and editorial analysis for Avantgarde News.