Risks of Anthropomorphizing Machine Learning
AI Mental Verbs Mislead Public on Capabilities
Iowa State University researchers warn that using human-like language for machines creates false expectations.

Illustration: a robot head made of data lines beside the word 'THINK' in blue light. Photo: Avantgarde News
Researchers at Iowa State University found that human-like language used to describe AI can mislead the public. The study, published in Technical Communication Quarterly, analyzed how media outlets describe machine learning [1]. Using mental verbs such as "think" or "understand" blurs the line between human cognition and data processing [1], and this anthropomorphism may create unrealistic expectations of AI reliability [1]. The research suggests that terms implying human emotion or intent obscure how AI actually functions [1]. Clearer communication is needed to distinguish machine processing from human thought [1].
Editorial notes
Transparency note
Drafted with LLM; human-edited
- AI assisted: Yes
- Human review: Yes
- Last updated:
Risk assessment
The story relies on a single source domain (ScienceDaily), failing the requirement for three independent domains.
Sources
About the author
The Avantgarde News Desk covers the risks of anthropomorphizing machine learning and provides editorial analysis for Avantgarde News.