Improving Benchmarks via Social Interaction
AI Accuracy Increases With Human Social Cues
Researchers in Tokyo found that allowing AI to interrupt or remain silent improves complex reasoning performance.

Two digital AI silhouettes in a modern research setting interact with glowing holographic charts and data points.
Photo: Avantgarde News
Researchers at Tokyo's University of Electro-Communications found that AI agents perform better when they mimic human conversational habits [1]. By allowing models to interrupt one another or stay silent, rather than follow the rigid turn-taking of traditional multi-agent setups, the team achieved higher accuracy on the Massive Multitask Language Understanding (MMLU) benchmark [1][2]. The study suggests that these more natural social cues help agents navigate complex reasoning tasks more effectively, and the researchers observed marked improvements in how the agents processed information during collaborative problem-solving [1][2]. The approach could lead to more natural and efficient communication between humans and AI systems [1].
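The reported setup can be sketched in miniature: a toy loop contrasting rigid round-robin turn-taking with a policy where each agent may speak, interrupt, or stay silent. This is a hedged illustration only; the agent names, action set, and random decision rule are assumptions for demonstration, not the study's actual method.

```python
import random

# Illustrative action set (an assumption, not the paper's exact protocol).
ACTIONS = ("speak", "interrupt", "stay_silent")

def round_robin(agents, rounds):
    """Rigid turn-taking: every agent speaks exactly once per round, in order."""
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            transcript.append((agent, "speak"))
    return transcript

def social_cues(agents, rounds, seed=0):
    """Looser protocol: each agent independently chooses to speak,
    interrupt, or stay silent each round; silence yields no utterance."""
    rng = random.Random(seed)
    transcript = []
    for _ in range(rounds):
        for agent in agents:
            action = rng.choice(ACTIONS)
            if action != "stay_silent":
                transcript.append((agent, action))
    return transcript

rigid = round_robin(["a", "b", "c"], rounds=4)
social = social_cues(["a", "b", "c"], rounds=4)
print(len(rigid), len(social))
```

In a real system the `rng.choice` call would be replaced by each model's own decision about whether contributing is worthwhile; the point of the sketch is only that the social protocol produces a variable-length, less predictable transcript than the fixed one.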
Editorial notes
Transparency note: drafted with LLM; human-edited.
- AI assisted: Yes
- Human review: Yes
- Last updated
Risk assessment
Risk level set to high because the story relies on fewer than three independent source domains as recommended by internal guidelines.
Sources
1. livescience.com: https://www.livescience.com/technology/artificial-intelligence/scientists-made-ai-agents-ruder-and-they-performed-better-at-complex-reasoning-tasks
2. thehelper.net: https://www.thehelper.net/threads/scientists-made-ai-agents-ruder-%E2%80%94-and-they-performed-better-at-complex-reasoning-tasks.200595/
About the author
Avantgarde News Desk covers improving benchmarks via social interaction and editorial analysis for Avantgarde News.


